Lincheck benchmarks #250

Open · wants to merge 18 commits into base: develop
96 changes: 96 additions & 0 deletions build.gradle.kts
@@ -6,11 +6,14 @@ import org.jetbrains.kotlin.gradle.*
// atomicfu
buildscript {
val atomicfuVersion: String by project
val serializationPluginVersion: String by project
dependencies {
classpath("org.jetbrains.kotlinx:atomicfu-gradle-plugin:$atomicfuVersion")
classpath("org.jetbrains.kotlin:kotlin-serialization:$serializationPluginVersion")
}
}
apply(plugin = "kotlinx-atomicfu")
apply(plugin = "kotlinx-serialization")

plugins {
java
@@ -29,6 +32,9 @@ kotlin {
allWarningsAsErrors = true
}

// we have to create custom sourceSets in advance before defining corresponding compilation targets
sourceSets.create("jvmBenchmark")

jvm {
withJava()

@@ -39,6 +45,51 @@ kotlin {
val test by compilations.getting {
kotlinOptions.jvmTarget = "1.8"
}

val benchmark by compilations.creating {
Collaborator:

Please take a look at how it is done in the Kotlin coroutines repo.

Collaborator (Author):

Could you please point to the specific part or example of the Gradle scripts you mean, or describe what the problem is with the current code?

The Kotlin coroutines build setup is quite large and complicated; what exactly should I look for?

It looks like they use a subprojects approach, and benchmarks, in particular, is one of the subprojects.

Is that what you mean? If so, what are the benefits of using subprojects for benchmarks compared to having them as another source set?

Collaborator:

Making it a subproject keeps the build script simpler and modular: all the dependencies and hacks necessary for benchmarks are configured only for benchmarks, and in a separate file.

Collaborator:

Also, it is much shorter 🙃

Collaborator (Author):

Ok, will try it.

Collaborator (Author):

I have tried the sub-projects approach, and it hasn't worked out.

The problem is that if we extract the benchmarks into a separate sub-project, then we have to add a dependency from the benchmarks sub-project to Lincheck itself.
But because Lincheck itself is not a sub-project but the top-level project, we cannot do that (e.g., adding dependencies { implementation(rootProject) } does not work).

It seems that the current Lincheck project structure does not adhere to the standard recommended structure, where the top-level build.gradle.kts simply imports sub-projects and defines common configuration, and the individual sub-projects' build.gradle.kts files define the build scripts for sub-components such as lib, app, benchmarks, etc.: https://docs.gradle.org/current/userguide/multi_project_builds.html

The kotlinx-coroutines project follows this convention, so we cannot simply re-use its approach in Lincheck.
It defines a kotlinx-coroutines-core sub-project with the core coroutine library primitives.
The benchmarks sub-project then declares a dependency on the core sub-project.

And it looks like other Kotlin libraries follow the same approach, where the top-level build.gradle.kts is only an "umbrella" script and everything else is split into sub-projects. Most libraries also define a "core" sub-project, named either core or $library-name.

For us, this means that we would have to move the current top-level build.gradle.kts file into a new "core" sub-project, which we could name lincheck, lincheck-core, or core.
But this is a major change, which I think we should implement in a separate PR, not as part of the benchmarks PR.
Also, this change is likely to produce a lot of merge conflicts with active feature branches, so we should plan it carefully and coordinate with the other folks currently working on various features in Lincheck.

I have very limited knowledge of Gradle, so perhaps there is some way to overcome the problems described above, but I do not know how to do it without a major refactoring of the project structure.
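
For illustration, a minimal sketch of the restructuring discussed above, assuming hypothetical sub-project names (nothing below is part of this PR):

// settings.gradle.kts (hypothetical layout)
rootProject.name = "lincheck"
include("lincheck-core") // the current top-level build script would move here
include("benchmarks")

// benchmarks/build.gradle.kts (hypothetical)
dependencies {
    // once Lincheck is a proper sub-project, this dependency becomes expressible
    implementation(project(":lincheck-core"))
}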

kotlinOptions.jvmTarget = "1.8"

defaultSourceSet {
dependencies {
implementation(main.compileDependencyFiles + main.output.classesDirs)
}
}

val benchmarksClassPath =
compileDependencyFiles +
runtimeDependencyFiles +
output.allOutputs +
files("$buildDir/processedResources/jvm/main")

val benchmarksTestClassesDirs = output.classesDirs

// task allowing to run benchmarks using JUnit API
val benchmark = tasks.register<Test>("jvmBenchmark") {
classpath = benchmarksClassPath
testClassesDirs = benchmarksTestClassesDirs
dependsOn("processResources")
}

// task aggregating all benchmarks into a single suite and producing custom reports
val benchmarkSuite = tasks.register<Test>("jvmBenchmarkSuite") {
classpath = benchmarksClassPath
testClassesDirs = benchmarksTestClassesDirs
filter {
includeTestsMatching("LincheckBenchmarkSuite")
}
// pass the properties
systemProperty("statisticsGranularity", System.getProperty("statisticsGranularity"))
// always re-run test suite
outputs.upToDateWhen { false }
dependsOn("processResources")
}

// task producing plots given the benchmarks report file
val benchmarkPlots by tasks.register<JavaExec>("runBenchmarkPlots") {
classpath = benchmarksClassPath
mainClass.set("org.jetbrains.kotlinx.lincheck_benchmark.PlotsKt")
}
}
}

sourceSets {
@@ -80,6 +131,51 @@ kotlin {
implementation("io.mockk:mockk:${mockkVersion}")
}
}

val jvmBenchmark by getting {
kotlin.srcDirs("src/jvm/benchmark")

val junitVersion: String by project
val jctoolsVersion: String by project
val serializationVersion: String by project
val letsPlotVersion: String by project
val letsPlotKotlinVersion: String by project
val cliktVersion: String by project
dependencies {
compileOnly(project(":bootstrap"))
implementation("junit:junit:$junitVersion")
implementation("org.jctools:jctools-core:$jctoolsVersion")
implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:$serializationVersion")
implementation("org.jetbrains.lets-plot:lets-plot-common:$letsPlotVersion")
implementation("org.jetbrains.lets-plot:lets-plot-kotlin-jvm:$letsPlotKotlinVersion")
implementation("com.github.ajalt.clikt:clikt:$cliktVersion")

/* We need the following line because apparently there is some issue
* with the Kotlin Multiplatform gradle plugin and our non-standard project structure
* with an additional jvmBenchmark source set that should depend on the jvmMain source set.
*
* If we use `jvmBenchmark.dependsOn(jvmMain)`, as the documentation suggests (see link below),
* then the imports in IDEA are working, but running the benchmarks
* results in a `java.lang.NoSuchMethodError` exception on a call to any method from Lincheck (jvmMain).
* It looks like there are some issues with classloading, as the `jvmBenchmark` classes do not see
* classes from `jvmMain`.
*
* https://kotlinlang.org/docs/multiplatform-advanced-project-structure.html#dependson-and-source-set-hierarchies
*
* If we do not use `jvmBenchmark.dependsOn(jvmMain)`,
* then gradle correctly compiles the benchmarks, and they run with no errors.
* But the imports from `lincheck` package in the benchmarks
* do not work in IDEA (so IDEA cannot resolve classes from `jvmMain`).
*
* To bypass this, we add an implementation dependency on
* the root project for the `jvmBenchmark` source set.
* This way the benchmarks compile and run with no errors,
* and imports resolve correctly in IDEA.
*/
implementation(rootProject)
}
}
// jvmBenchmark.dependsOn(jvmMain)
}
}
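
Given the tasks registered above, the benchmarks can presumably be invoked with ./gradlew jvmBenchmark or ./gradlew jvmBenchmarkSuite, the latter accepting the reporting granularity via -DstatisticsGranularity=... (the accepted values are not shown in this diff); the plots are then produced with ./gradlew runBenchmarkPlots.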

6 changes: 6 additions & 0 deletions gradle.properties
@@ -22,9 +22,15 @@ withEventIdSequentialCheck=false

kotlinVersion=1.9.21
kotlinxCoroutinesVersion=1.7.3

asmVersion=9.6
atomicfuVersion=0.20.2
byteBuddyVersion=1.14.12
serializationPluginVersion=1.6.21
serializationVersion=1.3.3
letsPlotVersion=2.5.0
letsPlotKotlinVersion=4.0.0
cliktVersion=3.4.0

junitVersion=4.13.1
jctoolsVersion=3.3.0
82 changes: 82 additions & 0 deletions AbstractLincheckBenchmark.kt
@@ -0,0 +1,82 @@
/*
* Lincheck
*
* Copyright (C) 2019 - 2023 JetBrains s.r.o.
*
* This Source Code Form is subject to the terms of the
* Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed
* with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/

package org.jetbrains.kotlinx.lincheck_benchmark

import org.jetbrains.kotlinx.lincheck.*
import org.jetbrains.kotlinx.lincheck.strategy.*
import org.jetbrains.kotlinx.lincheck.strategy.managed.modelchecking.ModelCheckingOptions
import org.jetbrains.kotlinx.lincheck.strategy.stress.StressOptions
import kotlin.reflect.KClass
import org.junit.Test


abstract class AbstractLincheckBenchmark(
private vararg val expectedFailures: KClass<out LincheckFailure>
) {

@Test(timeout = TIMEOUT)
fun benchmarkWithStressStrategy(): Unit = StressOptions().run {
invocationsPerIteration(5_000)
configure()
runTest()
}

@Test(timeout = TIMEOUT)
fun benchmarkWithModelCheckingStrategy(): Unit = ModelCheckingOptions().run {
invocationsPerIteration(5_000)
configure()
runTest()
}

private fun <O : Options<O, *>> O.runTest() {
val statisticsTracker = LincheckStatisticsTracker(
granularity = benchmarksReporter.granularity
)
val klass = this@AbstractLincheckBenchmark::class
val checker = LinChecker(klass.java, this)
val failure =
@Suppress("INVISIBLE_MEMBER") // `checkImpl` API is currently internal in the Lincheck module
checker.checkImpl(customTracker = statisticsTracker)
if (failure == null) {
assert(expectedFailures.isEmpty()) {
"This test should fail, but no error has been occurred (see the logs for details)"
}
} else {
assert(expectedFailures.contains(failure::class)) {
"This test has failed with an unexpected error: \n $failure"
}
}
val statistics = statisticsTracker.toBenchmarkStatistics(
name = klass.simpleName!!.removeSuffix("Benchmark"),
strategy = when (this) {
is StressOptions -> LincheckStrategy.Stress
is ModelCheckingOptions -> LincheckStrategy.ModelChecking
else -> throw IllegalStateException("Unsupported Lincheck strategy")
}
)
benchmarksReporter.registerBenchmark(statistics)
}

private fun <O : Options<O, *>> O.configure(): Unit = run {
iterations(30)
threads(3)
actorsPerThread(2)
actorsBefore(2)
actorsAfter(2)
minimizeFailedScenario(false)
customize()
}

internal open fun <O: Options<O, *>> O.customize() {}

}

private const val TIMEOUT = 5 * 60 * 1000L // 5 minutes
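
A concrete benchmark would then extend this base class and declare its operations. A minimal hypothetical example (the class and data structure below are illustrative, not part of this PR):

import org.jetbrains.kotlinx.lincheck.annotations.Operation
import java.util.concurrent.ConcurrentLinkedQueue

// Hypothetical benchmark: exercises ConcurrentLinkedQueue under both strategies;
// the name reported would be "ConcurrentLinkedQueue" after the suffix is stripped.
class ConcurrentLinkedQueueBenchmark : AbstractLincheckBenchmark() {
    private val queue = ConcurrentLinkedQueue<Int>()

    @Operation
    fun offer(x: Int): Boolean = queue.offer(x)

    @Operation
    fun poll(): Int? = queue.poll()
}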
@@ -0,0 +1,130 @@
/*
* Lincheck
*
* Copyright (C) 2019 - 2023 JetBrains s.r.o.
*
* This Source Code Form is subject to the terms of the
* Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed
* with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/

@file:Suppress("INVISIBLE_REFERENCE", "INVISIBLE_MEMBER")

package org.jetbrains.kotlinx.lincheck_benchmark

import org.jetbrains.kotlinx.lincheck.*
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json
import kotlinx.serialization.json.encodeToStream
import kotlin.time.Duration.Companion.nanoseconds
import kotlin.time.DurationUnit
import java.io.File


typealias BenchmarkID = String

@Serializable
data class BenchmarksReport(
val data: Map<String, BenchmarkStatistics>
)

@Serializable
data class BenchmarkStatistics(
val name: String,
val strategy: LincheckStrategy,
val runningTimeNano: Long,
val iterationsCount: Int,
val invocationsCount: Int,
val scenariosStatistics: List<ScenarioStatistics>,
val invocationsRunningTimeNano: LongArray,
)

@Serializable
data class ScenarioStatistics(
val threads: Int,
val operations: Int,
val invocationsCount: Int,
val runningTimeNano: Long,
val invocationAverageTimeNano: Long,
val invocationStandardErrorTimeNano: Long,
)

val BenchmarksReport.benchmarkIDs: List<BenchmarkID>
get() = data.keys.toList()

val BenchmarksReport.benchmarkNames: List<String>
get() = data.map { (_, statistics) -> statistics.name }.distinct()

val BenchmarkStatistics.id: BenchmarkID
get() = "$name-$strategy"

fun LincheckStatistics.toBenchmarkStatistics(name: String, strategy: LincheckStrategy) = BenchmarkStatistics(
name = name,
strategy = strategy,
runningTimeNano = runningTimeNano,
iterationsCount = iterationsCount,
invocationsCount = invocationsCount,
invocationsRunningTimeNano = iterationsStatistics
.values.map { it.invocationsRunningTimeNano }
.flatten(),
scenariosStatistics = iterationsStatistics
.values.groupBy { (it.scenario.nThreads to it.scenario.parallelExecution[0].size) }
.map { (key, statistics) ->
val (threads, operations) = key
val invocationsRunningTime = statistics
.map { it.invocationsRunningTimeNano }
.flatten()
val invocationsCount = statistics.sumOf { it.invocationsCount }
val runningTimeNano = statistics.sumOf { it.runningTimeNano }
val invocationAverageTimeNano = when {
// handle the case when per-invocation statistics is not gathered
invocationsRunningTime.isEmpty() -> (runningTimeNano.toDouble() / invocationsCount).toLong()
else -> invocationsRunningTime.average().toLong()
}
val invocationStandardErrorTimeNano = when {
// if per-invocation statistics is not gathered we cannot compute standard error
invocationsRunningTime.isEmpty() -> -1L
else -> invocationsRunningTime.standardError().toLong()
}
ScenarioStatistics(
threads = threads,
operations = operations,
invocationsCount = invocationsCount,
runningTimeNano = runningTimeNano,
invocationAverageTimeNano = invocationAverageTimeNano,
invocationStandardErrorTimeNano = invocationStandardErrorTimeNano,
)
}
)
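
The flatten() and standardError() helpers used above are not part of this hunk and are presumably defined elsewhere in the PR; minimal sketches of what they might look like (the signatures are assumptions):

// Hypothetical sketch: concatenates a list of LongArrays into a single LongArray.
fun List<LongArray>.flatten(): LongArray {
    val result = LongArray(sumOf { it.size })
    var offset = 0
    for (array in this) {
        array.copyInto(result, destinationOffset = offset)
        offset += array.size
    }
    return result
}

// Hypothetical sketch: standard error of the mean = sample stddev / sqrt(n);
// assumes at least two samples.
fun LongArray.standardError(): Double {
    val mean = average()
    val variance = sumOf { (it - mean) * (it - mean) } / (size - 1)
    return kotlin.math.sqrt(variance / size)
}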

@OptIn(kotlinx.serialization.ExperimentalSerializationApi::class)
fun BenchmarksReport.saveJson(filename: String) {
val file = File("$filename.json")
file.outputStream().use { outputStream ->
Json.encodeToStream(this, outputStream)
}
}

// saves the report in a simple text format for testing the integration with ij-perf dashboards
fun BenchmarksReport.saveTxt(filename: String) {
val text = StringBuilder().apply {
appendReportHeader()
for (benchmarkStatistics in data.values) {
// for ij-perf reports, we currently track only the benchmarks' overall running time
appendBenchmarkRunningTime(benchmarkStatistics)
}
}.toString()
val file = File("$filename.txt")
file.writeText(text, charset = Charsets.US_ASCII)
}

private fun StringBuilder.appendReportHeader() {
appendLine("Lincheck benchmarks suite")
}

private fun StringBuilder.appendBenchmarkRunningTime(benchmarkStatistics: BenchmarkStatistics) {
with(benchmarkStatistics) {
val runningTimeMs = runningTimeNano.nanoseconds.toLong(DurationUnit.MILLISECONDS)
appendLine("${strategy}.${name}.runtime.ms $runningTimeMs")
}
}
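
Given the helpers above, a generated .txt report would look like this (benchmark names and timings are illustrative):

Lincheck benchmarks suite
Stress.ConcurrentLinkedQueue.runtime.ms 4821
ModelChecking.ConcurrentLinkedQueue.runtime.ms 15632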