8325673: GenShen: Share Reserves between Old and Young Collector #395

Closed: wants to merge 82 commits
Changes from 75 commits

Commits (82)
c933d75
Share reserves between Young Collector and Old Collector
kdnilsen Feb 12, 2024
dd2a179
Refinements and instrumentation to diagnose misbehavior
kdnilsen Feb 19, 2024
d91bcef
Allow old-gen to expand when mutator memory is available
kdnilsen Feb 12, 2024
c2cc3f6
Fix merge conflicts
kdnilsen Feb 19, 2024
a794ca1
Refine calculation of max_old_reserve
kdnilsen Feb 20, 2024
6509bde
Reduce default value of ShenandoahOldEvacRatioPercent
kdnilsen Feb 21, 2024
c8e4555
Turn off debug instrumentation
kdnilsen Feb 21, 2024
ce5d335
Fix multiple errors in implementation of freeset rebuild
kdnilsen Feb 25, 2024
7bb1d38
Remove dead code for inelastic plabs
kdnilsen Feb 26, 2024
8bc4367
Revert "Remove dead code for inelastic plabs"
kdnilsen Feb 26, 2024
99cce53
Round LAB sizes down rather than up to force alignment
kdnilsen Feb 26, 2024
11b26bb
Revert "Round LAB sizes down rather than up to force alignment"
kdnilsen Feb 26, 2024
941d8aa
Merge branch 'openjdk:master' into master
kdnilsen Feb 27, 2024
f0b15ac
Make evacuation reserve quantities always valid
kdnilsen Feb 27, 2024
39c5885
Merge branch 'openjdk:master' into master
kdnilsen Mar 13, 2024
28a382b
Make satb-mode Info logging less verbose
kdnilsen Mar 13, 2024
a43675a
Merge branch 'openjdk:master' into master
kdnilsen Apr 10, 2024
d881300
Change behavior of max_old and min_old
kdnilsen Apr 11, 2024
c2cb1b7
Revert "Change behavior of max_old and min_old"
kdnilsen Apr 11, 2024
141fec1
Merge branch 'openjdk:master' into master
kdnilsen Apr 15, 2024
bac08f0
Merge branch 'openjdk:master' into master
kdnilsen Apr 23, 2024
44c0c41
Merge remote-tracking branch 'origin/master' into share-collector-res…
kdnilsen Apr 29, 2024
84f27d7
Merge branch 'openjdk:master' into master
kdnilsen May 15, 2024
fecd9a0
Fixup some conflicts introduced by merge from upstream
kdnilsen May 31, 2024
669be0b
Do not plan to xfer Collector reserves unless they are unaffiliated
kdnilsen Jun 3, 2024
fb259d3
Resolve regressions with TestThreadFailure
kdnilsen Jun 4, 2024
d32f428
Change default ratio of old vs young evacuation
kdnilsen Jun 5, 2024
a57805f
Remove debug instrumentation
kdnilsen Jun 6, 2024
118f5b1
Merge branch 'openjdk:master' into master
kdnilsen Jun 6, 2024
6f57068
Merge remote-tracking branch 'origin/master' into share-collector-res…
kdnilsen Jun 6, 2024
358d2f7
Top off old evacuation regions for mixed evacuations
kdnilsen Jun 10, 2024
8fbb0f5
Change default percentage of old-gen evacuation
kdnilsen Jun 10, 2024
f90ea26
Change default old-gen ratio and comment
kdnilsen Jun 10, 2024
1cd1105
Remove over-zealous assert and replace with comment
kdnilsen Jun 10, 2024
786b27f
Update default value and comment
kdnilsen Jun 11, 2024
5312029
Merge branch 'openjdk:master' into master
kdnilsen Jun 11, 2024
10d992d
Further adjustments to default Old/Young Ratio
kdnilsen Jun 12, 2024
56567b0
Merge branch 'openjdk:master' into master
kdnilsen Jun 13, 2024
f3c6e09
Performance improvements
kdnilsen Jun 17, 2024
85a0d90
Always allow promotions into fragmented old-gen
kdnilsen Jun 17, 2024
f87a549
Set promo_reserve to max of existing fragmented old-gen and promo need
kdnilsen Jun 18, 2024
eb0ec64
Fail faster with OOME if alloc fails following full gc
kdnilsen Jun 18, 2024
1c59394
Fix over-zealous assertion and broken code surrounding it
kdnilsen Jun 19, 2024
4dcbdce
A few more fixes to computation of old-gen sizes at end of gc
kdnilsen Jun 19, 2024
c407dbd
Adjust collector reserves downward when resources are insufficient
kdnilsen Jun 21, 2024
39e02f1
Fix white space
kdnilsen Jun 24, 2024
d02da9c
Merge branch 'master' of https://git.openjdk.org/shenandoah into shar…
kdnilsen Jun 24, 2024
e1aa848
Fix argument list after manual merge conflict resolution
kdnilsen Jun 24, 2024
25ee3f5
Merge branch 'openjdk:master' into master
kdnilsen Jun 26, 2024
54df079
Do not access young_gen or old_gen in non-generational mode
kdnilsen Jun 27, 2024
60a51cb
Merge remote-tracking branch 'origin/master' into share-collector-res…
kdnilsen Jun 28, 2024
2b3afe7
Fix TestGCOldWithShenandoah#generational regression
kdnilsen Jul 1, 2024
52a3b36
Remove reset_evacuation_reserves
kdnilsen Jul 3, 2024
3f514af
reserve until available in partition is sufficient
kdnilsen Jul 3, 2024
1378ef6
Include old_cset regions in old_available when computing reserves
kdnilsen Jul 3, 2024
04d9c08
Reset live_bytes_in_unprocessed_candidates when abandon_collection_ca…
kdnilsen Jul 4, 2024
7ab343b
Rebuild free set consistently following abbreviated and old mark cycles
kdnilsen Jul 5, 2024
b61679a
Fix up some assertions
kdnilsen Jul 5, 2024
6cef4b4
Do not top-off beyond available unaffiliated young regions
kdnilsen Jul 5, 2024
884c48e
Fix multiple bugs detected after merge from master
kdnilsen Jul 8, 2024
c076aa3
Merge branch 'openjdk:master' into master
kdnilsen Jul 8, 2024
6e851cf
Merge remote-tracking branch 'origin/master' into share-collector-res…
kdnilsen Jul 8, 2024
e69f9ac
Fix whitespace
kdnilsen Jul 8, 2024
c2dda1b
Fix budgeting error during freeset rebuild
kdnilsen Jul 8, 2024
9ea1056
Ignore generation soft capacities when adjusting generation sizes
kdnilsen Jul 11, 2024
c73e723
Verifier should only count non-trashed committed regions
kdnilsen Jul 13, 2024
e5c1b69
Turn off instrumentation
kdnilsen Jul 13, 2024
34704e4
Remove debug instrumentation and deprecated code
kdnilsen Jul 15, 2024
9e14826
Fix whitespace
kdnilsen Jul 15, 2024
33eacea
Remove unreferenced local variable
kdnilsen Jul 15, 2024
3e38c8a
Remove declaration of unused variables
kdnilsen Jul 16, 2024
d05885a
Use mixed evac rather than piggyback to describe old-gen evacuations
kdnilsen Jul 16, 2024
406d347
Simplify arguments by using instance variables in ShenandoahOldHeuris…
kdnilsen Jul 16, 2024
ee2ab01
Remove unreferenced variables
kdnilsen Jul 17, 2024
21a5d32
Improve comment
kdnilsen Jul 18, 2024
02ea566
Better comments as requested by code review
kdnilsen Jul 26, 2024
1058837
Simplify invocations of freeset rebuild when possible
kdnilsen Jul 26, 2024
7f01a7f
Remove incorrect and unnecessary comments
kdnilsen Jul 26, 2024
e009c35
Simplify code to rebuild free set after abbreviated and old GC
kdnilsen Jul 26, 2024
699f409
Cleanups requested by code review
kdnilsen Jul 27, 2024
ff99de7
Merge branch 'openjdk:master' into master
kdnilsen Aug 12, 2024
0e555e8
Merge remote-tracking branch 'origin/master' into share-collector-res…
kdnilsen Aug 12, 2024
@@ -170,12 +170,26 @@ void ShenandoahGenerationalHeuristics::choose_collection_set(ShenandoahCollectio
bool doing_promote_in_place = (humongous_regions_promoted + regular_regions_promoted_in_place > 0);
if (doing_promote_in_place || (preselected_candidates > 0) || (immediate_percent <= ShenandoahImmediateThreshold)) {
// Only young collections need to prime the collection set.

bool need_to_finalize_mixed = false;
if (_generation->is_young()) {
need_to_finalize_mixed = heap->old_generation()->heuristics()->prime_collection_set(collection_set);
}

// Call the subclasses to add young-gen regions into the collection set.
choose_collection_set_from_regiondata(collection_set, candidates, cand_idx, immediate_garbage + free);

if (_generation->is_young()) {
// Especially when young-gen trigger is expedited in order to finish mixed evacuations, there may not be
// enough consolidated garbage to make effective use of young-gen evacuation reserve. If there is still
// young-gen reserve available following selection of the young-gen collection set, see if we can use
// this memory to expand the old-gen evacuation collection set.
need_to_finalize_mixed |=
heap->old_generation()->heuristics()->top_off_collection_set();
if (need_to_finalize_mixed) {
heap->old_generation()->heuristics()->finalize_mixed_evacs();
}
}
}

if (collection_set->has_old_regions()) {
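The control flow in this hunk primes the collection set with old-gen candidates, chooses young regions, then tries to top off the old-gen collection set with leftover young reserve, finalizing mixed-evacuation bookkeeping at most once. A minimal toy model of that sequencing (hypothetical stand-in struct and return values; the real calls live in ShenandoahOldHeuristics):

```cpp
#include <cassert>

// Toy stand-in for the old-generation heuristics object. The booleans only
// record which steps ran; the real methods mutate a ShenandoahCollectionSet.
struct OldHeuristicsModel {
  bool primed = false;
  bool topped_off = false;
  bool finalized = false;

  // Mirrors prime_collection_set(): returns true when old regions were added,
  // meaning mixed-evacuation bookkeeping must be finalized later.
  bool prime_collection_set() { primed = true; return true; }

  // Mirrors top_off_collection_set(): may add more old regions if young
  // reserve remains after young cset selection; returns true when it did.
  bool top_off_collection_set() { topped_off = true; return true; }

  void finalize_mixed_evacs() { finalized = true; }
};

// The young-cycle sequence from the patch: prime, choose young regions,
// top off, then finalize exactly once if either step touched old regions.
void young_cycle(OldHeuristicsModel& old_h) {
  bool need_to_finalize_mixed = old_h.prime_collection_set();
  // ... choose_collection_set_from_regiondata(...) would run here ...
  need_to_finalize_mixed |= old_h.top_off_collection_set();
  if (need_to_finalize_mixed) {
    old_h.finalize_mixed_evacs();
  }
}
```

The point of the `|=` accumulation is that finalization happens once, after both opportunities to grow the old cset, rather than after each.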
@@ -81,96 +81,138 @@ void ShenandoahGlobalHeuristics::choose_global_collection_set(ShenandoahCollecti
size_t cur_young_garbage) const {
auto heap = ShenandoahGenerationalHeap::heap();
size_t region_size_bytes = ShenandoahHeapRegion::region_size_bytes();
size_t young_capacity = heap->young_generation()->max_capacity();
size_t old_capacity = heap->old_generation()->max_capacity();
size_t garbage_threshold = region_size_bytes * ShenandoahGarbageThreshold / 100;
size_t ignore_threshold = region_size_bytes * ShenandoahIgnoreGarbageThreshold / 100;
const uint tenuring_threshold = heap->age_census()->tenuring_threshold();

size_t young_evac_reserve = heap->young_generation()->get_evacuation_reserve();
size_t old_evac_reserve = heap->old_generation()->get_evacuation_reserve();


size_t unaffiliated_young_regions = heap->young_generation()->free_unaffiliated_regions();
size_t unaffiliated_young_memory = unaffiliated_young_regions * region_size_bytes;

size_t unaffiliated_old_regions = heap->old_generation()->free_unaffiliated_regions();
size_t unaffiliated_old_memory = unaffiliated_old_regions * region_size_bytes;

// Figure out how many unaffiliated regions are dedicated to Collector and OldCollector reserves. Let these
// be shuffled between young and old generations in order to expedite evacuation of whichever regions have the
// most garbage, regardless of whether these garbage-first regions reside in young or old generation.
// Excess reserves will be transferred back to the mutator after collection set has been chosen. At the end
// of evacuation, any reserves not consumed by evacuation will also be transferred to the mutator free set.
size_t shared_reserve_regions = 0;
if (young_evac_reserve > unaffiliated_young_memory) {
young_evac_reserve -= unaffiliated_young_memory;
shared_reserve_regions += unaffiliated_young_memory / region_size_bytes;
} else {
size_t delta_regions = young_evac_reserve / region_size_bytes;
shared_reserve_regions += delta_regions;
young_evac_reserve -= delta_regions * region_size_bytes;
}
if (old_evac_reserve > unaffiliated_old_memory) {
old_evac_reserve -= unaffiliated_old_memory;
shared_reserve_regions += unaffiliated_old_memory / region_size_bytes;
} else {
size_t delta_regions = old_evac_reserve / region_size_bytes;
shared_reserve_regions += delta_regions;
old_evac_reserve -= delta_regions * region_size_bytes;
}

// We'll affiliate these unaffiliated regions with either old or young, depending on need.
size_t shared_reserves = shared_reserve_regions * region_size_bytes;
size_t committed_from_shared_reserves = 0;
size_t max_young_cset = (size_t) (young_evac_reserve / ShenandoahEvacWaste);
size_t young_cur_cset = 0;
size_t max_old_cset = (size_t) (old_evac_reserve / ShenandoahOldEvacWaste);
size_t old_cur_cset = 0;

size_t promo_bytes = 0;
size_t old_evac_bytes = 0;
size_t young_evac_bytes = 0;

size_t max_total_cset = (max_young_cset + max_old_cset +
(size_t) (shared_reserve_regions * region_size_bytes) / ShenandoahOldEvacWaste);
size_t free_target = ((young_capacity + old_capacity) * ShenandoahMinFreeThreshold) / 100 + max_total_cset;
size_t min_garbage = (free_target > actual_free) ? (free_target - actual_free) : 0;

log_info(gc, ergo)("Adaptive CSet Selection for GLOBAL. Max Young Evacuation: " SIZE_FORMAT
"%s, Max Old Evacuation: " SIZE_FORMAT "%s, Actual Free: " SIZE_FORMAT "%s.",
"%s, Max Old Evacuation: " SIZE_FORMAT "%s, Discretionary additional evacuation: " SIZE_FORMAT
"%s, Actual Free: " SIZE_FORMAT "%s.",
byte_size_in_proper_unit(max_young_cset), proper_unit_for_byte_size(max_young_cset),
byte_size_in_proper_unit(max_old_cset), proper_unit_for_byte_size(max_old_cset),
byte_size_in_proper_unit(shared_reserves), proper_unit_for_byte_size(shared_reserves),
byte_size_in_proper_unit(actual_free), proper_unit_for_byte_size(actual_free));

size_t cur_garbage = cur_young_garbage;
for (size_t idx = 0; idx < size; idx++) {
ShenandoahHeapRegion* r = data[idx]._region;
if (cset->is_preselected(r->index())) {
fatal("There should be no preselected regions during GLOBAL GC");
continue;
}
bool add_region = false;
size_t region_garbage = r->garbage();
size_t new_garbage = cur_garbage + region_garbage;
bool add_regardless = (region_garbage > ignore_threshold) && (new_garbage < min_garbage);
if (r->is_old() || (r->age() >= tenuring_threshold)) {
if (add_regardless || (region_garbage > garbage_threshold)) {
size_t live_bytes = r->get_live_data_bytes();
size_t new_cset = old_cur_cset + r->get_live_data_bytes();
// May need multiple reserve regions to evacuate a single region, depending on live data bytes and ShenandoahOldEvacWaste
size_t orig_max_old_cset = max_old_cset;
size_t proposed_old_region_consumption = 0;
while ((new_cset > max_old_cset) && (committed_from_shared_reserves < shared_reserves)) {
committed_from_shared_reserves += region_size_bytes;
proposed_old_region_consumption++;
max_old_cset += region_size_bytes / ShenandoahOldEvacWaste;
}
// We already know: add_regardless || region_garbage > garbage_threshold
if (new_cset <= max_old_cset) {
add_region = true;
old_cur_cset = new_cset;
cur_garbage = new_garbage;
if (r->is_old()) {
old_evac_bytes += live_bytes;
} else {
promo_bytes += live_bytes;
}
} else {
// We failed to sufficiently expand old, so unwind proposed expansion
max_old_cset = orig_max_old_cset;
committed_from_shared_reserves -= proposed_old_region_consumption * region_size_bytes;
}
}
} else {
assert(r->is_young() && (r->age() < tenuring_threshold), "DeMorgan's law (assuming r->is_affiliated)");
if (add_regardless || (region_garbage > garbage_threshold)) {
size_t live_bytes = r->get_live_data_bytes();
size_t new_cset = young_cur_cset + live_bytes;
// May need multiple reserve regions to evacuate a single region, depending on live data bytes and ShenandoahEvacWaste
size_t orig_max_young_cset = max_young_cset;
size_t proposed_young_region_consumption = 0;
while ((new_cset > max_young_cset) && (committed_from_shared_reserves < shared_reserves)) {
committed_from_shared_reserves += region_size_bytes;
proposed_young_region_consumption++;
max_young_cset += region_size_bytes / ShenandoahEvacWaste;
}
// We already know: add_regardless || region_garbage > garbage_threshold
if (new_cset <= max_young_cset) {
add_region = true;
young_cur_cset = new_cset;
cur_garbage = new_garbage;
young_evac_bytes += live_bytes;
} else {
// We failed to sufficiently expand young, so unwind proposed expansion
max_young_cset = orig_max_young_cset;
committed_from_shared_reserves -= proposed_young_region_consumption * region_size_bytes;
}
}
}
if (add_region) {
cset->add_region(r);
}
}

heap->young_generation()->set_evacuation_reserve((size_t) (young_evac_bytes * ShenandoahEvacWaste));
heap->old_generation()->set_evacuation_reserve((size_t) (old_evac_bytes * ShenandoahOldEvacWaste));
heap->old_generation()->set_promoted_reserve((size_t) (promo_bytes * ShenandoahPromoEvacWaste));
}
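The shared-reserve accounting in this hunk can be sketched in isolation: unaffiliated regions backing either generation's evacuation reserve are pooled, and either generation's cset budget may grow one region at a time from that pool, unwinding the growth if a region still does not fit. A simplified standalone model (hypothetical `expand`/`unwind` helpers; the waste factor parameter stands in for ShenandoahEvacWaste / ShenandoahOldEvacWaste, and the byte values in the usage are made up):

```cpp
#include <cassert>
#include <cstddef>

// Simplified model of the shared-reserve bookkeeping in
// choose_global_collection_set. Field names mirror the patch.
struct SharedReserves {
  size_t region_size_bytes;
  size_t shared_reserves;                 // bytes usable by either generation
  size_t committed_from_shared_reserves;  // bytes already claimed

  // Try to grow max_cset (a budget already scaled down by the waste factor)
  // until it covers new_cset, claiming one shared region per iteration.
  // Returns true on success; proposed_regions reports how many regions were
  // claimed so a failed attempt can be unwound by the caller.
  bool expand(size_t& max_cset, size_t new_cset, double evac_waste,
              size_t& proposed_regions) {
    proposed_regions = 0;
    while (new_cset > max_cset &&
           committed_from_shared_reserves < shared_reserves) {
      committed_from_shared_reserves += region_size_bytes;
      proposed_regions++;
      max_cset += (size_t)(region_size_bytes / evac_waste);
    }
    return new_cset <= max_cset;
  }

  // Undo a failed expansion so later regions can still use the shared pool.
  void unwind(size_t& max_cset, size_t orig_max_cset, size_t proposed_regions) {
    max_cset = orig_max_cset;
    committed_from_shared_reserves -= proposed_regions * region_size_bytes;
  }
};
```

With 1 MiB regions and a 3 MiB shared pool, covering a 1.5 MiB candidate claims two regions; a later 5 MiB candidate exhausts the remaining pool, fails, and is unwound, leaving the pool as it was after the first success. This mirrors the patch's design choice of committing reserves speculatively per region and rolling back, rather than pre-partitioning the pool between generations.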