Fix shrink_in_place #92

Merged
merged 3 commits into gnzlbg:master on Nov 9, 2018

Conversation

@gnzlbg (Owner) commented Nov 6, 2018

The current shrink_in_place implementation is incorrect. It returns Ok if the usable_size returned by xallocx is >= new_size.

However, xallocx returns the usable_size of the original allocation, which is >= new_size, if:

  • the allocation cannot be shrunk in place, e.g., because new_size lands in a different size class; we must return an error here, since new_size cannot then be used to deallocate the allocation, yet Alloc::shrink_in_place users are allowed to do that on success, or
  • new_size is in the same size class as the original allocation, in which case the allocation was shrunk by zero bytes and new_size can be used to deallocate it, since it lands in the same size class.
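
A minimal sketch of the stricter check, for illustration only (it calls the raw jemalloc non-standard API via the libc crate; the unprefixed `xallocx`/`nallocx` symbol names, the `try_shrink_in_place` helper, and the `flags` plumbing are assumptions, not the crate's actual code):

```rust
use libc::{c_int, c_void};

extern "C" {
    // jemalloc non-standard API; the symbols may carry a `je_` prefix
    // depending on how jemalloc was built.
    fn xallocx(ptr: *mut c_void, size: usize, extra: usize, flags: c_int) -> usize;
    fn nallocx(size: usize, flags: c_int) -> usize;
}

/// Hypothetical shrink-in-place helper: report success only if the allocation
/// now sits in `new_size`'s size class, so that `new_size` is valid for sized
/// deallocation afterwards.
unsafe fn try_shrink_in_place(ptr: *mut c_void, new_size: usize, flags: c_int) -> Result<(), ()> {
    let usable_size = xallocx(ptr, new_size, 0, flags);
    // `usable_size >= new_size` alone is not enough: xallocx also reports the
    // *old* usable size when it could not shrink across size classes.
    if usable_size == nallocx(new_size, flags) {
        Ok(())
    } else {
        Err(())
    }
}
```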

@pnkfelix commented Nov 6, 2018

cc #64

@gnzlbg changed the title from "Add a test for shrink_in_place" to "Fix shrink_in_place" on Nov 6, 2018
@davidtgoldblatt commented

+cc @interwq for the xallocx question.

@interwq commented Nov 8, 2018

The xallocx description looks right to me. Any questions or comments on the behavior / performance there? The only thought I have right now is that we could probably do a much better job of shrinking large allocations (across size classes, but staying large), e.g., shrinking from 128k to 64k; does that seem like a common workload for you, @gnzlbg? For small size classes I don't think much can be done, really.

@pnkfelix commented Nov 8, 2018

@interwq since you asked if there are any questions about the behavior of xallocx: the new logic being proposed here returns Ok(()) if the returned usable_size is any value less than layout.size().

The assumption that I believe the client is going to make is that if they get Ok(()) back, they can subsequently dealloc the resized block passing its size as new_size.

So, my question: is there any potential scenario where:

  1. client requests new_size
  2. xallocx returns usable_size, where new_size < usable_size && usable_size < layout.size(), and
  3. it is not sound to use new_size as the associated size of the block when calling dealloc, and one must instead use usable_size to get correct deallocation behavior?
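
To make the size-class concern concrete, one can probe nallocx directly (a hedged sketch; it assumes jemalloc is the linked allocator, uses the unprefixed symbol name via the libc crate, and the example sizes are arbitrary):

```rust
use libc::c_int;

extern "C" {
    fn nallocx(size: usize, flags: c_int) -> usize;
}

fn main() {
    // Arbitrary example sizes: the question is whether new_size and the
    // usable_size reported after a shrink can ever land in different size
    // classes, which would make sized deallocation with new_size unsound.
    let (old_size, new_size) = (4096usize, 100usize);
    unsafe {
        println!("size class of old_size: {}", nallocx(old_size, 0));
        println!("size class of new_size: {}", nallocx(new_size, 0));
    }
}
```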

@interwq commented Nov 9, 2018

@pnkfelix: in the scenario you described, if new_size is not in the same size class as usable_size, then it won't be correct to use it for deallocation. The diff looks right to me, as it queries nallocx before returning Ok.

@pnkfelix commented Nov 9, 2018

I don’t think it calls nallocx for the Ok(()) on line 204.

@gnzlbg it sounds to me like you might need to fix that?

@pnkfelix commented Nov 9, 2018

(Maybe the easiest fix is to check usable_size == new_size there instead of usable_size < layout.size())

@gnzlbg (Owner, Author) commented Nov 9, 2018

@interwq Given:

  • a pointer ptr to an allocation of size old_size,
  • a request to shrink the allocation in place to new_size such that new_size < old_size && nallocx(new_size) < nallocx(old_size),

we call some_usable_size = xallocx(ptr, new_size, 0, flags).

Iff some_usable_size < old_size:

  • is some_usable_size < nallocx(old_size)? I think so; this means the allocation was shrunk.
  • is some_usable_size == nallocx(new_size)? That is, if xallocx returns a usable size that is smaller than that of the old allocation, can we interpret that as a success in shrinking the allocation to the size class of new_size, such that the allocation can be deallocated via sized deallocation by passing new_size?

If the answer to both these questions is yes, the current diff should be correct, @pnkfelix.

If the answer to the first question is yes and the second question is no, then in the branch @pnkfelix mentioned (the first one where we currently check some_usable_size < old_size) we would also have to check that some_usable_size == nallocx(new_size).

If the answer to both questions is no, then the whole diff is probably wrong, since we are assuming that some_usable_size < old_size implies some_usable_size < nallocx(old_size) in order to detect the case where the allocation was not shrunk because new_size happens to be in the same size class as old_size.
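
The extra guard described above for the yes/no case would look roughly like this (a sketch with hypothetical names and flags, not the actual diff; the branch handling the same-size-class case is omitted):

```rust
use libc::{c_int, c_void};

extern "C" {
    // jemalloc non-standard API; symbol names may be prefixed depending on the build.
    fn xallocx(ptr: *mut c_void, size: usize, extra: usize, flags: c_int) -> usize;
    fn nallocx(size: usize, flags: c_int) -> usize;
}

/// Sketch of the branch discussed above, with the extra nallocx guard: only
/// report success when the shrunk allocation's usable size matches
/// new_size's size class.
unsafe fn shrink_branch(ptr: *mut c_void, old_size: usize, new_size: usize, flags: c_int) -> Result<(), ()> {
    let some_usable_size = xallocx(ptr, new_size, 0, flags);
    if some_usable_size < old_size && some_usable_size == nallocx(new_size, flags) {
        return Ok(());
    }
    // The case where new_size lands in the same size class as old_size (and
    // xallocx reports the old usable size) would be handled elsewhere.
    Err(())
}
```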

@interwq commented Nov 9, 2018

Iff some_usable_size < old_size:

is some_usable_size < nallocx(old_size)? I think so; this means the allocation was shrunk.

Yes.

is some_usable_size == nallocx(new_size)? That is, if xallocx returns a usable size that is smaller than that of the old allocation, can we interpret that as a success in shrinking the allocation to the size class of new_size, such that the allocation can be deallocated via sized deallocation by passing new_size?

Yes, it will shrink to new_size, or not shrink at all. The manual doesn't really "enforce" this, but the current implementation works this way and I don't see us changing this. You can see the implementation detail here: https://github.com/jemalloc/jemalloc/blob/5e23f96dd4e4ff2847a85d44a01b66e4ed2da21f/src/large.c#L96

where we split the existing extent in two to cut off the excess pages.
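
Given that, the end-to-end flow the crate relies on can be sketched like this (assumptions: jemalloc is the linked allocator, unprefixed symbol names via the libc crate, and arbitrary example sizes):

```rust
use libc::{c_int, c_void};

extern "C" {
    fn mallocx(size: usize, flags: c_int) -> *mut c_void;
    fn xallocx(ptr: *mut c_void, size: usize, extra: usize, flags: c_int) -> usize;
    fn nallocx(size: usize, flags: c_int) -> usize;
    fn sdallocx(ptr: *mut c_void, size: usize, flags: c_int);
}

fn main() {
    // Arbitrary example: shrink a 128k allocation to 64k, across size classes.
    let (old_size, new_size) = (128 * 1024usize, 64 * 1024usize);
    unsafe {
        let ptr = mallocx(old_size, 0);
        assert!(!ptr.is_null());
        let usable = xallocx(ptr, new_size, 0, 0);
        if usable == nallocx(new_size, 0) {
            // Shrink succeeded: the allocation now sits in new_size's size
            // class, so sized deallocation with new_size is sound.
            sdallocx(ptr, new_size, 0);
        } else {
            // Could not shrink across size classes: deallocate with the
            // original size instead.
            sdallocx(ptr, old_size, 0);
        }
    }
}
```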

@gnzlbg (Owner, Author) commented Nov 9, 2018

@interwq thank you for chiming in. @pnkfelix I think we can then land this as-is?

@pnkfelix commented Nov 9, 2018

I certainly won’t block it landing. @alexcrichton is the one with review privileges here. 🙂

@gnzlbg gnzlbg merged commit 08c2168 into gnzlbg:master Nov 9, 2018