GH-115802: JIT: Use the "medium" code model on x86_64-unknown-linux-gnu
#130097
+7 −17
This is a perfect middle ground between the "large" code model (which we used to use) and the "small" code model (which we currently use):

- Runtime values, like `OPARG`, are encoded directly in the instruction stream (currently they're loaded indirectly).
- Addresses of known data, like `&_PyEval_BinaryOps`, are encoded directly in the instruction stream (currently they're loaded indirectly).
- Jumps to known code, like `_JIT_ERROR_TARGET`, use 32-bit jumps (currently they use "relaxable" 64-bit indirect jumps).
- Calls to unknown code, like `_Py_Dealloc`, use "relaxable" 64-bit indirect jumps (same as today).

This only works on one platform, but it's an important one. It looks to be 0.5%-1% faster on benchmarks, with a very slight (~0.15%) memory savings due to having to JIT less auxiliary data for storing addresses.
Here's the before-and-after of `_LOAD_SMALL_INT`:
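(The original dump isn't reproduced here.) Per the first bullet above, the interesting difference is that the "before" loads `OPARG` indirectly from auxiliary JIT'd data, while the "after" encodes it directly in the instruction stream as a `movabsq` immediate. A minimal sketch of that kind of patch, with a made-up value; this is not CPython's actual stencil-patching code:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch: patching a 64-bit value directly into the instruction stream.
 * "movabsq $imm64, %rax" encodes as 48 B8 followed by the 8-byte
 * little-endian immediate, so a JIT can splat the value into the code
 * instead of storing it in a separate data slot. */
static void patch_movabs_rax(uint8_t *code, uint64_t value) {
    code[0] = 0x48;                         /* REX.W: 64-bit operand size */
    code[1] = 0xB8;                         /* MOV RAX, imm64 */
    memcpy(code + 2, &value, sizeof value); /* immediate lives in the code
                                               (host is little-endian x86-64) */
}

int main(void) {
    uint8_t stencil[10];
    patch_movabs_rax(stencil, 42); /* hypothetical OPARG of 42 */
    for (size_t i = 0; i < sizeof stencil; i++)
        printf("%02x ", stencil[i]);
    putchar('\n');
    return 0;
}
```

This prints `48 b8 2a 00 00 00 00 00 00 00`: the value 42 sits inside the instruction itself rather than in auxiliary data next to the code.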