Commit bb0fa18 ("Documents"), 1 parent eb65806

2 files changed: +222 −0

ReadyToInterpret.md

# Investigation on making the interpreter work with ReadyToRun

## Status

This document is preliminary - it covers only the most basic case. It does not even cover the very common case of virtual method calls.

Imagine I am doing a Hackathon overnight trying to get something working, not designing something for the long term yet.

## Goals

- Figure out how the relevant parts of ReadyToRun work.
- Figure out how to hack it so that we can get into the CoreCLR interpreter.

## Non Goals

- Deliver a working prototype (I just don't have the time - and the CoreCLR interpreter is not the right target).
- Come up with an optimal design (same - I just don't have the time).

## High-level observations

We already have a mechanism to call an arbitrary managed method from the native runtime, and this mechanism can be used to call a ReadyToRun compiled method. So in general, interpreter -> ReadyToRun calls are not an issue.

The key challenge is to get ReadyToRun code to call into the interpreter.

## Understanding what happens when we are about to make an outgoing call from ReadyToRun

When ReadyToRun code makes a call to a static function, it:

- pushes the arguments onto the registers/stack as per the calling convention,
- calls into a redirection cell, and
- gets into the runtime.
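
The redirection-cell mechanism can be modeled as a function-pointer cell that initially points at a resolver; the resolver computes the real target and patches the cell. This is a minimal sketch under that assumption - in the real runtime the cells live in the ReadyToRun image and the resolver role is played by `ExternalMethodFixupWorker`:

```cpp
#include <cassert>

// A minimal model of an indirection cell: it starts out pointing at a
// resolver, so the first call goes into the "runtime", which finds the
// real target and patches the cell; later calls go straight through.
using Target = int (*)();

static Target g_cell; // the indirection cell ("pIndirection" below)

static int RealMethod() { return 42; } // stand-in for the prepared method

static int Resolver() {
    // Stand-in for the fixup worker: resolve, patch the cell, then call.
    g_cell = &RealMethod;
    return g_cell();
}

static int CallThroughCell() { return g_cell(); }
```

After the first call through the cell, the resolver is out of the picture entirely, which is why the runtime only pays the fixup cost once per cell.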

Inside the runtime, I will eventually get to `ExternalMethodFixupWorker`, defined in `prestub.cpp`.

At this point, I have:

- `transitionBlock` - no idea what it is,
- `pIndirection` - the address for storing the callee address,
- `sectionIndex` - a number pushed by the thunk, and
- `pModule` - a pointer to the module containing the call instruction.

Since the call comes from a ReadyToRun image, `pModule` must have a ReadyToRun image.

We can easily calculate the RVA of `pIndirection`.

If the call provided the `sectionIndex`, we just use it; otherwise we can still calculate the section index from the RVA.

The calculation is simply a sequential scan of the import sections; each section describes its own address range, so we can check whether the RVA falls within it.

The import section has a signature array. Using the RVA minus the beginning RVA of the section, we can index into the signature array to find the signature.

The signature is then parsed into a `MethodDesc`, where method preparation continues as usual.

Last but not least, `pIndirection` is eventually patched with that entry point, and the call proceeds using the arguments already on the stack/restored registers.
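
The section scan and signature lookup described above can be sketched as follows. This is a minimal model - the `ImportSection` layout and field names here are hypothetical, not the actual ReadyToRun format:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical import-section descriptor; field names are illustrative.
struct ImportSection {
    uint32_t beginRVA;          // start of the section's cell range
    uint32_t endRVA;            // end of the range (exclusive)
    uint32_t entrySize;         // size of each indirection cell
    const uint32_t* signatures; // parallel array of signature RVAs
};

// Sequential scan: each section is self-describing, so we just check
// whether the cell's RVA falls in its range.
const ImportSection* FindSection(const ImportSection* sections, size_t count,
                                 uint32_t cellRVA) {
    for (size_t i = 0; i < count; i++) {
        if (cellRVA >= sections[i].beginRVA && cellRVA < sections[i].endRVA)
            return &sections[i];
    }
    return nullptr;
}

// Index into the signature array using (cellRVA - beginRVA) / entrySize.
uint32_t FindSignatureRVA(const ImportSection& s, uint32_t cellRVA) {
    return s.signatures[(cellRVA - s.beginRVA) / s.entrySize];
}
```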

## What the potential hack looks like

We keep everything the same up to the method preparation part.

We know it is possible to produce an `InterpreterMethodInfo` given a `MethodDesc` when the system is ready to JIT, so we should be able to produce the `InterpreterMethodInfo` there.

The arguments are already in the registers, but we can't dynamically generate the `InterpreterStub`, so the only reasonable option is to pre-generate the stubs in the ReadyToRun image itself.

> A stub per signature is necessary because each signature needs a different way to populate the arguments (and the interpreter method info). On the other hand, a stub per signature is sufficient because if we knew how to prepare the registers to begin with, we must know exactly what steps are needed to put them into a format `InterpretMethodBody` likes. As people pointed out, this is going to be a large volume of stubs; this is by no means optimal.

The stub generation code can 'mostly' be exactly the same as `GenerateInterpreterStub` today, with two twists:

- We need to use indirection to get to the `InterpreterMethodInfo` object. That involves having a slot that the `InterpreterMethodInfo` construction process needs to patch.
- What if the call signature involves an unknown struct size (e.g. a method in A.dll takes a struct defined in B.dll, where B.dll is not in the same version bubble)?

Next, we need a data structure that gets us from what we have (`pIndirection`, and therefore the `MethodDesc`) to the address of the stub, as well as the address of the cell storing the `InterpreterMethodInfo`.

To do that, we might want to mimic how the runtime locates ReadyToRun code.

Here is a stack showing how the ReadyToRun code discovery looks:

```
coreclr!ReadyToRunInfo::GetEntryPoint+0x238 [C:\dev\runtime\src\coreclr\vm\readytoruninfo.cpp @ 1148]
coreclr!MethodDesc::GetPrecompiledR2RCode+0x24e [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 507]
coreclr!MethodDesc::GetPrecompiledCode+0x30 [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 443]
coreclr!MethodDesc::PrepareILBasedCode+0x5e6 [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 412]
coreclr!MethodDesc::PrepareCode+0x20f [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 319]
coreclr!CodeVersionManager::PublishVersionableCodeIfNecessary+0x5a1 [C:\dev\runtime\src\coreclr\vm\codeversion.cpp @ 1739]
coreclr!MethodDesc::DoPrestub+0x72d [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 2869]
coreclr!PreStubWorker+0x46d [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 2698]
coreclr!ThePreStub+0x55 [C:\dev\runtime\src\coreclr\vm\amd64\ThePreStubAMD64.asm @ 21]
coreclr!CallDescrWorkerInternal+0x83 [C:\dev\runtime\src\coreclr\vm\amd64\CallDescrWorkerAMD64.asm @ 74]
coreclr!CallDescrWorkerWithHandler+0x12b [C:\dev\runtime\src\coreclr\vm\callhelpers.cpp @ 66]
coreclr!MethodDescCallSite::CallTargetWorker+0xb79 [C:\dev\runtime\src\coreclr\vm\callhelpers.cpp @ 595]
coreclr!MethodDescCallSite::Call+0x24 [C:\dev\runtime\src\coreclr\vm\callhelpers.h @ 465]
```

The interesting part, of course, is how `GetEntryPoint` works. It turns out to be just a `NativeHashtable` lookup keyed by a `VersionResilientMethodHashCode`, so we should be able to encode the same kind of hash table for the stubs as well.

Note that `GetEntryPoint` has a fixup concept; maybe we can use the same concept to patch the slot for the `InterpreterMethodInfo`.
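
The stub lookup could mirror that shape. Here is a minimal sketch under loud assumptions: `ComputeVersionResilientHash`, `StubEntry`, and the flat `std::unordered_map` are all stand-ins, not the real `NativeHashtable` format or the real hash:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical per-method entry: the pre-generated stub's address and the
// slot to patch once the InterpreterMethodInfo is created at runtime.
struct StubEntry {
    uintptr_t stubAddress;
    uintptr_t* methodInfoSlot;
};

// Hypothetical version-resilient hash: derived only from names (FNV-1a),
// so it stays stable across recompilation of either image.
uint32_t ComputeVersionResilientHash(const std::string& typeName,
                                     const std::string& methodName) {
    uint32_t h = 2166136261u;
    for (char c : typeName)   h = (h ^ (uint8_t)c) * 16777619u;
    for (char c : methodName) h = (h ^ (uint8_t)c) * 16777619u;
    return h;
}

// The image would persist hash -> StubEntry; a map models the lookup.
using StubTable = std::unordered_map<uint32_t, StubEntry>;

const StubEntry* LookupStub(const StubTable& table, const std::string& typeName,
                            const std::string& methodName) {
    auto it = table.find(ComputeVersionResilientHash(typeName, methodName));
    return it == table.end() ? nullptr : &it->second;
}
```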

## How to implement the potential hack

From the compiler side:

### When do we need to generate the stubs?

When the ReadyToRun compiler generates a call, the JIT calls back into crossgen2 to create a slot for it. At that point, we should know enough to make sure a stub is available for it by working with the dependency tracking engine.

### Actually generate the stubs

The stub generation should mostly work the same as in `GenerateInterpreterStub` today, with a couple of twists:

- We don't need to generate the `InterpreterMethodInfo`; that work is left until runtime.
- If the stub involves types with unknown sizes, we need to generate the right stub code for it (e.g. A.dll calls a function that involves a struct defined in B.dll, where they are not in the same version bubble).
- The stub needs an instance of `InterpreterMethodInfo`. It cannot be hardcoded; its pointer must be read from somewhere else.
- Whenever we generate a stub, we need to store it somewhere so that we can follow the same logic as in `MethodEntryPointTableNode`.

From the runtime side:

### Locating the stub

- When we reach `ExternalMethodFixupWorker`, we need to use the table to get back to the generated stubs.

### Preparing the data

- We need to create the `InterpreterMethodInfo` and make sure the stub code will be able to read it.

## Alternative designs

Following the thoughts from the earlier prototype for tagged pointers, we could envision a solution that ditches all those stubs, e.g.:

1. Change the calling convention for every method so that it is the same as what the interpreter likes.

Pros:
- Consistency, easy to understand
- No need for stubs, efficient for interpreter calls

Cons:
- Lots of work to implement a different calling convention
- Inefficient for non-interpreter calls

2. Change the call sites so that they detect tagged pointers and call differently.

Pros:
- Similar to what we have in the tagged pointer prototype
- No need for stubs, efficient for interpreter calls

Cons:
- Every call involves dual call code

3. The approach described in this document (i.e. using stubs).

Pros:
- Probably the cheapest to implement

Cons:
- Lots of stubs
- Inefficient for interpreter calls (involves stack rewriting)
- Unclear how it could work with virtual or interface calls

I haven't put more thought into these alternative solutions, but I am aware they exist.

TaggedFunctionPrototype.md

# Tagged Function Prototype

This document describes my prototype, available [here](https://cloudbuild.microsoft.com/build?id=7a730686-9d69-fe04-56a7-2118a28196ea&bq=devdiv_DevDiv_DotNetFramework_QuickBuildNoDrops_AutoGen) as a PR. I am not planning to merge it.

The goal of this prototype is to investigate whether or not the tagged function concept is practically feasible in the CoreCLR code base.

## How does the CoreCLR interpreter work today?

This section covers a small portion of how the interpreter integrates with the runtime. It does NOT attempt to explain the full interpreter execution process.

The interpreter works by pretending to be jitted code; as such, it needs to:

1. Convert the incoming arguments from the registers/stack to something C++ understands.
2. Flow control to `InterpretMethodBody`, where it interprets the byte code.
3. Call any callees as if they were jitted code as well, and
4. Put things back on the stack as if they were produced by jitted code.

Step 1 requires specially generated code. Right now it is done by `GenerateInterpreterStub`, which is meant to be a tiny routine that takes arguments from the stack and rewrites the stack so that the values can be consumed by C++.
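
A caricature of what such a stub does, written as plain C++ rather than the generated assembly it really is: repackage the positionally passed arguments into the uniform layout the interpreter consumes. The `InterpreterMethodInfo` shape and the `Stub_II` name are hypothetical:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical: the interpreter wants all arguments as a uniform array.
struct InterpreterMethodInfo { int argCount; };

int64_t InterpretMethodBody(const InterpreterMethodInfo& info,
                            const int64_t* args) {
    // Stand-in body: just sum the arguments.
    int64_t sum = 0;
    for (int i = 0; i < info.argCount; i++) sum += args[i];
    return sum;
}

// What a per-signature stub does, for a (int64, int64) signature: take the
// natively passed arguments and lay them out for the interpreter.
int64_t Stub_II(InterpreterMethodInfo& info, int64_t a, int64_t b) {
    int64_t args[2] = {a, b};
    return InterpretMethodBody(info, args);
}
```

This also shows why one stub per signature is needed: the repackaging code depends entirely on how many arguments there are and where each one arrives.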

## What do we want?

We want to get rid of the concept of an interpreter stub and instead have the caller call the actual `InterpretMethodBody` directly.

`InterpretMethodBody` requires an `InterpreterMethodInfo` object, which is basically a representation from which we can easily access the method's signature and byte code.

So the problem is reduced to:

1. Identify a caller that is currently calling using the standard calling convention,
2. Get that caller to access an `InterpreterMethodInfo` object, and
3. Make it call `InterpretMethodBody` instead.

## Wrong attempts

I tried 3 different approaches, and only the last one succeeded. These wrong attempts are documented just so we don't try the same wrong ideas again.

### Idea 1

- Make `GenerateInterpreterStub` return a tagged pointer instead.

This approach failed because `GenerateInterpreterStub` is called as part of `ThePreStub`. `ThePreStub` works by leaving the call arguments on the stack, so the incoming call arguments are already on the stack, and we at least need some code to get them back.

### Idea 2

Now we know we must call `InterpretMethodBody` earlier than `ThePreStub`, which means `ThePreStub` must be replaced by something else. In fact, how does `ThePreStub` know what `MethodDesc` to interpret? Upon investigation, I learned about the concept of a `Precode`.

Basically, every method has a `Precode`, which is a simple `jmp` instruction that goes somewhere else. This is the first instruction that gets executed. To begin with, that instruction jumps to `ThePreStub`, and that instruction is generated code. Given the precode, we can get to the `MethodDesc`.

What that means is that we need to get rid of the code generation during the precode generation, which means we will no longer have the `jmp` instruction. Instead, we will put something there that allows us to get to the `InterpreterMethodInfo`.

A reasonable choice is to put a pointer to the `InterpreterMethodInfo` object right there. We will tag its least significant bit so that we know it is not a normal function entry point.

To be more concrete, the precode is generated during `MethodDesc::EnsureTemporaryEntryPointCore`. We will modify that code so that it translates the `MethodDesc` into an `InterpreterMethodInfo` there, tags it, and puts it into the method table there.

The reason why this approach fails is more subtle. It turns out that the `InterpreterMethodInfo` construction process leverages the code that supports the JIT to extract the IL, and that code assumes the method tables are already properly constructed, which is not true at the time `MethodDesc::EnsureTemporaryEntryPointCore` is called. So we must delay the construction of the `InterpreterMethodInfo` object.

## Working approach

### Idea 3

To get around the cyclic dependency issue above, I tagged the `MethodDesc` pointer instead. Only when we are about to call the function do we construct the `InterpreterMethodInfo`. This worked.
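
Idea 3 can be sketched roughly as follows - a minimal model with hypothetical stand-in types (the real `MethodDesc` and `InterpreterMethodInfo` are far richer):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-ins for the runtime's types.
struct InterpreterMethodInfo { int methodId; };
struct MethodDesc {
    int methodId;
    InterpreterMethodInfo* info = nullptr; // built lazily, on first call
};

// Tag the low bit of a MethodDesc* so the "entry point" slot is
// recognizably not real code (pointers are at least 2-byte aligned).
uintptr_t TagMethodDesc(MethodDesc* md) {
    return reinterpret_cast<uintptr_t>(md) | 1;
}

bool IsTagged(uintptr_t entry) { return (entry & 1) != 0; }

// At call time: untag, and construct the InterpreterMethodInfo on first
// use - by then the method tables are fully built, avoiding the cyclic
// dependency that sank Idea 2.
InterpreterMethodInfo* GetOrCreateMethodInfo(uintptr_t entry) {
    MethodDesc* md = reinterpret_cast<MethodDesc*>(entry & ~uintptr_t(1));
    if (md->info == nullptr)
        md->info = new InterpreterMethodInfo{md->methodId};
    return md->info;
}
```

The design choice is simply to move the `MethodDesc` -> `InterpreterMethodInfo` translation from type-system setup time to call time.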

The downside of this approach, obviously, is that the pointer in the method table is no longer a valid entry point, so anything else that tries to call it will get an access violation. This will work in a pure interpreted scenario, where the interpreter is the only thing that runs in the process.

If we also want to let other (e.g. ReadyToRun) code run, that won't work unless we also change the ReadyToRun callers.

The code in this branch demonstrates the concept. It will execute some code under the interpreter (and fail pretty quickly, because I haven't implemented everything yet).

### Lowlights

This code still uses dynamic code generation for a couple of things. We are still generating code for the GC write barrier, and we are still generating some glue code for pinvoke. Lastly, the calls made by the interpreter are not converted to use the new calling convention yet. These seem to be solvable problems.
