[OPENCL][TEXTURE] Improved texture memory planning #17571
Conversation
Except for the relay backend changes, everything is reusable for texture support in Relax.
Motivated by the fact that textures can be allocated over a clBuffer object, and that the size of the backing clBuffer can be computed based on the hardware image pitch alignment. This optimizes the overall memory allocation on device and greatly helps models with large memory requirements.

Improved the graph memory planner to not differentiate buffer and texture storage tokens and to reuse them across. The texture pool in the OpenCL runtime is rebranded as a memory pool that handles allocation for both buffer and image objects.

The NDArray to DeviceAPI interface is extended with AllocDataSpaceView and FreeDataSpaceView. These new APIs accommodate accessing the same physical memory as clBuffer / clImage objects.
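To make the pitch-alignment point concrete, here is a minimal sketch (not code from this PR; the helper name and parameters are hypothetical) of sizing a backing clBuffer for a 2D image from CL_DEVICE_IMAGE_PITCH_ALIGNMENT, which the device reports in pixels:

```cpp
#include <CL/cl.h>
#include <cstddef>

// Hypothetical helper: size of a clBuffer able to back a width x height
// image whose pixels are pixel_bytes wide, rounding each row up to the
// device image pitch alignment (CL_DEVICE_IMAGE_PITCH_ALIGNMENT is in pixels).
size_t BackingBufferBytes(cl_device_id dev, size_t width, size_t height,
                          size_t pixel_bytes) {
  cl_uint pitch_align = 1;  // in pixels
  clGetDeviceInfo(dev, CL_DEVICE_IMAGE_PITCH_ALIGNMENT, sizeof(pitch_align),
                  &pitch_align, nullptr);
  size_t row_pixels = (width + pitch_align - 1) / pitch_align * pitch_align;
  return row_pixels * pixel_bytes * height;
}
```

Rounding each row to this alignment is what allows a single pool to serve both buffer and image allocations over the same physical memory.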
Force-pushed 99988f3 to 36e9ea7
Force-pushed 36e9ea7 to 56d0739
@tvm-bot rerun
* \param mem_scope The memory scope of allocated tensor.
* \return The allocated device pointer.
*/
virtual void* AllocDataSpaceView(Device dev, void* data, ShapeTuple shape, DLDataType dtype,
Given this is specific to OpenCL, let us still strive to keep it within the OpenCL allocator interface instead of going through the DeviceAPI.
I see this as the clean way to keep minimal changes in other modules (graph runtime, NDArray, memory manager, etc.).
In Relax also, I am mapping alloc_storage to allocate a cl_buffer, and alloc_tensor does a view over it via this device API call. (WIP ref: srkreddy1238@a6376b9#diff-847ee73fb0b77db96cce920da6cbae223f6bdb026ea125514122e96630356c9b)
Later, this also allows an easy path for CLML memory management going through the TVM memory_manager interface, as well as features like GMEM (on-chip memory of the Adreno GPU) support for TVM, etc.
Let me know if you have different advice; I can explore the possibilities.
It would be great to start by thinking along the direction of a special allocator: https://github.com/apache/tvm/blob/main/include/tvm/runtime/memory/memory_manager.h
My reading is that the main issue lies in the need to get a Tensor from an existing Buffer in a customized fashion; perhaps we can extend the Allocator interface to enable such a view.
> My reading is that the main issue lies in the need to get a Tensor from an existing Buffer in a customized fashion; perhaps we can extend the Allocator interface to enable such a view.
True; the backing buffer is either used as-is, or many image views are created over it based on the memory plan.
With a view over NDArray or a special Allocator, we still need to reach the OpenCL device API for the final view creation, which happens through the OpenCL call clCreateImage: we create a new cl_mem (image) from an existing cl_mem (buffer) that serves as the backing buffer.
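For reference, a minimal sketch of that call (assuming the OpenCL 2.0 style cl_image_desc with a mem_object field; with the cl_khr_image2d_from_buffer extension on 1.2 the buffer field plays the same role, and the image format here is just a placeholder):

```cpp
#include <CL/cl.h>

// Create a 2D image view that aliases an existing cl_mem buffer; no new
// device memory is allocated, the image is backed by `backing_buffer`.
cl_mem CreateImageView(cl_context ctx, cl_mem backing_buffer, size_t width,
                       size_t height, size_t row_pitch_bytes, cl_int* err) {
  cl_image_format format{CL_RGBA, CL_HALF_FLOAT};  // placeholder format
  cl_image_desc desc{};
  desc.image_type = CL_MEM_OBJECT_IMAGE2D;
  desc.image_width = width;
  desc.image_height = height;
  desc.image_row_pitch = row_pitch_bytes;  // must honor the pitch alignment
  desc.mem_object = backing_buffer;        // the view aliases this buffer
  return clCreateImage(ctx, CL_MEM_READ_WRITE, &format, &desc,
                       /*host_ptr=*/nullptr, err);
}
```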
The current flow is:
- storage_pool populated by: Allocator->Empty => NDArray
- data_entry_ populated by: NDArray => NDArray::CreateView => DeviceAPI::AllocDataSpaceView => NDArray

We can change this to the Allocator interface by:
- a special Allocator (extended from Allocator with a new call for View), registered from the OpenCL Device API at init
- storage_pool by: Allocator->Alloc => StorageObj
- data_entry_ by: StorageObj => AllocNDArrayWithScope => Allocator::CreateView (access OpenCLWorkspace and create the view) => NDArray
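A rough sketch of the extended interface (simplified stand-in types; CreateView and the exact signatures are proposals, not existing TVM API):

```cpp
#include <cstddef>
#include <cstdint>

// Simplified stand-ins; the real types live in
// include/tvm/runtime/memory/memory_manager.h and ndarray.h.
struct DLDataType { uint8_t code, bits; uint16_t lanes; };
struct Buffer { void* data; size_t nbytes; };
class NDArray {};

// Proposed extension: a view hook next to Alloc/Free, so the OpenCL
// allocator can reach OpenCLWorkspace and call clCreateImage internally,
// while other backends can default to wrapping the buffer as-is.
class Allocator {
 public:
  virtual Buffer Alloc(size_t nbytes, size_t alignment, DLDataType type_hint) = 0;
  virtual void Free(const Buffer& buffer) = 0;
  // New call: create a scoped NDArray view over an existing buffer.
  virtual NDArray CreateView(const Buffer& buffer, const int64_t* shape,
                             int ndim, DLDataType dtype,
                             const char* mem_scope) = 0;
  virtual ~Allocator() = default;
};
```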
Is my understanding correct here?
In the case of the VM (Relax):
- alloc_storage: Allocator->Alloc (always scope "global") => DeviceAPI::AllocDataSpace => StorageObj
- alloc_tensor: StorageObj::AllocNDArrayScoped => DeviceAPI::AllocDataSpaceView => NDArray

Ref. AllocNDArrayScoped with a destructor that calls FreeDataSpaceView for cleanup:
srkreddy1238@a6376b9#diff-847ee73fb0b77db96cce920da6cbae223f6bdb026ea125514122e96630356c9b
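A minimal sketch of that scoped-view lifetime idea (stand-in types; AllocDataSpaceView / FreeDataSpaceView follow this PR and the WIP branch, everything else here is illustrative): the tensor owns only the view, and its destructor releases the view without touching the backing storage.

```cpp
// Stand-in for the device API; the real calls are the new
// DeviceAPI::AllocDataSpaceView / FreeDataSpaceView from this PR.
struct DeviceAPIStub {
  void* AllocDataSpaceView(void* storage_data) {
    // The OpenCL impl would clCreateImage(...) over the backing cl_mem.
    return storage_data;
  }
  void FreeDataSpaceView(void* view) {
    // The OpenCL impl would clReleaseMemObject on the image view only.
  }
};

// Sketch of what StorageObj::AllocNDArrayScoped could hand back: the
// object owns the view, not the storage, so destruction frees just the view.
class ScopedView {
 public:
  ScopedView(DeviceAPIStub* api, void* storage_data)
      : api_(api), view_(api_->AllocDataSpaceView(storage_data)) {}
  ~ScopedView() { api_->FreeDataSpaceView(view_); }
  ScopedView(const ScopedView&) = delete;
  ScopedView& operator=(const ScopedView&) = delete;
  void* data() const { return view_; }

 private:
  DeviceAPIStub* api_;
  void* view_;
};
```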