Pin VTL2 memory if it is being carved out by itself #169
base: main
Conversation
use hvdef::hypercall::HvInputVtl;
use hvdef::HV_PAGE_SIZE;
use memory_range::MemoryRange;
use minimal_rt::arch::hypercall::invoke_hypercall;
use zerocopy::AsBytes;

const PIN_REQUEST_HEADER_SIZE: usize = size_of::<hvdef::hypercall::PinUnpinGpaPageRangesHeader>();
Move this into the relevant function
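What that refactor might look like, as a minimal sketch; the function name `pin_memory_range` and its signature are assumptions, not taken from this PR:

```rust
// Sketch only: scope the header-size constant to the function that actually
// builds the pin request, instead of keeping it at module level.
fn pin_memory_range(memory_range: MemoryRange) {
    const PIN_REQUEST_HEADER_SIZE: usize =
        size_of::<hvdef::hypercall::PinUnpinGpaPageRangesHeader>();
    // ... build the header and HvGpaRange entries, then issue the hypercall ...
}
```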
let mut ranges = ArrayVec::new();

// Calculate the total number of pages in the memory range
let total_pages = (memory_range.end() - memory_range.start()).div_ceil(PAGE_SIZE);
.page_count_4k()
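Spelled out, the suggestion is to let `MemoryRange` do the page arithmetic (a one-line sketch, assuming `memory_range` is a `memory_range::MemoryRange`):

```rust
// Use the range's own 4K page count instead of dividing the byte length by hand.
let total_pages = memory_range.page_count_4k();
```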
let total_pages = (memory_range.end() - memory_range.start()).div_ceil(PAGE_SIZE);

// Iterate over the memory range in chunks of 2048 pages
let mut current_page = memory_range.start() >> 12; // Convert start address to page number
.start_4k_gpn()
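Likewise for the starting page number (sketch):

```rust
// Use the range's starting 4K guest page number rather than shifting the
// start address right by 12 manually.
let mut current_page = memory_range.start_4k_gpn();
```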
ranges.push(HvGpaRange(
    HvGpaRangeExtended::new()
        .with_additional_pages(pages_in_this_range - 1)
        .with_large_page(false)
Why not use large pages when possible?
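A hedged sketch of what that could look like; only the two builder methods shown in the diff are taken from this PR, and note that in the large-page case the GPN field would also need to be expressed in 2MiB units:

```rust
// Hypothetical: if the chunk is 2MiB-aligned and a whole number of 2MiB pages,
// one large-page entry can describe 512 4K pages at a time.
const PAGES_PER_2MB: u64 = 512;
let is_2m = current_page % PAGES_PER_2MB == 0 && pages_in_this_range % PAGES_PER_2MB == 0;
let entry = if is_2m {
    HvGpaRangeExtended::new()
        .with_additional_pages(pages_in_this_range / PAGES_PER_2MB - 1)
        .with_large_page(true)
} else {
    HvGpaRangeExtended::new()
        .with_additional_pages(pages_in_this_range - 1)
        .with_large_page(false)
};
```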
let mut current_page = memory_range.start() >> 12; // Convert start address to page number
let mut remaining_pages = total_pages;

while remaining_pages > 0 {
I recommend doing something like `while current_page < memory_range.end_4k_gpn()` and removing `remaining_pages` as a separate variable. IMO it's almost always better to have a single loop variable.
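Roughly what that restructuring could look like (a sketch; `PAGES_PER_REQUEST` is a stand-in for the 2048-page chunk limit mentioned in the diff, and the entry construction is elided):

```rust
// Single loop variable: walk current_page forward until it reaches the end GPN.
let mut current_page = memory_range.start_4k_gpn();
while current_page < memory_range.end_4k_gpn() {
    let pages_in_this_range =
        (memory_range.end_4k_gpn() - current_page).min(PAGES_PER_REQUEST);
    // ... push the HvGpaRange entry covering `pages_in_this_range` pages ...
    current_page += pages_in_this_range;
}
```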
let header = hvdef::hypercall::PinUnpinGpaPageRangesHeader { reserved: 0 };
let input_offset = size_of::<hvdef::hypercall::PinUnpinGpaPageRangesHeader>();

while remaining_size_bytes > 0 {
Ditto here.
while remaining_size_bytes > 0 {
    // Determine the size for this chunk of memory
    let chunk_bytes = cmp::min(remaining_size_bytes, max_bytes_per_request);
I'd just write `let chunk_end = (current_start + max_bytes_per_request).min(memory_range.end())`.
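Put together, the byte-based loop can also run on a single position variable (sketch, reusing `max_bytes_per_request` from the diff):

```rust
// Advance current_start toward the end of the range, clamping each chunk.
let mut current_start = memory_range.start();
while current_start < memory_range.end() {
    let chunk_end = (current_start + max_bytes_per_request).min(memory_range.end());
    // ... pin the pages in [current_start, chunk_end) ...
    current_start = chunk_end;
}
```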
Pin VTL2 memory if it is being carved out independently. In the current implementation, specifically in add-on mode, the host ensures that VTL2 memory is physically backed. This change enhances that mechanism by explicitly pinning the VTL2 memory when it is carved out.
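For orientation, a rough sketch of the pin path this change adds, built only from the types visible in the diff; the function name, buffer handling, and elided hypercall plumbing are illustrative assumptions, not the PR's actual code:

```rust
// Illustrative only: lay out the pin request as a PinUnpinGpaPageRangesHeader
// followed by HvGpaRange rep-list entries in the hypercall input page.
fn pin_vtl2_range(memory_range: MemoryRange, input_page: &mut [u8]) {
    let header = hvdef::hypercall::PinUnpinGpaPageRangesHeader { reserved: 0 };
    let input_offset = size_of::<hvdef::hypercall::PinUnpinGpaPageRangesHeader>();

    // Header goes at the start of the input page (serialized via zerocopy::AsBytes).
    input_page[..input_offset].copy_from_slice(header.as_bytes());

    // The HvGpaRange entries built from `memory_range` follow the header;
    // the control code, rep count, and the invoke_hypercall call itself are
    // elided here because they are not shown in this diff.
}
```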