openhcl: refactor shared pool to general purpose page pool #260
Conversation
openhcl/page_pool_alloc/src/lib.rs
Outdated
@@ -47,27 +48,38 @@ enum State {
     pfn_bias: u64,
     #[inspect(hex)]
     size_pages: u64,
+    device_id: u64,
+    device_name: String,
This is quite expensive. Can you keep an index back into some table instead?
You could also use Arc<str>, but that's still expensive.
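(Aside, not from the PR itself: a minimal sketch of that tradeoff. Cloning an Arc<str> is just a refcount bump, while cloning a String copies the whole buffer; every handle still refers to one shared heap allocation.)

```rust
use std::sync::Arc;

fn main() {
    // One heap allocation up front when the Arc<str> is created...
    let name: Arc<str> = Arc::from("nvme_manager");

    // ...but each per-allocation clone is just a refcount increment,
    // unlike String::clone, which copies the entire buffer.
    let per_alloc = name.clone();
    assert_eq!(&*per_alloc, "nvme_manager");
    assert_eq!(Arc::strong_count(&per_alloc), 2);
}
```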
Yeah, I'm refactoring this given our discussion on the IDs; I'll probably index into a string table.
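(A minimal sketch of the string-table idea under discussion; DeviceTable, DeviceId, and intern are hypothetical names, not code from this PR.)

```rust
/// Hypothetical string table: device names are interned once, and each
/// allocation stores only a small copyable index back into the table.
#[derive(Default)]
struct DeviceTable {
    names: Vec<String>,
}

/// Index into `DeviceTable::names`; cheap to store per allocation.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct DeviceId(usize);

impl DeviceTable {
    /// Interns `name`, returning the existing id if it is already present.
    fn intern(&mut self, name: &str) -> DeviceId {
        if let Some(i) = self.names.iter().position(|n| n == name) {
            return DeviceId(i);
        }
        self.names.push(name.to_owned());
        DeviceId(self.names.len() - 1)
    }

    /// Resolves an id back to the human-readable device name.
    fn name(&self, id: DeviceId) -> &str {
        &self.names[id.0]
    }
}

fn main() {
    let mut table = DeviceTable::default();
    let id = table.intern("nvme_manager");
    assert_eq!(table.intern("nvme_manager"), id); // deduplicated
    assert_eq!(table.name(id), "nvme_manager");
}
```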
openhcl/page_pool_alloc/src/lib.rs
Outdated
@@ -47,27 +48,38 @@ enum State {
     pfn_bias: u64,
     #[inspect(hex)]
     size_pages: u64,
+    device_id: u64,
As discussed offline, we don't want a separate ID (unless it's a temporary index into a table).
openhcl/underhill_core/src/worker.rs
Outdated
@@ -1736,7 +1744,7 @@ async fn new_underhill_vm(
     let manager = NvmeManager::new(
         &driver_source,
         processor_topology.vp_count(),
-        vfio_dma_buffer(&shared_vis_pages_pool),
+        vfio_dma_buffer(&shared_vis_pages_pool, "nvme_manager".into()),
Maybe squeeze fixing this into this change? As you mention in the PR description, we need to push this down to be per-device.
Yep, let me see if I can do better here too.
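(Illustrative sketch of what pushing the name down per device could look like; the stub types and the nvme_<pci id> naming scheme are assumptions, not the PR's actual code.)

```rust
// Illustrative only: the real vfio_dma_buffer returns a DMA buffer
// allocator; this stub just echoes the name to show how a per-device
// naming scheme would flow through.
struct PagePool;

fn vfio_dma_buffer(_pool: &PagePool, device_name: String) -> String {
    device_name
}

fn main() {
    let pool = PagePool;
    // One allocator per NVMe device instead of one manager-wide
    // "nvme_manager" allocator, so allocations stay attributable.
    for pci_id in ["0000:00:04.0", "0000:00:05.0"] {
        let tag = vfio_dma_buffer(&pool, format!("nvme_{pci_id}"));
        println!("allocator tagged as {tag}");
    }
}
```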
@@ -155,7 +155,7 @@ impl UhProcessor<'_, TdxBacked> {
     target_vtl: GuestVtl,
     flush_addrs: &[HvGvaRange],
     runner: &mut ProcessorRunner<'_, Tdx>,
-    flush_page: &shared_pool_alloc::SharedPoolHandle,
+    flush_page: &page_pool_alloc::PagePoolHandle,
 ) {
     // Now we can build the TDX structs and actually call INVGLA.
     tracing::trace!(
Any chance you could uncomment and fix up the commented code in the else branch below too?
Not yet; these require private pages, right? There's a follow-up change to actually support private VTL2 pools.
It does require private pages, but the code is also still turned off, so it won't actually do anything. Either way.
Let's take it as a follow-up.
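(For context, a rough guess at the shared/private split this thread anticipates; the constructor signatures and the MemoryRange placeholder are assumptions based on the PR description, not the real page_pool_alloc API.)

```rust
/// Placeholder for the real address-range type the pools are built over.
struct MemoryRange;

struct PagePool;

impl PagePool {
    /// Existing path: a pool backed by shared-visibility pages.
    fn new_shared_pool(_ranges: &[MemoryRange]) -> Self {
        PagePool
    }

    /// New in this PR (per the description): a pool backed by private
    /// VTL2 pages, which the commented-out TDX path would eventually need.
    fn new_private_pool(_ranges: &[MemoryRange]) -> Self {
        PagePool
    }
}

fn main() {
    let ranges = [MemoryRange];
    let _shared = PagePool::new_shared_pool(&ranges);
    let _private = PagePool::new_private_pool(&ranges);
}
```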
Force-pushed from be6a2d7 to 3259bdc
Force-pushed from 131b81d to 149ae0b
@@ -21,7 +21,7 @@ dependencies = [
     "inspect",
     "pal_async",
     "parking_lot",
-    "thiserror 2.0.0",
+    "thiserror 2.0.3",
I don't know why this got revved, but it should probably be rolled back.
Refactor the shared visibility pool into a more general-purpose page pool. This is in preparation for additional changes to support save/restore and private allocations.

Add a new new_private_pool method to support future private memory page pools.

Add additional tracking of allocations with device IDs and device names, which will be used for save/restore. This can also help track current allocations. Today, all NVMe allocations come via nvme_manager; it would be good to add more granularity and track this per NVMe device, as we already do for MANA NICs.
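(A compact sketch of the per-device tracking described above; the PagePool and alloc names here are hypothetical stand-ins, not the crate's API.)

```rust
use std::collections::HashMap;

/// Toy model of the tracking described above: the pool records how many
/// pages each named device has allocated, which is what save/restore
/// and inspection would key off.
#[derive(Default)]
struct PagePool {
    allocated_pages: HashMap<String, u64>,
}

impl PagePool {
    fn alloc(&mut self, device_name: &str, size_pages: u64) {
        *self
            .allocated_pages
            .entry(device_name.to_owned())
            .or_default() += size_pages;
    }
}

fn main() {
    let mut pool = PagePool::default();
    // Today every NVMe allocation is attributed to "nvme_manager";
    // finer granularity would tag each device, as MANA NICs already do.
    pool.alloc("nvme_0000:00:04.0", 16);
    pool.alloc("nvme_0000:00:05.0", 8);
    for (device, pages) in &pool.allocated_pages {
        println!("{device}: {pages} pages");
    }
}
```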