runtime: remove ttl #5461
Conversation
Signed-off-by: Andrei Sandu <[email protected]>
@alindima https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7591514 was started for your command
@alindima Command:
bot bench polkadot-pallet --pallet=polkadot_runtime_parachains::paras_inherent --runtime=westend -v UPSTREAM_MERGE=n
@alindima https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7595155 was started for your command
…=westend --target_dir=polkadot --pallet=polkadot_runtime_parachains::paras_inherent
@alindima Command:
bot bench polkadot-pallet --pallet=polkadot_runtime_parachains::paras_inherent --runtime=rococo -v UPSTREAM_MERGE=n
@alindima https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7607482 was started for your command
Looks good!
@@ -514,6 +486,14 @@ impl<T: Config> Pallet<T> {
		T::MessageQueue::sweep_queue(AggregateMessageOrigin::Ump(UmpQueueId::Para(para)));
	}

	pub(crate) fn get_occupied_cores(
Nice! Who to ask for occupied cores ... inclusion! ;-)
@@ -1173,7 +1151,7 @@ impl<T: Config> Pallet<T> {

	/// Returns the first `CommittedCandidateReceipt` pending availability for the para provided, if
	/// any.
-	pub(crate) fn candidate_pending_availability(
+	pub(crate) fn first_candidate_pending_availability(
Nit: What does "first" mean?
added an explanation in the doc block
-	Self::process_inherent_data(data, ProcessInherentDataContext::Enter)
-		.map(|(_processed, post_info)| post_info)
+	Self::process_inherent_data(data).and_then(|(processed, post_info)| {
Thanks! :-)
Future optimization possibilities, if we ever needed to:
- Hashing instead of full cloning.
- Counting bytes/vec sizes and only compare total size in the end.
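The "hashing instead of full cloning" idea above could be sketched roughly as follows. This is a hedged, standalone illustration (not the pallet's actual types): `fingerprint` and `was_modified` are hypothetical helpers, the inherent data is modeled as a plain `Vec<u32>`, and `DefaultHasher` stands in for whatever hasher the runtime would actually use. Comparing two cheap fingerprints avoids cloning the whole structure just to check whether sanitization changed it (with the usual caveat that hash equality is probabilistic, not a proof of equality).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Collapse any `Hash`-implementing snapshot into a single u64 fingerprint.
fn fingerprint<T: Hash>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}

/// Compare cheap fingerprints instead of cloning the full structure
/// and comparing it field by field after processing.
fn was_modified(before: &Vec<u32>, after: &Vec<u32>) -> bool {
    fingerprint(before) != fingerprint(after)
}

fn main() {
    let original = vec![1, 2, 3];
    let mut processed = original.clone();
    assert!(!was_modified(&original, &processed));
    processed.pop(); // pretend sanitization dropped an element
    assert!(was_modified(&original, &processed));
}
```

The same shape would work for the byte-counting variant: accumulate lengths while filtering and compare totals at the end, skipping the deep comparison entirely.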
	let mut eligible: BTreeMap<ParaId, BTreeSet<CoreIndex>> = BTreeMap::new();
	let mut total_eligible_cores = 0;

	for (core_idx, para_id) in Self::eligible_paras(&occupied_cores) {
Not sure I follow the reasoning.
	pub(crate) fn eligible_paras<'a>(
		occupied_cores: &'a BTreeSet<CoreIndex>,
	) -> impl Iterator<Item = (CoreIndex, ParaId)> + 'a {
		scheduler::ClaimQueue::<T>::get().into_iter().filter_map(|(core_idx, queue)| {
Not for this PR, but for the future: let's aim to have a proper API for modules. Accessing another pallet's storage directly is a code smell: we apparently don't have clear responsibilities, nor an API that is sufficient to communicate with that component.
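The shape of the `eligible_paras` helper in the diff above can be illustrated standalone. This is a simplified sketch, not the pallet code: `CoreIndex`/`ParaId` are plain type aliases, the claim queue is an owned `BTreeMap` passed in instead of a `scheduler::ClaimQueue` storage read, and the filtering logic (skip occupied cores, yield the para at the front of each remaining queue) is my reading of the snippet.

```rust
use std::collections::{BTreeMap, BTreeSet, VecDeque};

type CoreIndex = u32;
type ParaId = u32;

/// Sketch of an `eligible_paras`-style helper: walk the claim queue and,
/// for each core that is not currently occupied, yield the para at the
/// front of that core's queue.
fn eligible_paras<'a>(
    claim_queue: BTreeMap<CoreIndex, VecDeque<ParaId>>,
    occupied_cores: &'a BTreeSet<CoreIndex>,
) -> impl Iterator<Item = (CoreIndex, ParaId)> + 'a {
    claim_queue.into_iter().filter_map(move |(core_idx, queue)| {
        if occupied_cores.contains(&core_idx) {
            return None; // core is busy, its claim is not eligible yet
        }
        queue.front().copied().map(|para_id| (core_idx, para_id))
    })
}

fn main() {
    let mut claim_queue = BTreeMap::new();
    claim_queue.insert(0, VecDeque::from(vec![100, 101]));
    claim_queue.insert(1, VecDeque::from(vec![200]));
    let occupied: BTreeSet<CoreIndex> = [1].into_iter().collect();
    let eligible: Vec<_> = eligible_paras(claim_queue, &occupied).collect();
    assert_eq!(eligible, vec![(0, 100)]);
}
```

Returning `impl Iterator` lets the caller fold or collect lazily without the helper allocating an intermediate collection.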
@@ -133,6 +127,13 @@ mod enter {
		}
	}

	// Clear the on-chain dispute data.
	fn clear_dispute_storage<Test: disputes::Config>() {
This could be a cfg(test)-only clear_dispute_storage() function in the disputes module (avoiding making storage public).
done
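The pattern agreed on here — a test-only reset helper owned by the module, so the storage items stay private — could look roughly like the following. This is a standalone sketch, not the pallet: the dispute storage is modeled as a thread-local `Vec`, and `note_dispute`/`dispute_count` are made-up accessors for illustration.

```rust
mod disputes {
    use std::cell::RefCell;

    thread_local! {
        // Module-private stand-in for the pallet's dispute storage items;
        // callers cannot touch it directly.
        static DISPUTES: RefCell<Vec<u64>> = RefCell::new(Vec::new());
    }

    pub fn note_dispute(session: u64) {
        DISPUTES.with(|d| d.borrow_mut().push(session));
    }

    pub fn dispute_count() -> usize {
        DISPUTES.with(|d| d.borrow().len())
    }

    // In the real pallet this would be `#[cfg(test)]` so it only exists in
    // test builds; shown ungated here so the sketch compiles standalone.
    pub fn clear_dispute_storage() {
        DISPUTES.with(|d| d.borrow_mut().clear());
    }
}

fn main() {
    disputes::note_dispute(1);
    disputes::note_dispute(2);
    assert_eq!(disputes::dispute_count(), 2);
    disputes::clear_dispute_storage();
    assert_eq!(disputes::dispute_count(), 0);
}
```

The point of the design is encapsulation: tests get a one-call reset without the storage items being promoted to `pub` for everyone.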
@@ -826,6 +919,8 @@ mod enter {
		// over weight
		assert_eq!(limit_inherent_data.backed_candidates.len(), 0);

		clear_dispute_storage::<Test>();
Wouldn't it be cleaner to just create a new test environment before the enter call?
-	CoreOccupied::Free => {
-	if let Some(para_id) = scheduled.get(&CoreIndex(i as _)).cloned() {
-	} else {
+	if let Some(assignment) = scheduler::Pallet::<T>::peek_claim_queue(&core_idx) {
Nit: Fetching the entire claim queue on each iteration is rather inefficient.
That's where the iterator idea came from. With this we could just zip the two iterators and loop through both just once.
Anyhow, this call is actually obsolete and only exists for backwards compatibility? If so, we can just not bother and also mark the runtime API as deprecated.
It's not really obsolete, as there's no other runtime API we can use to find out the state of a core (whether it's occupied or not). And this is still needed.
Nit: Fetching the entire claim queue on each iteration is rather inefficient.
True, I can change it to only query once
done
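The "query once" fix discussed above can be illustrated standalone. This is a hedged sketch with simplified stand-in types (a `BTreeMap` claim queue instead of the pallet's storage item, and hypothetical helpers `next_up_per_core_naive`/`next_up_per_core`): the naive shape re-reads the whole claim queue on every loop iteration, while the suggested shape takes one snapshot up front and does cheap lookups against it.

```rust
use std::collections::{BTreeMap, VecDeque};

type CoreIndex = u32;
type ParaId = u32;

/// Inefficient shape: conceptually re-reads the whole claim queue for every
/// core. The "storage read" is modeled here as a full clone of the map.
fn next_up_per_core_naive(
    claim_queue: &BTreeMap<CoreIndex, VecDeque<ParaId>>,
    n_cores: CoreIndex,
) -> Vec<Option<ParaId>> {
    (0..n_cores)
        .map(|core_idx| {
            let snapshot = claim_queue.clone(); // one full read per core
            snapshot.get(&core_idx).and_then(|q| q.front().copied())
        })
        .collect()
}

/// Suggested shape: read the claim queue once, then do cheap lookups.
fn next_up_per_core(
    claim_queue: &BTreeMap<CoreIndex, VecDeque<ParaId>>,
    n_cores: CoreIndex,
) -> Vec<Option<ParaId>> {
    let snapshot = claim_queue; // single read up front
    (0..n_cores)
        .map(|core_idx| snapshot.get(&core_idx).and_then(|q| q.front().copied()))
        .collect()
}

fn main() {
    let mut cq = BTreeMap::new();
    cq.insert(0, VecDeque::from(vec![7]));
    cq.insert(2, VecDeque::from(vec![9]));
    assert_eq!(next_up_per_core(&cq, 3), vec![Some(7), None, Some(9)]);
    assert_eq!(next_up_per_core(&cq, 3), next_up_per_core_naive(&cq, 3));
}
```

Both return the same result; the difference is purely how many times the underlying map is read, which matters when each read is a runtime storage access.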
@@ -1710,7 +1710,8 @@ pub mod migrations {
	// permanent
	pallet_xcm::migration::MigrateToLatestXcmVersion<Runtime>,
	parachains_inclusion::migration::MigrateToV1<Runtime>,
-	parachains_shared::migration::MigrateToV1<Runtime>,
+	parachains_shared::migration::MigrateToV1<Runtime>,
Formatting looks off?
Looks fine to me
…=rococo --target_dir=polkadot --pallet=polkadot_runtime_parachains::paras_inherent
@alindima Command
Reported in #6161 (comment).
Fixes a bug introduced in #5461, where the claim queue would contain entries even if the validator groups storage is empty (which happens during the first session). This PR sets the claim queue core count to the minimum of the num_cores param and the number of validator groups.
TODO:
- [x] prdoc
- [x] unit test
Resolves #4776
This will enable proper core-sharing between paras, even if one of them is not producing blocks.
TODO:
Important note:
The `ttl` and `max_availability_timeouts` fields of the `HostConfiguration` are not removed in this PR, due to #64. Adding the workaround with the storage version check for every use of the active `HostConfiguration` in all runtime APIs would be insane, as it's used in almost all of them.
So even though the `ttl` and `max_availability_timeouts` fields will now be unused, they will remain part of the host configuration.
These will be removed in a separate PR once #64 is fixed. Tracked by #6067.