test: produce chunks in resharding v3 test #12196

Merged · 8 commits · Oct 11, 2024
Changes from 4 commits
5 changes: 4 additions & 1 deletion chain/chain/src/resharding/manager.rs
@@ -48,7 +48,10 @@ impl ReshardingManager {
let block_hash = block.hash();
let block_height = block.header().height();
let prev_hash = block.header().prev_hash();
if !self.epoch_manager.will_shard_layout_change(prev_hash)? {
let next_block_has_new_shard_layout =
self.epoch_manager.will_shard_layout_change(prev_hash)?
&& self.epoch_manager.is_next_block_epoch_start(block.hash())?;
Comment on lines +51 to +53
Contributor
omg

if !next_block_has_new_shard_layout {
return Ok(());
}

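For readers skimming the diff, a minimal standalone restatement of the tightened condition in `ReshardingManager` above. Plain booleans stand in for the two `EpochManager` calls and the helper name is invented; the point is that resharding handling now runs only when the shard layout will change and the next block starts the epoch in which it takes effect.

```rust
// Toy sketch of the new gating condition (names hypothetical).
fn next_block_has_new_shard_layout(
    will_shard_layout_change: bool,
    is_next_block_epoch_start: bool,
) -> bool {
    will_shard_layout_change && is_next_block_epoch_start
}

fn main() {
    // Mid-epoch block in the epoch before resharding: the layout will change,
    // but the next block does not start the new epoch yet, so nothing is done.
    assert!(!next_block_has_new_shard_layout(true, false));
    // Last block of the old epoch: both conditions hold, resharding is handled.
    assert!(next_block_has_new_shard_layout(true, true));
    // No upcoming layout change: nothing is done.
    assert!(!next_block_has_new_shard_layout(false, true));
}
```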
65 changes: 51 additions & 14 deletions chain/chain/src/stateless_validation/chunk_validation.rs
@@ -40,20 +40,27 @@ use std::time::Instant;
pub enum MainTransition {
Genesis { chunk_extra: ChunkExtra, block_hash: CryptoHash, shard_id: ShardId },
NewChunk(NewChunkData),
// TODO(#11881): this is a temporary indicator that resharding happened in the
// state transition covered by the state witness. It won't happen in production
// until the resharding release. Must be removed and replaced with proper
// resharding state transition validation.
ShardLayoutChange,
Contributor
My thinking was that the resharding would be in addition to the existing MainTransition. I think we will first need to apply the chunk (if present) and only then validate resharding.

Contributor

I guess I mean that we can have both a new chunk and resharding at the same time.

Contributor

Is that what you mean by temporary? If so, I'm happy to approve to unblock progress, but I'm still curious about your thoughts on the proper structure for this logic.

Member Author (@Longarithm, Oct 11, 2024)

Yes, this is very temporary. It is just the easiest way to skip everything related to chunk validation in our case. Made a comment.

}

impl MainTransition {
pub fn block_hash(&self) -> CryptoHash {
match self {
Self::Genesis { block_hash, .. } => *block_hash,
Self::NewChunk(data) => data.block.block_hash,
Self::ShardLayoutChange => panic!("block_hash called on ShardLayoutChange"),
}
}

pub fn shard_id(&self) -> ShardId {
match self {
Self::Genesis { shard_id, .. } => *shard_id,
Self::NewChunk(data) => data.chunk_header.shard_id(),
Self::ShardLayoutChange => panic!("shard_id called on ShardLayoutChange"),
}
}
}
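A hypothetical sketch of the structure suggested in the review thread above, where a witness can carry both a new chunk and a resharding transition, with the chunk applied first and resharding validated afterwards. None of these types or fields exist in the codebase or this PR; the merged change instead uses `ShardLayoutChange` as a temporary short-circuit.

```rust
// Illustrative only: every name below is made up to show the suggested shape.
struct NewChunkDataSketch;          // stands in for the existing NewChunkData
struct ReshardingTransitionSketch;  // whatever resharding validation would need

struct MainTransitionSketch {
    // Applied first, exactly as the existing Genesis/NewChunk transitions are today.
    chunk: Option<NewChunkDataSketch>,
    // Validated afterwards, present only when the shard layout changes.
    resharding: Option<ReshardingTransitionSketch>,
}

fn validate(transition: &MainTransitionSketch) {
    if let Some(_chunk) = &transition.chunk {
        // 1. apply the chunk and check the resulting chunk extra
    }
    if let Some(_resharding) = &transition.resharding {
        // 2. only then validate the resharding state transition
    }
}

fn main() {
    validate(&MainTransitionSketch {
        chunk: Some(NewChunkDataSketch),
        resharding: Some(ReshardingTransitionSketch),
    });
}
```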
@@ -112,7 +119,6 @@ pub fn pre_validate_chunk_state_witness(
runtime_adapter: &dyn RuntimeAdapter,
) -> Result<PreValidationOutput, Error> {
let store = chain.chain_store();
let shard_id = state_witness.chunk_header.shard_id();

// First, go back through the blockchain history to locate the last new chunk
// and last last new chunk for the shard.
@@ -122,16 +128,20 @@ pub fn pre_validate_chunk_state_witness(
// Blocks from the last last new chunk (exclusive) to the last new chunk (inclusive).
let mut blocks_after_last_last_chunk = Vec::new();

{
let last_chunk_shard_id = {
let mut block_hash = *state_witness.chunk_header.prev_block_hash();
let mut prev_chunks_seen = 0;
let mut current_shard_id = state_witness.chunk_header.shard_id();
let mut last_chunk_shard_id = current_shard_id;
loop {
let block = store.get_block(&block_hash)?;
current_shard_id = epoch_manager.get_prev_shard_id(&block_hash, current_shard_id)?;

let chunks = block.chunks();
let Some(chunk) = chunks.get(shard_id as usize) else {
let Some(chunk) = chunks.get(current_shard_id as usize) else {
return Err(Error::InvalidChunkStateWitness(format!(
"Shard {} does not exist in block {:?}",
shard_id, block_hash
current_shard_id, block_hash
)));
};
let is_new_chunk = chunk.is_new_chunk(block.header().height());
Expand All @@ -141,6 +151,7 @@ pub fn pre_validate_chunk_state_witness(
prev_chunks_seen += 1;
}
if prev_chunks_seen == 0 {
last_chunk_shard_id = current_shard_id;
blocks_after_last_chunk.push(block);
} else if prev_chunks_seen == 1 {
blocks_after_last_last_chunk.push(block);
Expand All @@ -149,12 +160,27 @@ pub fn pre_validate_chunk_state_witness(
break;
}
}
last_chunk_shard_id
};

let last_chunk_block = blocks_after_last_last_chunk.first().ok_or_else(|| {
Contributor
sanity check for myself

  • the blocks in blocks_after_last_last_chunk are in reverse chronological order
  • the blocks include the last chunk block (it's inclusive)

Error::Other("blocks_after_last_last_chunk is empty, this should be impossible!".into())
})?;
let last_chunk_shard_layout =
epoch_manager.get_shard_layout(&last_chunk_block.header().epoch_id())?;
let chunk_shard_layout = epoch_manager
.get_shard_layout_from_prev_block(state_witness.chunk_header.prev_block_hash())?;
if last_chunk_shard_layout != chunk_shard_layout {
return Ok(PreValidationOutput {
main_transition_params: MainTransition::ShardLayoutChange,
implicit_transition_params: Vec::new(),
});
}

let receipts_to_apply = validate_source_receipt_proofs(
&state_witness.source_receipt_proofs,
&blocks_after_last_last_chunk,
shard_id,
last_chunk_shard_id,
)?;
let applied_receipts_hash = hash(&borsh::to_vec(receipts_to_apply.as_slice()).unwrap());
if applied_receipts_hash != state_witness.applied_receipts_hash {
Expand All @@ -164,11 +190,8 @@ pub fn pre_validate_chunk_state_witness(
)));
}
let (tx_root_from_state_witness, _) = merklize(&state_witness.transactions);
let last_chunk_block = blocks_after_last_last_chunk.first().ok_or_else(|| {
Error::Other("blocks_after_last_last_chunk is empty, this should be impossible!".into())
})?;
let last_new_chunk_tx_root =
last_chunk_block.chunks().get(shard_id as usize).unwrap().tx_root();
last_chunk_block.chunks().get(last_chunk_shard_id as usize).unwrap().tx_root();
if last_new_chunk_tx_root != tx_root_from_state_witness {
return Err(Error::InvalidChunkStateWitness(format!(
"Transaction root {:?} does not match expected transaction root {:?}",
@@ -218,15 +241,26 @@ let epoch_id = last_chunk_block.header().epoch_id();
let epoch_id = last_chunk_block.header().epoch_id();
let congestion_info = last_chunk_block
.block_congestion_info()
.get(&shard_id)
.get(&last_chunk_shard_id)
.map(|info| info.congestion_info);
let genesis_protocol_version = epoch_manager.get_epoch_protocol_version(&epoch_id)?;
let chunk_extra =
chain.genesis_chunk_extra(shard_id, genesis_protocol_version, congestion_info)?;
MainTransition::Genesis { chunk_extra, block_hash: *last_chunk_block.hash(), shard_id }
let chunk_extra = chain.genesis_chunk_extra(
last_chunk_shard_id,
genesis_protocol_version,
congestion_info,
)?;
MainTransition::Genesis {
chunk_extra,
block_hash: *last_chunk_block.hash(),
shard_id: last_chunk_shard_id,
}
} else {
MainTransition::NewChunk(NewChunkData {
chunk_header: last_chunk_block.chunks().get(shard_id as usize).unwrap().clone(),
chunk_header: last_chunk_block
.chunks()
.get(last_chunk_shard_id as usize)
.unwrap()
.clone(),
transactions: state_witness.transactions.clone(),
receipts: receipts_to_apply,
block: Chain::get_apply_chunk_block_context(
@@ -411,6 +445,9 @@ pub fn validate_chunk_state_witness(

(chunk_extra, outgoing_receipts)
}
(MainTransition::ShardLayoutChange, _) => {
panic!("shard layout change should not be validated")
}
(_, Some(result)) => (result.chunk_extra, result.outgoing_receipts),
};
if chunk_extra.state_root() != &state_witness.main_state_transition.post_state_root {
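A self-contained toy illustrating the core change to `pre_validate_chunk_state_witness` in this file: when the backwards walk over blocks crosses a resharding boundary, the shard id from the witness must be translated to the parent shard's id, which is why the last-chunk lookups now use `last_chunk_shard_id` instead of the witness's shard id. The real code delegates the mapping to `EpochManager::get_prev_shard_id`; the split and all numbers below are made up.

```rust
use std::collections::HashMap;

fn main() {
    // Hypothetical split at the resharding boundary: parent shard 3 -> children 6 and 7.
    let parent_of: HashMap<u64, u64> = HashMap::from([(6, 3), (7, 3)]);

    // Shard layout version of each block, walking the chain from newest to oldest:
    // the two newest blocks already use the new layout, the older ones the old one.
    let layout_version_per_block = [2u32, 2, 1, 1];

    // The state witness is for child shard 7 in the new layout.
    let mut current_shard_id: u64 = 7;
    let mut prev_version = layout_version_per_block[0];

    for version in layout_version_per_block {
        if version != prev_version {
            // Crossed the boundary: translate the child shard id to its parent,
            // roughly what get_prev_shard_id does for each visited block.
            current_shard_id = parent_of[&current_shard_id];
            prev_version = version;
        }
        // ...the real loop reads this block's chunk for `current_shard_id` here...
    }

    // Chunks found before the boundary live under the parent id.
    assert_eq!(current_shard_id, 3);
    println!("last_chunk_shard_id = {current_shard_id}");
}
```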
13 changes: 13 additions & 0 deletions chain/client/src/stateless_validation/chunk_validator/mod.rs
@@ -101,6 +101,19 @@ impl ChunkValidator {
let chunk_header = state_witness.chunk_header.clone();
let network_sender = self.network_sender.clone();
let epoch_manager = self.epoch_manager.clone();
if matches!(
pre_validation_result.main_transition_params,
chunk_validation::MainTransition::ShardLayoutChange
) {
send_chunk_endorsement_to_block_producers(
&chunk_header,
epoch_manager.as_ref(),
signer,
&network_sender,
);
return Ok(());
}

// If we have the chunk extra for the previous block, we can validate the chunk without state witness.
// This usually happens because we are a chunk producer and
// therefore have the chunk extra for the previous block saved on disk.
18 changes: 13 additions & 5 deletions integration-tests/src/test_loop/tests/resharding_v3.rs
@@ -101,17 +101,25 @@ fn test_resharding_v3() {
let success_condition = |test_loop_data: &mut TestLoopData| -> bool {
let client = &test_loop_data.get(&client_handle).client;
let tip = client.chain.head().unwrap();

// Check that all chunks are included.
let block_header = client.chain.get_block_header(&tip.last_block_hash).unwrap();
assert!(block_header.chunk_mask().iter().all(|chunk_bit| *chunk_bit));

// Return true if we passed an epoch with increased number of shards.
let epoch_height =
client.epoch_manager.get_epoch_height_from_prev_block(&tip.prev_block_hash).unwrap();
assert!(epoch_height < 5);
let epoch_config = client.epoch_manager.get_epoch_config(&tip.epoch_id).unwrap();
return epoch_config.shard_layout.shard_ids().count() == expected_num_shards;
assert!(epoch_height < 6);
let prev_epoch_id =
client.epoch_manager.get_prev_epoch_id_from_prev_block(&tip.prev_block_hash).unwrap();
let epoch_config = client.epoch_manager.get_epoch_config(&prev_epoch_id).unwrap();
epoch_config.shard_layout.shard_ids().count() == expected_num_shards
};

test_loop.run_until(
success_condition,
// Give enough time to produce ~6 epochs.
Duration::seconds((6 * epoch_length) as i64),
// Give enough time to produce ~7 epochs.
Duration::seconds((7 * epoch_length) as i64),
);

TestLoopEnv { test_loop, datas: node_datas, tempdir }
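A rough timeline sketch of why the test budget grows from ~6 to ~7 epochs: the success condition now reads the shard layout of the previous epoch, so it can only fire one epoch after the layout actually changes, once chunks are being produced under the new layout. The boundary epoch height below is hypothetical.

```rust
fn main() {
    // Hypothetical epoch height at which the new shard layout takes effect.
    let new_layout_epoch_height: u64 = 4;

    for epoch_height in 0..7u64 {
        let current_uses_new_layout = epoch_height >= new_layout_epoch_height;
        // The success condition checks the previous epoch's config, so it
        // becomes true one epoch later than the layout change itself.
        let success = epoch_height > 0 && epoch_height - 1 >= new_layout_epoch_height;
        println!(
            "epoch {epoch_height}: new layout = {current_uses_new_layout}, success = {success}"
        );
    }
}
```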