Historical evm events #173
Walkthrough

The changes in this pull request primarily focus on modifying the
Force-pushed from 06d1d8b to d6df850
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (23)
packages/ciphernode/evm/tests/fixtures/emit_logs.sol (2)
Lines 9-10: Consider using explicit initialization for better clarity.

While the current initialization is valid, consider using the constructor for explicit initialization of all state variables.

```diff
- uint256 count = 0;
+ uint256 count;
+
+ constructor() {
+     _value = "";
+     count = 0;
+ }
```
Lines 20-21: Verify event emission sequence for historical event processing.

The count increment and event emission logic looks correct. This will help test the historical event processing by providing a deterministic sequence of events.

Two suggestions to enhance testing capabilities:
- Consider adding a getter for `count` to verify the state in tests
- Consider adding events for edge cases (e.g., when the same value is set multiple times)

```diff
+ function getCount() public view returns (uint256) {
+     return count;
+ }
```

packages/ciphernode/evm/src/lib.rs (1)
Line 13: LGTM! Consider adding documentation.

The expanded exports align well with the PR objectives, making necessary types public for historical event processing and state management. The addition of `EnclaveEvmEvent` and `EvmEventReaderState` supports the new block storage and event tracking functionality.

Consider adding documentation comments (`///`) above this export statement to explain the purpose of each exported type, particularly the newly added ones. This would help users understand when to use `EnclaveEvmEvent` vs `EvmEventReader`, and the role of `EvmEventReaderState` in managing historical events.

packages/ciphernode/enclave_node/src/datastore.rs (1)
Lines 10-12: Consider enhancing error handling and validation.

The function could benefit from more robust error handling:
- Add explicit validation for the database file path
- Handle actor startup failures explicitly
- Consider adding logging for initialization steps

Example improvement:

```diff
 pub fn get_sled_store(bus: &Addr<EventBus>, db_file: &PathBuf) -> Result<DataStore> {
+    if !db_file.parent().map_or(false, |p| p.exists()) {
+        return Err(anyhow::anyhow!("Parent directory for database does not exist"));
+    }
+
     let store = SledStore::new(bus, db_file)?;
     let actor = store.start();
     Ok((&actor).into())
 }
```

packages/ciphernode/data/src/snapshot.rs (1)
Line 23: Consider adding documentation about ownership semantics.

Since this is a breaking change in the API contract, it would be helpful to document why the method returns a reference and what that implies for implementors.

Consider adding documentation like this:

```diff
+    /// Returns a reference to the Repository instance owned by this type.
+    /// The repository maintains the snapshot state and implementors are responsible
+    /// for managing the repository's lifecycle.
     fn repository(&self) -> &Repository<Self::Snapshot>;
```

packages/ciphernode/enclave_node/src/ciphernode.rs (1)
Lines 48-63: Architectural improvements align well with PR objectives.

The changes effectively support historical event processing by:
- Using deploy_block to define the starting point for historical events
- Implementing chain-specific repository readers for better state isolation
- Maintaining consistency across both EnclaveSolReader and CiphernodeRegistrySol
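The chain-specific isolation mentioned above can be sketched with a hypothetical key-derivation helper. This is illustrative only: the `reader_key` name and the `evm_events/{chain_id}/{contract}` layout are assumptions, not the crate's actual API.

```rust
// Hypothetical sketch: derive a storage key for a per-chain event-reader
// repository. The key layout is an assumption for illustration only.
fn reader_key(chain_id: u64, contract: &str) -> String {
    // Scoping the key by chain id keeps reader state isolated per chain,
    // so readers tracking the same contract on two chains cannot collide.
    format!("evm_events/{}/{}", chain_id, contract)
}
```

With a scheme like this, the same contract name on two chains maps to two distinct repositories, which is the state-isolation property the review highlights.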
packages/ciphernode/router/src/repositories.rs (1)
Lines 77-82: Add documentation for historical event processing.

The implementation looks good and follows the repository pattern consistently. Consider adding documentation to explain this method's role in processing historical EVM events and caching.

```diff
+    /// Returns a repository for tracking Enclave contract events on the specified chain.
+    /// This repository stores the last processed block and processed event IDs
+    /// to ensure proper handling of both historical and live events.
     pub fn enclave_sol_reader(&self, chain_id: u64) -> Repository<EvmEventReaderState> {
```

packages/ciphernode/enclave_node/src/aggregator.rs (1)
Lines 57-61: Good architectural improvements in EnclaveSol attachment!

The changes enhance the contract interaction by:
- Using explicit contract address instead of contract reference
- Adding chain-specific repository reader
- Including deploy block information
This supports proper historical event processing and enables better event tracking from specific blocks.
Consider documenting these parameter requirements in the contract attachment methods to ensure future modifications maintain this pattern of explicit addressing and chain-specific tracking.
packages/ciphernode/evm/src/enclave_sol_reader.rs (1)
Lines 104-105: Consider enhancing the log message with start block information.

When a start block is provided, it would be helpful to include this information in the log message for better observability.

```diff
-info!(address=%contract_address, "EnclaveSolReader is listening to address");
+info!(
+    address=%contract_address,
+    start_block=?start_block,
+    "EnclaveSolReader is listening to address"
+);
```

packages/ciphernode/sortition/src/sortition.rs (2)
Lines 143-144: LGTM! Memory optimization through reference return.

The change from returning an owned `Repository` to returning a reference is a good optimization that avoids unnecessary cloning. This aligns with similar optimizations in other components like `PlaintextAggregator` and `PublicKeyAggregator`.

Consider applying similar reference-based optimizations to other methods that might be returning cloned data structures unnecessarily.
Line range hint 34-37: Consider improving error handling in address parsing.

The current implementation uses `unwrap()` on address parsing, which could panic if an invalid address is provided. Consider using proper error handling:

```diff
 let registered_nodes: Vec<Address> = self
     .nodes
     .clone()
     .into_iter()
-    // TODO: error handling
-    .map(|b| b.parse().unwrap())
+    .map(|b| b.parse().map_err(|e| anyhow!("Invalid address {}: {}", b, e)))
+    .collect::<Result<Vec<_>>>()?;
```

packages/ciphernode/router/src/context.rs (1)
Lines 189-190: LGTM: Efficient reference-based implementation.

The change to return a reference instead of cloning the repository is a good optimization that reduces unnecessary memory allocations while maintaining safety through Rust's ownership system.

Consider adding a doc comment to explicitly indicate that this returns a reference to the underlying store, which can help future maintainers understand the ownership semantics:

```diff
+    /// Returns a reference to the underlying repository store.
     fn repository(&self) -> &Repository<E3RequestContextSnapshot> {
         &self.store
     }
```
packages/ciphernode/router/src/e3_request_router.rs (2)
Line range hint 4-22: Document historical event handling in EventBuffer.

Given that this PR introduces support for historical events, it would be helpful to document how `EventBuffer` handles the ordering between historical and live events. Consider adding documentation that explains:
- How events are ordered (historical vs. live)
- Any special handling for historical events
- Potential edge cases in event ordering
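As a point of comparison (not the actual `EventBuffer` implementation), one common ordering strategy buffers live events during historical catch-up and flushes them afterwards. The sketch below illustrates that idea with plain `std` types; all names are illustrative.

```rust
// Illustrative sketch only: hold live events back while historical replay
// is in progress, then flush so consumers see one ordered stream.
#[derive(Debug)]
struct Event {
    block: u64,
}

struct OrderedEvents {
    catching_up: bool,
    live_buffer: Vec<Event>,
    delivered: Vec<Event>,
}

impl OrderedEvents {
    fn new() -> Self {
        Self { catching_up: true, live_buffer: Vec::new(), delivered: Vec::new() }
    }

    // Historical events are delivered immediately, in replay order.
    fn on_historical(&mut self, e: Event) {
        self.delivered.push(e);
    }

    // Live events are buffered until catch-up completes.
    fn on_live(&mut self, e: Event) {
        if self.catching_up {
            self.live_buffer.push(e);
        } else {
            self.delivered.push(e);
        }
    }

    // Flush buffered live events once historical replay finishes.
    fn finish_catch_up(&mut self) {
        self.catching_up = false;
        self.delivered.append(&mut self.live_buffer);
    }
}
```

Documenting whether `EventBuffer` follows a strategy like this (or interleaves by block number instead) would answer the ordering questions above.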
Line range hint 208-234: Consider improving snapshot restoration resilience.

The current snapshot restoration skips contexts when `repositories.context(&e3_id).read().await?` returns `None`. While this is acceptable as per previous learnings, consider:
- Logging when contexts are skipped to aid debugging
- Adding metrics to track snapshot restoration success rate
Example improvement:

```diff
 for e3_id in snapshot.contexts {
     let Some(ctx_snapshot) = repositories.context(&e3_id).read().await? else {
+        tracing::warn!("Skipping context restoration for e3_id: {}", e3_id);
         continue;
     };
```
packages/ciphernode/config/src/app_config.rs (2)
Lines 32-38: Remove unnecessary `clone()` call.

The `clone()` call in `deploy_block()` is unnecessary since `u64` implements `Copy`. The Option will be copied automatically.

```diff
 pub fn deploy_block(&self) -> Option<u64> {
     use Contract::*;
     match self {
-        Full { deploy_block, .. } => deploy_block.clone(),
+        Full { deploy_block, .. } => *deploy_block,
         AddressOnly(_) => None,
     }
 }
```
Lines 43-45: Add documentation for Contract fields.

The fields have been updated to use the new `Contract` type, but there's no documentation explaining the significance of deployment blocks or when to use `Full` vs `AddressOnly` variants.

```diff
 #[derive(Debug, Deserialize, Serialize)]
 pub struct ContractAddresses {
+    /// Contract address and optional deployment block for historical event processing.
+    /// Use Full variant when historical events need to be processed from a specific block.
     pub enclave: Contract,
+    /// Contract address and optional deployment block for the registry.
     pub ciphernode_registry: Contract,
+    /// Contract address and optional deployment block for the filter registry.
     pub filter_registry: Contract,
 }
```

packages/ciphernode/core/src/events.rs (1)
Line 555: Consider updating Display implementation to include entropy field.

The `Display` implementation for `TestEvent` currently only shows the `msg` field. For debugging purposes, it might be helpful to include the `entropy` field in the string representation.

```diff
 impl Display for TestEvent {
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        write!(f, "TestEvent(msg: {})", self.msg)
+        write!(f, "TestEvent(msg: {}, entropy: {})", self.msg, self.entropy)
     }
 }
```

packages/ciphernode/evm/src/ciphernode_registry_sol.rs (2)
Lines 107-108: Add documentation for new parameters in `attach` method

The `attach` method now includes new parameters `repository` and `start_block`. Updating the method documentation to explain their purposes would improve code readability and maintainability.

Lines 137-138: Add documentation for new parameters in `attach` method

The `attach` method in `CiphernodeRegistrySol` now includes new parameters `repository` and `start_block`. Consider updating the method documentation to clarify the roles of these parameters.

packages/ciphernode/evm/tests/evm_reader.rs (3)
Lines 23-41: Handle potential panic from `unwrap()` in type conversion.

In the `test_event_extractor` function, using `try_into().unwrap()` at line 35 can cause a panic if the conversion fails. To improve robustness and prevent unexpected panics during test execution, consider handling the potential error.

You could modify the code to safely handle the conversion result:

```diff
     msg: event.value,
-    entropy: event.count.try_into().unwrap(), // This prevents de-duplication in tests
+    entropy: match event.count.try_into() {
+        Ok(entropy) => entropy,
+        Err(_) => {
+            // Handle the error appropriately, e.g., return None or provide a default value
+            return None;
+        }
+    }, // This prevents de-duplication in tests
```
Lines 47-52: Refactor repetitive setup code in tests.

The code for initializing the `Anvil` instance and the provider is duplicated across multiple test functions (`evm_reader`, `ensure_historical_events`, and `ensure_resume_after_shutdown`). Refactoring this setup into a helper function will enhance maintainability and reduce code duplication.

Example of a shared setup function:

```rust
async fn setup_provider() -> Result<(AnvilInstance, ProviderWithChainId)> {
    // Create a WS provider
    // NOTE: Anvil must be available on $PATH
    let anvil = Anvil::new().block_time(1).try_spawn()?;
    let provider = WithChainId::new(
        ProviderBuilder::new()
            .on_ws(WsConnect::new(anvil.ws_endpoint()))
            .await?,
    )
    .await?;
    Ok((anvil, provider))
}
```

Then, in your test functions, you can call:

```rust
let (anvil, provider) = setup_provider().await?;
```

Also applies to: 105-110, 171-176
Line 54: Use robust synchronization instead of arbitrary sleep durations.

Relying on `sleep(Duration::from_millis(1)).await;` to wait for asynchronous operations might lead to flaky tests if the operations take longer than expected. To ensure tests are reliable, consider using proper synchronization mechanisms that await the completion of asynchronous tasks.

For example, instead of:

```rust
sleep(Duration::from_millis(1)).await;
```

You could await the processing of events:

```rust
// Wait until the expected number of events have been received
while bus.send(GetHistory).await?.len() < expected_event_count {
    sleep(Duration::from_millis(10)).await;
}
```

Or use notification channels or other synchronization primitives to signal when processing is complete.

Also applies to: 146-146, 224-224, 236-236, 250-250, 266-266
packages/ciphernode/evm/src/event_reader.rs (1)
Line 283: Optimize checkpointing frequency to reduce I/O overhead

Calling `self.checkpoint()` after processing every event may introduce significant I/O overhead due to frequent disk writes, which can impact performance.

Consider batching checkpoint operations or implementing a debounce mechanism to reduce the frequency of disk writes. For example, you could checkpoint after processing a certain number of events or at regular time intervals.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- packages/ciphernode/Cargo.lock is excluded by `!**/*.lock`
📒 Files selected for processing (23)
- packages/ciphernode/aggregator/src/plaintext_aggregator.rs (1 hunks)
- packages/ciphernode/aggregator/src/publickey_aggregator.rs (1 hunks)
- packages/ciphernode/config/src/app_config.rs (1 hunks)
- packages/ciphernode/core/src/events.rs (1 hunks)
- packages/ciphernode/data/src/snapshot.rs (1 hunks)
- packages/ciphernode/enclave_node/src/aggregator.rs (1 hunks)
- packages/ciphernode/enclave_node/src/ciphernode.rs (1 hunks)
- packages/ciphernode/enclave_node/src/datastore.rs (1 hunks)
- packages/ciphernode/evm/Cargo.toml (1 hunks)
- packages/ciphernode/evm/src/ciphernode_registry_sol.rs (2 hunks)
- packages/ciphernode/evm/src/enclave_sol.rs (2 hunks)
- packages/ciphernode/evm/src/enclave_sol_reader.rs (2 hunks)
- packages/ciphernode/evm/src/event_reader.rs (3 hunks)
- packages/ciphernode/evm/src/helpers.rs (0 hunks)
- packages/ciphernode/evm/src/lib.rs (1 hunks)
- packages/ciphernode/evm/tests/evm_reader.rs (3 hunks)
- packages/ciphernode/evm/tests/fixtures/emit_logs.sol (2 hunks)
- packages/ciphernode/keyshare/src/keyshare.rs (1 hunks)
- packages/ciphernode/router/Cargo.toml (1 hunks)
- packages/ciphernode/router/src/context.rs (2 hunks)
- packages/ciphernode/router/src/e3_request_router.rs (1 hunks)
- packages/ciphernode/router/src/repositories.rs (2 hunks)
- packages/ciphernode/sortition/src/sortition.rs (1 hunks)
💤 Files with no reviewable changes (1)
- packages/ciphernode/evm/src/helpers.rs
🧰 Additional context used
📓 Learnings (4)
packages/ciphernode/evm/src/enclave_sol.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/evm/src/enclave_sol_reader.rs:87-89
Timestamp: 2024-10-22T03:39:29.448Z
Learning: In the `ciphernode` project, specifically in `packages/ciphernode/evm/src/enclave_sol_reader.rs`, the method `EnclaveSolReader::attach` should be retained even if it directly calls `EvmEventReader::attach` without additional processing. Avoid suggesting its removal in future reviews.
packages/ciphernode/evm/src/enclave_sol_reader.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/evm/src/enclave_sol_reader.rs:87-89
Timestamp: 2024-10-22T03:39:29.448Z
Learning: In the `ciphernode` project, specifically in `packages/ciphernode/evm/src/enclave_sol_reader.rs`, the method `EnclaveSolReader::attach` should be retained even if it directly calls `EvmEventReader::attach` without additional processing. Avoid suggesting its removal in future reviews.
packages/ciphernode/router/src/e3_request_router.rs (2)
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/router/src/e3_request_router.rs:126-133
Timestamp: 2024-10-22T03:44:33.301Z
Learning: In `E3RequestRouter::handle` method in `e3_request_router.rs`, `self.repository()` cannot fail, so it's not necessary to handle errors when accessing repositories via `self.repository().repositories()`.
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/router/src/e3_request_router.rs:202-235
Timestamp: 2024-10-22T03:47:04.014Z
Learning: In `packages/ciphernode/router/src/e3_request_router.rs`, within the `E3RequestRouter::from_snapshot` method, errors during context restoration propagate upwards due to the `?` operator, and skipping contexts when `repositories.context(&e3_id).read().await?` returns `Ok(None)` is acceptable, as missing snapshots are expected.
packages/ciphernode/router/src/repositories.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/router/src/repositories.rs:40-71
Timestamp: 2024-10-22T03:41:21.226Z
Learning: In `packages/ciphernode/router/src/repositories.rs`, prefer to keep method implementations as they are if they are straightforward and maintainable, even if refactoring could reduce duplication.
🔇 Additional comments (26)
packages/ciphernode/evm/tests/fixtures/emit_logs.sol (1)
Line 5: Event signature change aligns with historical event tracking requirements.

The updated event signature with the `count` parameter will help validate the order and completeness of historical event processing, which is crucial for the PR's objective of ensuring correct historical event handling.
Let's verify if the event is properly handled in the test files:
✅ Verification successful
Event signature change is properly integrated with test infrastructure
The `ValueChanged` event with the `count` parameter is correctly handled in the test infrastructure:
- The event is properly decoded in evm_reader.rs using `decode_log_data`
- The `count` field is utilized as entropy to prevent event deduplication in tests
- The contract is properly built and its ABI is generated via the build script
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for ValueChanged event handling in test files
rg -t rust "ValueChanged.*count.*value" packages/ciphernode/
Length of output: 62
Script:
#!/bin/bash
# Let's try a broader search for ValueChanged event references
rg "ValueChanged" packages/ciphernode/ -A 3
# Also search for emit_logs contract usage
rg "emit_logs" packages/ciphernode/ -i
Length of output: 1465
Script:
#!/bin/bash
# Let's examine the event handling implementation in evm_reader.rs
rg -A 10 "EmitLogs::ValueChanged" packages/ciphernode/evm/tests/evm_reader.rs
# Also check the test contract build script
cat packages/ciphernode/evm/scripts/build_fixtures.sh
Length of output: 797
packages/ciphernode/router/Cargo.toml (1)
14-14
: LGTM! Verify the EVM package path.
The addition of the EVM dependency aligns with the PR objectives for implementing historical EVM event processing.
Let's verify the EVM package exists at the specified path:
✅ Verification successful
Dependency path verified successfully
The EVM package exists at the expected path `packages/ciphernode/evm`, and its Cargo.toml confirms the package name as "evm", validating the relative-path dependency declaration.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the existence of the EVM package and its Cargo.toml
# Expected: The package should exist at ../evm relative to the router package
# Check if the EVM package directory exists
if [ -f "packages/ciphernode/evm/Cargo.toml" ]; then
echo "✓ EVM package found at the specified path"
# Show the package name to confirm it's the correct package
grep "name" packages/ciphernode/evm/Cargo.toml
else
echo "✗ EVM package not found at packages/ciphernode/evm/"
fi
Length of output: 200
packages/ciphernode/evm/Cargo.toml (2)
Line 11: LGTM: Workspace dependency additions are appropriate.

The addition of `async-trait` and `serde` as workspace dependencies aligns well with the PR objectives:
- `async-trait` will support the async event processing functionality
- `serde` enables state persistence needed for event caching and block number tracking
Also applies to: 19-19
Lines 22-24: LGTM: Dev-dependency properly separated.

Good practice to add `enclave_node` as a dev-dependency since it's only needed for testing the new functionality.
packages/ciphernode/evm/src/enclave_sol.rs (3)
4-4
: LGTM: Required imports added for state management.
The new imports support the state management requirements for historical event processing.
Also applies to: 9-9
22-22
: LGTM: Correctly forwards parameters to EnclaveSolReader.
The implementation maintains proper error handling while forwarding the new parameters, which aligns with the retrieved learning about keeping this method as a pass-through.
19-20
: Verify state persistence across actor restarts.
The new parameters enable historical event processing and state management as intended. However, we should verify that the state persists correctly across actor restarts.
✅ Verification successful
Let me analyze the results and generate additional verification scripts to check state persistence implementation details.
The initial results show the repository and state structure, but we need to verify the actual persistence implementation details in the event reader to ensure proper state handling across restarts.
Based on the results, I can now provide a final response, since we have clear evidence of proper state persistence implementation through the `Snapshot`, `Checkpoint`, and `FromSnapshotWithParams` traits for `EvmEventReader`.
State persistence is properly implemented for actor restarts
The implementation includes:
- `Snapshot` trait for state serialization
- `Checkpoint` trait providing repository access
- `FromSnapshotWithParams` trait for state restoration with parameters
- Deduplication of events using a `state.ids` HashSet that persists across restarts
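The deduplication point can be illustrated with a minimal sketch. In the real actor the id set is persisted via the snapshot machinery; here it is just an in-memory `HashSet`, and the type and method names are illustrative.

```rust
use std::collections::HashSet;

// Illustrative sketch: an id set filters out events that were already
// processed, e.g. events re-delivered after a restart or during replay.
struct SeenIds {
    ids: HashSet<u64>,
}

impl SeenIds {
    fn new() -> Self {
        Self { ids: HashSet::new() }
    }

    // Returns true if the id is new and the event should be processed.
    // HashSet::insert returns false when the value was already present.
    fn first_time(&mut self, id: u64) -> bool {
        self.ids.insert(id)
    }
}
```

Because the set survives restarts (via the snapshot), a replayed historical event with a known id is dropped instead of being handled twice.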
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that the repository and start_block parameters are used consistently across the codebase
# and that state persistence is properly implemented.
# Test 1: Check for repository state loading/saving patterns
echo "Checking repository state handling patterns..."
rg -A 5 "Repository<EvmEventReaderState>"
# Test 2: Verify start_block usage in event processing
echo "Checking start_block usage in event processing..."
rg -A 5 "start_block.*Option<u64>"
# Test 3: Look for state restoration patterns
echo "Checking state restoration implementations..."
ast-grep --pattern 'fn restore($$$) { $$ }'
Length of output: 9060
Script:
#!/bin/bash
# Check for state persistence implementation in EvmEventReader
echo "Checking EvmEventReader state persistence..."
rg -A 10 "impl.*EvmEventReader"
# Look for state saving/loading patterns
echo "Checking state save/load patterns..."
rg -A 10 "fn handle.*Message"
# Check for repository usage in event processing
echo "Checking repository usage in event processing..."
rg -A 5 "repository\..*save|repository\..*load"
Length of output: 6456
packages/ciphernode/enclave_node/src/datastore.rs (1)
18-23
: 🛠️ Refactor suggestion
Ensure consistent error handling across store initialization.
The error handling should be consistent between both store types. Currently, only the sled store path handles potential errors.
Let's verify the historical event processing capabilities:
packages/ciphernode/data/src/snapshot.rs (1)
Line 23: LGTM! Improved memory efficiency by returning a repository reference.

The change from returning an owned `Repository` to returning a reference is a good optimization that avoids unnecessary cloning. This is more idiomatic Rust and aligns well with the trait's purpose of providing access to repository state.
Let's verify that all implementations have been updated to match this signature change:
✅ Verification successful
All implementations correctly return repository references - change verified

The search results confirm that all implementations of the `Checkpoint` trait have been properly updated to return `&Repository` references:
- `Sortition`: returns `&Repository<SortitionModule>`
- `Keyshare`: returns `&Repository<KeyshareState>`
- `E3RequestRouter`: returns `&Repository<E3RequestRouterSnapshot>`
- `EvmEventReader`: returns `&Repository<Self::Snapshot>`
- `E3RequestContext`: returns `&Repository<E3RequestContextSnapshot>`
- `PublicKeyAggregator`: returns `&Repository<PublicKeyAggregatorState>`
- `PlaintextAggregator`: returns `&Repository<PlaintextAggregatorState>`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all implementations of the Checkpoint trait to ensure they've been updated
# to return repository references instead of owned values
# Search for Checkpoint implementations
ast-grep --pattern 'impl$_ Checkpoint for $_'
# Search for repository method implementations
ast-grep --pattern 'fn repository(&self) -> Repository<'
Length of output: 106
Script:
#!/bin/bash
# Let's try a more comprehensive search using ripgrep to find implementations
# and repository method signatures
# Search for Checkpoint implementations
echo "=== Searching for Checkpoint implementations ==="
rg "impl.*Checkpoint.*for" -A 10
# Search for repository method implementations with different patterns
echo -e "\n=== Searching for repository method implementations ==="
rg "fn repository.*->.*Repository"
Length of output: 5757
packages/ciphernode/enclave_node/src/ciphernode.rs (1)
48-63
: Verify error handling for deploy block and chain ID access.
Please ensure proper error handling exists for:
- Cases where deploy_block might not be available
- Scenarios where chain ID retrieval from provider could fail
✅ Verification successful
Let me generate additional scripts to check for error handling patterns more thoroughly.
Based on the gathered information, I can now provide the final response:
Error handling for deploy_block and chain_id is properly implemented
The code shows appropriate error handling:
- `deploy_block()` returns an `Option<u64>`, which safely handles cases where the deploy block is not available
- Chain ID retrieval is handled through the Result type and propagated with the `?` operator
- The `attach` functions properly propagate errors through the async Result type
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for error handling patterns around deploy_block and chain_id
echo "Checking deploy_block error handling:"
rg -A 5 "deploy_block"
echo "Checking chain_id error handling:"
rg -A 5 "get_chain_id"
# Look for potential error handling implementations
ast-grep --pattern 'Result<$_>'
Length of output: 7638
Script:
#!/bin/bash
# Check implementation of Contract and error handling around deploy_block
ast-grep --pattern 'impl Contract {
$$$
}'
# Check implementation of RootProvider and error handling around chain_id
ast-grep --pattern 'impl$_ RootProvider$_ {
$$$
}'
# Look for error types and handling
rg -A 3 "type Error = "
rg -A 3 "type Result"
# Check attach method implementations
ast-grep --pattern 'async fn attach($$$) -> Result<$_> {
$$$
}'
Length of output: 20025
packages/ciphernode/router/src/repositories.rs (2)
5-5
: LGTM!
The import is correctly placed and properly used by the new repository methods.
77-88
: Verify chain ID validation.
Both methods accept a chain ID parameter which is crucial for multi-chain support. We should ensure that invalid chain IDs are properly handled either here or in the calling code.
Let's check how chain IDs are validated:
packages/ciphernode/enclave_node/src/aggregator.rs (3)
52-52
: Whitespace change noted.
63-68
: LGTM! Consistent implementation with EnclaveSol.
The CiphernodeRegistrySol attachment follows the same robust pattern as EnclaveSol, properly supporting historical event processing with chain-specific repository reader and deploy block tracking.
57-68
: Verify repository reader implementations.
Let's ensure the repository readers are properly implemented for historical event processing.
✅ Verification successful
Let me gather more information about the event processing implementation since the previous results don't give us the complete picture.
Based on the gathered information, I can now provide a final response as the implementation details are clear.
Repository readers are correctly implemented for historical event processing
The implementation shows proper handling of historical events with:
- Repository readers correctly scoped per chain ID in packages/ciphernode/router/src/repositories.rs
- The `EvmEventReader` implementation includes:
  - State tracking with `last_block` in `EvmEventReaderState`
  - Historical event processing using `deploy_block` as the starting point
  - Proper error handling and logging for historical events
  - Seamless transition to live event processing after historical sync
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify repository reader implementations
# Check for reader implementations
echo "Checking repository reader implementations..."
rg -A 5 "fn \w+_reader"
# Check for historical event processing in readers
echo "Checking historical event processing..."
ast-grep --pattern 'impl $_ {
$$$
fn process_historical_events($$$) {
$$$
}
$$$
}'
Length of output: 1844
Script:
#!/bin/bash
# Check for EvmEventReaderState implementation and usage
echo "Checking EvmEventReaderState implementation..."
rg -A 10 "struct EvmEventReaderState"
# Check for historical block processing in EVM related code
echo "Checking historical block processing..."
rg -A 10 "deploy_block|historical|last_block"
# Look for event processing implementations
echo "Checking event processing implementations..."
ast-grep --pattern 'impl $_ {
$$$
async fn process_events($$$) {
$$$
}
$$$
}'
Length of output: 13339
packages/ciphernode/evm/src/enclave_sol_reader.rs (2)
1-1
: LGTM: Import changes are appropriate.
The new imports align with the functionality changes, adding support for repository state management and enhanced logging.
Also applies to: 9-9, 11-11
91-92
: LGTM: Method signature and implementation changes align with PR objectives.
The changes effectively support historical event processing by introducing repository state management and configurable start block.
Also applies to: 94-102
packages/ciphernode/router/src/context.rs (1)
146-146
: LGTM: Appropriate handling of repository cloning.
The implementation correctly clones the repository before converting it into Repositories, maintaining proper ownership semantics while aligning with the new reference-based repository access pattern.
packages/ciphernode/keyshare/src/keyshare.rs (1)
`88-89`: LGTM! Efficient optimization of repository access.
The change from returning a cloned `Repository<KeyshareState>` to returning a reference is a good optimization that reduces unnecessary memory allocation and copying.
Let's verify that this pattern is consistently applied across other implementations:
✅ Verification successful
LGTM! The repository method implementation is consistent across the codebase.
The verification confirms that all `repository` method implementations across the codebase follow the same pattern, returning references to their respective Repository types:
- `packages/ciphernode/sortition/src/sortition.rs`: `&Repository<SortitionModule>`
- `packages/ciphernode/keyshare/src/keyshare.rs`: `&Repository<KeyshareState>`
- `packages/ciphernode/evm/src/event_reader.rs`: `&Repository<Self::Snapshot>`
- `packages/ciphernode/aggregator/src/publickey_aggregator.rs`: `&Repository<PublicKeyAggregatorState>`
- `packages/ciphernode/aggregator/src/plaintext_aggregator.rs`: `&Repository<PlaintextAggregatorState>`
- `packages/ciphernode/router/src/e3_request_router.rs`: `&Repository<E3RequestRouterSnapshot>`
- `packages/ciphernode/router/src/context.rs`: `&Repository<E3RequestContextSnapshot>`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for consistent repository method implementations
# Expect: All repository methods should return references

# Search for repository method implementations
ast-grep --pattern 'fn repository(&self) -> $return_type {
  $$$
}'
```
Length of output: 69
Script:
```bash
#!/bin/bash
# Let's try a different approach using ripgrep to find repository method implementations
rg -U "fn repository.*\{[\s\S]*?\}" --type rust -A 2
```
Length of output: 2302
packages/ciphernode/aggregator/src/plaintext_aggregator.rs (1)
`239-240`: Approve memory optimization, verify API compatibility.
The change to return a reference instead of cloning the repository is a good optimization. However, ensure that all consumers of this API are updated to handle references instead of owned values.
Let's verify similar changes across the codebase:
packages/ciphernode/aggregator/src/publickey_aggregator.rs (2)
Line range hint `1-265`: Verify alignment with historical event processing requirements.
Based on the PR objectives about handling historical events and caching processed event IDs, this implementation might need additional functionality. Consider:
- Adding a mechanism to track the last processed block
- Implementing event ID caching in the state
- Adding logic to handle historical event replay
This would help fulfill the requirements mentioned in issue #171 about processing historical events correctly.
```bash
#!/bin/bash
# Description: Check for historical event processing implementation across the codebase

# Test 1: Look for block tracking implementation
rg "last_processed_block|block_number"

# Test 2: Check for event caching mechanisms
rg "event.*cache|processed.*events"
```
`264-265`: Verify thread safety implications of the reference return.
While changing `repository` to return a reference instead of a clone improves memory efficiency, we need to ensure this doesn't introduce potential thread safety issues in the actor model. The `Repository` type should be thread-safe and implement appropriate synchronization mechanisms.
✅ Verification successful
Based on the shell script results, I can now provide a definitive response about the thread safety implications:
The reference return in `repository()` is thread-safe and correctly implemented.
The codebase analysis reveals:
- `Repository<T>` is a thread-safe wrapper around `DataStore` with a phantom type parameter
- The actor model is properly maintained: `PublicKeyAggregator` owns its store instance
- The repository method only provides read access through references
- All state mutations happen through actor message handling
- The underlying `DataStore` is designed for concurrent access, with proper scoping mechanisms
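The ownership argument above can be illustrated with a std-only sketch; every name here (`DataStore` included) is a simplified stand-in for the project's actual types, not its real API:

```rust
use std::marker::PhantomData;
use std::sync::Arc;

// Stand-in for the project's DataStore: a cheaply clonable, shared handle.
#[derive(Clone)]
pub struct DataStore(pub Arc<String>);

// Typed wrapper with a phantom parameter, mirroring `Repository<T>`.
// Because `Arc` is Send + Sync, the wrapper is too (for Send + Sync `T`),
// so handing out `&Repository<T>` from an actor cannot introduce data races:
// callers get read access only, and mutations stay in message handlers.
pub struct Repository<T> {
    pub store: DataStore,
    _marker: PhantomData<T>,
}

impl<T> Repository<T> {
    pub fn new(store: DataStore) -> Self {
        Self { store, _marker: PhantomData }
    }
}

pub struct KeyshareState;

pub struct Aggregator {
    repo: Repository<KeyshareState>,
}

impl Aggregator {
    // Returning a reference avoids cloning the handle on every call.
    pub fn repository(&self) -> &Repository<KeyshareState> {
        &self.repo
    }
}

// Compile-time proof that the wrapper stays Send + Sync.
pub fn assert_send_sync<T: Send + Sync>() {}

fn main() {
    assert_send_sync::<Repository<KeyshareState>>();
    let agg = Aggregator {
        repo: Repository::new(DataStore(Arc::new("db".to_string()))),
    };
    assert_eq!(*agg.repository().store.0, "db");
}
```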
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check if Repository implements Send/Sync traits and verify its usage pattern
# across the codebase

# Test 1: Check if Repository implements Send/Sync
rg -A 5 "impl.*Repository"

# Test 2: Check for potential concurrent access patterns
rg "repository\(\)" -A 5
```
Length of output: 640
Script:
```bash
#!/bin/bash
# Let's try a more targeted approach to verify thread safety

# Check Repository type definition and implementations
rg "struct Repository" -A 10

# Check where Repository is used in method signatures
rg "Repository<.*>" -A 3

# Look for any Send/Sync trait bounds or implementations
rg "(\+\s*Send|\+\s*Sync)" -A 2

# Check for any concurrent access patterns in the aggregator
ast-grep --pattern 'impl PublicKeyAggregator {
  $$$
  async fn $_($$$) {
    $$$self.store$$$
  }
}'
```
Length of output: 17801
packages/ciphernode/router/src/e3_request_router.rs (1)
`193-194`: LGTM! Performance improvement by returning a reference.
The change to return a reference instead of cloning the repository is a good optimization. This aligns with similar changes made in other components and is safe since `self.repository()` cannot fail.
packages/ciphernode/config/src/app_config.rs (1)
`13-45`: Verify deployment block usage for historical events.
The changes enable storing deployment blocks, which is crucial for historical event processing. Let's verify that this information is properly utilized in the event processing logic.
✅ Verification successful
Deploy block information is properly integrated with historical event processing
The verification shows that:
- The `deploy_block` is correctly passed from contract configurations to the event readers in both `aggregator.rs` and `ciphernode.rs`
- The event reader implementation properly uses this as the `start_block` parameter when creating historical event filters
- The historical event fetching logic uses `start_block.unwrap_or(0)` as the starting point for the historical filter, ensuring no events are missed while maintaining efficiency
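That fallback can be sketched in one line; the helper name below is hypothetical, and only the `unwrap_or(0)` logic comes from the code under review:

```rust
// deploy_block comes from the chain's contract config; None means the
// deployment block is unknown.
fn historical_start(deploy_block: Option<u64>) -> u64 {
    // Mirrors `start_block.unwrap_or(0)`: with no known deployment block,
    // scan from genesis so no events can be missed.
    deploy_block.unwrap_or(0)
}

fn main() {
    assert_eq!(historical_start(Some(1_234)), 1_234);
    assert_eq!(historical_start(None), 0);
}
```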
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify that deployment blocks are used in event processing logic

# Check for event processing implementations that should use deploy_block
echo "Checking event processing implementations..."
rg -l "deploy_block" --type rust

# Look for EvmEventReader usage patterns
echo "Checking EvmEventReader usage..."
ast-grep --pattern 'struct EvmEventReader {
  $$$
}'

# Check for historical event processing logic
echo "Checking historical event processing..."
rg -A 5 "historical.*events?" --type rust
```
Length of output: 3119
Script:
```bash
#!/bin/bash
# Let's check how deploy_block is used in the event processing logic

# Check deploy_block usage in aggregator and ciphernode
echo "Checking deploy_block usage in event processing..."
rg -A 5 "deploy_block" packages/ciphernode/enclave_node/src/aggregator.rs packages/ciphernode/enclave_node/src/ciphernode.rs

# Check event reader implementation for block number handling
echo "Checking event reader implementation..."
rg -A 10 "struct EvmEventReader" packages/ciphernode/evm/src/event_reader.rs

# Check how historical event fetching is implemented
echo "Checking historical event fetching implementation..."
rg -A 10 "fetch_historical" packages/ciphernode/evm/src/event_reader.rs
```
Length of output: 3411
Script:
```bash
#!/bin/bash
# Let's check the historical event fetching implementation more thoroughly

# Search for the fetch_historical method implementation
echo "Checking fetch_historical implementation..."
rg -B 5 -A 15 "async.*fetch_historical" packages/ciphernode/evm/src/event_reader.rs

# Check how start_block is used in event reader
echo "Checking start_block usage..."
rg -B 2 -A 5 "start_block" packages/ciphernode/evm/src/event_reader.rs

# Check contract attachment implementation
echo "Checking contract attachment..."
rg -B 2 -A 10 "attach.*provider.*deploy_block" packages/ciphernode/evm/src/
```
Length of output: 3087
packages/ciphernode/core/src/events.rs (1)
`555-555`: Verify relationship between TestEvent changes and PR objectives.
The PR objectives focus on processing historical and live EVM events, but the changes only show the addition of an `entropy` field to `TestEvent`. Could you clarify how this change relates to the EVM event processing goals mentioned in the PR description?
✅ Verification successful
The `entropy` field in `TestEvent` is directly related to EVM event processing testing.
`TestEvent` is actively used in EVM event processing tests (`packages/ciphernode/evm/tests/evm_reader.rs`), where:
- It's used to convert EVM contract events into enclave events for testing
- The `entropy` field specifically prevents event deduplication in tests by using the event's count: `entropy: event.count.try_into().unwrap()`
- It helps verify both historical and live event processing by tracking messages from EVM events
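A std-only sketch of why the counter matters for deduplication; the `event_id` hash below is a hypothetical stand-in for however the reader actually derives event IDs:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Two test events with the same message would hash to the same ID, and the
// reader's dedup set would silently drop the second one; folding a per-event
// counter into the struct keeps their IDs distinct.
#[derive(Hash)]
struct TestEvent {
    msg: String,
    entropy: u64,
}

fn event_id(e: &TestEvent) -> u64 {
    let mut h = DefaultHasher::new();
    e.hash(&mut h);
    h.finish()
}

fn main() {
    let first = TestEvent { msg: "hello".into(), entropy: 0 };
    let repeat = TestEvent { msg: "hello".into(), entropy: 1 };
    let mut seen = HashSet::new();
    assert!(seen.insert(event_id(&first)));
    assert!(seen.insert(event_id(&repeat))); // survives dedup thanks to entropy
}
```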
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Search for TestEvent usage in test files to understand its role
# in EVM event processing
echo "Searching for TestEvent usage in test files..."
rg -l "TestEvent" --type rust --type test

echo "Searching for EVM-related test scenarios using TestEvent..."
rg -A 5 "TestEvent.*entropy.*EVM" --type rust --type test
```
Length of output: 391
Script:
```bash
#!/bin/bash
# Let's try a different approach to find TestEvent usage and its relationship with EVM

# Search for TestEvent struct definition and usage
echo "Searching for TestEvent definition and usage..."
rg -A 10 "struct TestEvent" --type rust

# Search for EVM-related code near TestEvent
echo "Searching for EVM mentions near TestEvent..."
rg -B 5 -A 5 "TestEvent" --type rust | rg -B 5 -A 5 -i "evm"

# Search for test modules using TestEvent
echo "Searching for test modules..."
rg -A 5 "mod test" --type rust | rg -A 10 "TestEvent"
```
Length of output: 5429
packages/ciphernode/evm/src/event_reader.rs (1)
`189-189`: Verify event continuity between historical and live streams.
Setting `from_block` to `BlockNumberOrTag::Latest` in the live event subscription may result in missing events that occur between the last processed historical block and the current latest block at the time of subscription.
Ensure that the live event subscription starts from the last block number processed during the historical fetch to prevent any gaps in event processing. This can be achieved by adjusting the `from_block` parameter:

```diff
- .from_block(BlockNumberOrTag::Latest);
+ .from_block(BlockNumberOrTag::Number(last_processed_block.into()));
```

Replace `last_processed_block` with the actual variable that holds the last block number from the historical processing phase.
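The continuity rule suggested above can be sketched as a pure function; the helper name and the `+ 1` resume policy are illustrative assumptions, not code from this PR:

```rust
// Decide where the live subscription should start.
// `last_processed` is the highest block seen during the historical fetch
// (None if no historical events were found); `latest` is the chain head.
fn live_from_block(last_processed: Option<u64>, latest: u64) -> u64 {
    match last_processed {
        // Resume right after the last historical block: no gap, no replay.
        Some(n) => n + 1,
        // Nothing historical was processed: starting at the head is safe.
        None => latest,
    }
}

fn main() {
    assert_eq!(live_from_block(Some(100), 120), 101);
    assert_eq!(live_from_block(None, 120), 120);
}
```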
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (8)
packages/ciphernode/evm/src/ciphernode_registry_sol.rs (3)
Line range hint `71-89`: Enhance error logging and fix typo. Consider these improvements:
- Include error details in the log messages for better debugging
- Fix the typo in the trace message ("buut" → "but")

```diff
- error!("Error parsing event CiphernodeAdded after topic was matched!");
+ error!(?data, "Failed to parse CiphernodeAdded event: invalid log data");
- error!("Error parsing event CiphernodeRemoved after topic was matched!");
+ error!(?data, "Failed to parse CiphernodeRemoved event: invalid log data");
- "Unknown event was received by Enclave.sol parser buut was ignored"
+ "Unknown event was received by Enclave.sol parser but was ignored"
```
`105-120`: Enhance logging for better observability. Consider adding more detailed logging to help with debugging and monitoring:
- Log the `start_block` when provided
- Add debug logs for state tracking

```diff
- info!(address=%contract_address, "EnclaveSolReader is listening to address");
+ info!(
+     address=%contract_address,
+     ?start_block,
+     "EnclaveSolReader started listening to address"
+ );
```
`143-143`: Clarify the TODO comment about writer implementation. The TODO comment lacks context. If a writer implementation is planned, consider:
- Adding more details about requirements
- Creating a GitHub issue to track this enhancement
Would you like me to help create a GitHub issue to track the writer implementation?
packages/ciphernode/config/src/app_config.rs (4)
`13-21`: Add documentation for the Contract enum. This public enum is a key part of the contract configuration. Please add documentation explaining its purpose and variants.

```diff
+/// Represents an Ethereum contract configuration.
+/// - `Full`: Contains both the contract address and optional deployment block.
+/// - `AddressOnly`: Contains just the contract address.
 #[derive(Debug, Deserialize, Serialize, PartialEq)]
 #[serde(untagged)]
 pub enum Contract {
     Full {
         address: String,
         deploy_block: Option<u64>,
     },
     AddressOnly(String),
 }
```
`32-38`: Optimize clone usage in deploy_block method. The `clone()` call is unnecessary since `Option<u64>` implements `Copy`.

```diff
 pub fn deploy_block(&self) -> Option<u64> {
     use Contract::*;
     match self {
-        Full { deploy_block, .. } => deploy_block.clone(),
+        Full { deploy_block, .. } => *deploy_block,
         AddressOnly(_) => None,
     }
 }
```
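A std-only mirror of the suggested fix (serde derives omitted; the enum shape follows the snippet above, everything else is illustrative):

```rust
// `Option<u64>` is Copy, so dereferencing is enough and `.clone()` is noise.
enum Contract {
    Full {
        address: String,
        deploy_block: Option<u64>,
    },
    AddressOnly(String),
}

impl Contract {
    fn deploy_block(&self) -> Option<u64> {
        match self {
            Contract::Full { deploy_block, .. } => *deploy_block,
            Contract::AddressOnly(_) => None,
        }
    }
}

fn main() {
    let full = Contract::Full {
        address: "0xabc".to_string(),
        deploy_block: Some(42),
    };
    let bare = Contract::AddressOnly("0xdef".to_string());
    assert_eq!(full.deploy_block(), Some(42));
    assert_eq!(bare.deploy_block(), None);
}
```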
`43-45`: Add documentation for ContractAddresses fields. These public fields represent critical contract configurations. Please add documentation explaining their purpose.

```diff
 pub struct ContractAddresses {
+    /// The main enclave contract address and optional deploy block
     pub enclave: Contract,
+    /// The ciphernode registry contract address and optional deploy block
     pub ciphernode_registry: Contract,
+    /// The filter registry contract address and optional deploy block
     pub filter_registry: Contract,
 }
```
Line range hint `328-352`: Add test cases for error scenarios. The current tests verify successful scenarios. Consider adding tests for:
- Invalid contract addresses
- YAML parsing errors for malformed contract configurations

Example test case:

```rust
#[test]
fn test_invalid_contract_config() {
    Jail::expect_with(|jail| {
        // Setup similar to test_config()
        jail.create_file(
            filename,
            r#"
chains:
  - name: "hardhat"
    contracts:
      enclave:
        address: "invalid_address"
        deploy_block: 123
"#,
        )?;
        let result = load_config(None);
        assert!(result.is_err());
        Ok(())
    });
}
```

packages/ciphernode/evm/src/event_reader.rs (1)
`193-210`: Enhance historical event error handling and logging. Consider adding:
- Block number information in logs for better debugging
- More granular error handling for historical events

```diff
 match provider.clone().get_logs(&historical_filter).await {
     Ok(historical_logs) => {
-        info!("Fetched {} historical events", historical_logs.len());
+        info!(
+            "Fetched {} historical events from block {} to latest",
+            historical_logs.len(),
+            start_block.unwrap_or(0)
+        );
         for log in historical_logs {
             let block_number = log.block_number;
             if let Some(event) = extractor(log.data(), log.topic0(), chain_id) {
-                trace!("Processing historical log");
+                trace!("Processing historical log from block {}", block_number);
                 processor.do_send(EnclaveEvmEvent::new(event, block_number));
             }
         }
     }
     Err(e) => {
-        warn!("Failed to fetch historical events: {}", e);
-        bus.err(EnclaveErrorType::Evm, anyhow!(e));
+        let err = match e {
+            alloy::provider::ProviderError::RpcError(e) =>
+                format!("RPC error fetching historical events: {}", e),
+            _ => format!("Failed to fetch historical events: {}", e),
+        };
+        warn!("{}", err);
+        bus.err(EnclaveErrorType::Evm, anyhow!(err));
         return;
     }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (7)
- `packages/ciphernode/config/src/app_config.rs` (3 hunks)
- `packages/ciphernode/enclave_node/src/aggregator.rs` (1 hunks)
- `packages/ciphernode/enclave_node/src/ciphernode.rs` (1 hunks)
- `packages/ciphernode/evm/src/ciphernode_registry_sol.rs` (2 hunks)
- `packages/ciphernode/evm/src/enclave_sol.rs` (2 hunks)
- `packages/ciphernode/evm/src/enclave_sol_reader.rs` (2 hunks)
- `packages/ciphernode/evm/src/event_reader.rs` (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/ciphernode/enclave_node/src/aggregator.rs
- packages/ciphernode/enclave_node/src/ciphernode.rs
- packages/ciphernode/evm/src/enclave_sol.rs
🧰 Additional context used
📓 Learnings (3)
packages/ciphernode/config/src/app_config.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#173
File: packages/ciphernode/config/src/app_config.rs:13-21
Timestamp: 2024-11-05T06:48:58.177Z
Learning: In the `packages/ciphernode/config/src/app_config.rs` file, for the `Contract` enum, the team prefers to use `String` type for `address` fields, relying on parsing to handle validation, instead of using the `Address` type.
packages/ciphernode/evm/src/enclave_sol_reader.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/evm/src/enclave_sol_reader.rs:87-89
Timestamp: 2024-10-22T03:39:29.448Z
Learning: In the `ciphernode` project, specifically in `packages/ciphernode/evm/src/enclave_sol_reader.rs`, the method `EnclaveSolReader::attach` should be retained even if it directly calls `EvmEventReader::attach` without additional processing. Avoid suggesting its removal in future reviews.
packages/ciphernode/evm/src/event_reader.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#173
File: packages/ciphernode/evm/src/event_reader.rs:270-286
Timestamp: 2024-11-05T06:51:46.701Z
Learning: In `packages/ciphernode/evm/src/event_reader.rs`, the `state.ids` HashSet is intended to grow indefinitely to store all processed event IDs for deduplication purposes until an alternative like a bloom filter is implemented.
🔇 Additional comments (9)
packages/ciphernode/evm/src/enclave_sol_reader.rs (3)
`1-1`: LGTM: Import changes align with new functionality.
The added imports support the new repository pattern and enhanced logging capabilities.
Also applies to: 9-9, 11-11
`89-92`: LGTM: Method signature enhances historical event processing.
The new parameters enable:
- State persistence through the repository
- Configurable starting point for event processing through `start_block`

These changes align well with the PR objectives for handling historical events.
`94-104`: LGTM: Implementation properly integrates new functionality.
The changes:
- Correctly pass all parameters to `EvmEventReader::attach`
- Add helpful logging for operational visibility

Note: The implementation maintains the method's role as a thin wrapper around `EvmEventReader::attach`, which aligns with the project's established patterns.
packages/ciphernode/evm/src/ciphernode_registry_sol.rs (2)
`2-2`: LGTM: Import changes align with new functionality.
The new imports support the historical event processing and actor state management requirements.
Also applies to: 6-6, 13-13
`105-141`: Verify historical event processing implementation.
The implementation looks correct, but let's verify:
- The repository usage for event caching
- The start_block handling in the event reader
✅ Verification successful
Based on the search results, I can now generate a final response as I have enough information to verify the historical event processing implementation:
Historical event processing implementation verified successfully
The implementation correctly handles historical events through:
- The `EvmEventReaderState` struct stores event IDs and the last processed block
- The repository is used for caching via `inputs[e3Id]._insert(inputHash)` and `inputCounts[e3Id]` tracking
- The `start_block` parameter is properly passed through the attach chain and used to initialize event processing
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the implementation of historical event processing

# Test 1: Check if repository is used for caching events
ast-grep --pattern 'repository.save($_)'

# Test 2: Verify start_block handling in EvmEventReader
ast-grep --pattern $'EvmEventReader::attach($_) {
  $$$
  if let Some(start_block) = $_ {
    $$$
  }
  $$$
}'
```
Length of output: 154
Script:
```bash
#!/bin/bash
# Let's try a different approach to verify the implementation

# Check how repository and start_block are used in EvmEventReader
rg -A 10 "impl EvmEventReader"

# Look for repository usage patterns
rg -A 5 "Repository.*save"

# Check for start_block usage in event processing
rg -A 5 "start_block.*Some"

# Look for event caching related code
rg -A 5 "cache.*event"
```
Length of output: 527
Script:
```bash
#!/bin/bash
# Let's examine the EvmEventReader implementation more thoroughly

# Look for the EvmEventReader struct and implementation
rg -A 20 "struct EvmEventReader"

# Check the attach method implementation
rg -A 20 "fn attach.*EvmEventReader"

# Look for state handling in the repository
rg -A 10 "EvmEventReaderState"

# Check for block number processing
rg -A 10 "block.*number"
```
Length of output: 221879
packages/ciphernode/evm/src/event_reader.rs (4)
`20-31`: LGTM! Well-structured event wrapper.
The `EnclaveEvmEvent` struct effectively encapsulates the event data with its block number, providing a clean message type for the actor system.
`37-54`: LGTM! Clean parameter and state management.
The `EvmEventReaderParams` and `EvmEventReaderState` structs provide:
- Clear separation of initialization parameters
- Proper state persistence with serialization support
- Effective tracking of processed events and block progress
`263-286`: LGTM! Proper event deduplication and state management.
The implementation correctly:
- Deduplicates events using event IDs
- Tracks the last processed block
- Persists state through checkpointing
Note: As per the provided learning, the `state.ids` HashSet is intentionally designed to grow indefinitely for deduplication purposes until an alternative like a bloom filter is implemented.
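The dedup-and-track behaviour described above can be reduced to a std-only sketch; the field and method names are assumptions based on this review, not the actual `event_reader.rs` code:

```rust
use std::collections::HashSet;

#[derive(Default)]
struct EvmEventReaderState {
    // Grows without bound by design, until something like a bloom filter lands.
    ids: HashSet<u64>,
    last_block: Option<u64>,
}

impl EvmEventReaderState {
    /// Record an event; returns true if it is new and should be forwarded.
    fn observe(&mut self, event_id: u64, block: Option<u64>) -> bool {
        if !self.ids.insert(event_id) {
            return false; // already seen, e.g. replayed during historical sync
        }
        if let Some(b) = block {
            self.last_block = Some(self.last_block.map_or(b, |prev| prev.max(b)));
        }
        true
    }
}

fn main() {
    let mut state = EvmEventReaderState::default();
    assert!(state.observe(1, Some(10)));
    assert!(!state.observe(1, Some(11))); // duplicate is dropped
    assert!(state.observe(2, Some(11)));
    assert_eq!(state.last_block, Some(11));
}
```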
`289-331`: LGTM! Clean snapshot and restoration implementation.
The snapshot and restoration logic is well-implemented with:
- Clear separation of snapshot and checkpoint traits
- Proper state restoration from snapshots
- Clean parameter handling for reconstruction
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (23)
tests/basic_integration/lib/cn2/config.yaml (1)
`2-2`: Consider using a temporary directory for tests. While adding the `data_dir` configuration aligns with the PR's objective of maintaining processed event state, using the current directory (`.`) in tests might:
- Lead to test pollution if state persists between test runs
- Cause flaky tests due to leftover state
- Make it harder to clean up test artifacts

Consider using a temporary directory that's unique per test run:

```diff
-data_dir: .
+data_dir: ${TEST_TMP_DIR:-/tmp/cn2-test}
```

This ensures better test isolation and easier cleanup. You'll need to:
- Create this directory before tests
- Clean it up after tests
- Set TEST_TMP_DIR in your test environment
packages/ciphernode/enclave/src/commands/aggregator/start.rs (2)
`19-19`: Consider enhancing the log message. While including the peer_id is valuable, the log message could be more descriptive to provide better context.

```diff
- info!("LAUNCHING AGGREGATOR {}", peer_id);
+ info!("Launching aggregator with peer ID: {}", peer_id);
```
Line range hint `16-24`: Consider adding telemetry for historical event processing. Given that this PR introduces historical event processing capabilities, consider adding additional logging or metrics to track:
- Historical event processing progress
- Block number tracking
- Event processing state transitions
This would improve observability of the historical event processing feature.
packages/ciphernode/enclave/src/commands/start.rs (1)
`11-15`: Consider enhancing address validation. The current address validation could be improved to provide more specific error messages and handle edge cases.

```diff
- let Some(address) = config.address() else {
-     return Err(anyhow!("You must provide an address"));
- };
+ let address = config.address().ok_or_else(||
+     anyhow!("Address configuration is required for EVM event processing"))?;
+
+ // Additional validation could be added here
+ if !address.starts_with("0x") {
+     return Err(anyhow!("Invalid address format: must start with '0x'"));
+ }
```

packages/ciphernode/evm/src/enclave_sol.rs (1)
`23-31`: LGTM: Implementation maintains clear separation of concerns. The asymmetric implementation where only `EnclaveSolReader::attach` receives the new parameters is correct, as historical event processing is only relevant for reading events, not writing transactions.

Consider documenting this architectural decision in the codebase to explain why `EnclaveSolWriter::attach` doesn't require the same parameters. This will help future maintainers understand the intentional asymmetry.

packages/ciphernode/enclave_node/src/aggregator.rs (1)
`70-76`: Consider implementing event caching mechanism. The changes correctly implement historical event processing. To optimize RPC node load, consider implementing:
- Event ID caching in the repository reader
- Batch processing for historical events
- Exponential backoff for RPC requests
packages/ciphernode/enclave/src/main.rs (3)
Line range hint `31-43`: Add help text for the `tag` option. The new `tag` option lacks documentation that would help users understand its purpose and usage. Consider adding a help message using the `help` attribute.

```diff
 #[arg(short, long, global = true)]
+#[arg(help = "Optional tag to identify this instance (defaults to 'default')")]
 tag: Option<String>,
```
`62-68`: Optimize the `get_tag` implementation. The current implementation can be simplified and made more efficient by removing the unnecessary `clone()` and using `unwrap_or_else`.

```diff
 pub fn get_tag(&self) -> String {
-    if let Some(tag) = self.tag.clone() {
-        tag
-    } else {
-        "default".to_string()
-    }
+    self.tag.clone().unwrap_or_else(|| "default".to_string())
 }
```
`73-81`: Clean up commented filter configurations. Consider removing the commented-out filter configurations or moving them to documentation if they serve as examples.
packages/ciphernode/p2p/src/p2p.rs (2)
`70-71`: Consider persisting the Ed25519 keypair. Currently, a new keypair is generated on each startup, which means the node's peer ID will change across restarts. This could impact peer discovery and long-term peer relationships.
Consider:
- Persisting the keypair to a secure storage
- Adding configuration options to load an existing keypair
- Implementing key rotation policies if needed
Would you like me to provide an example implementation for keypair persistence?
`77-77`: Add documentation for the return tuple. The implementation is correct, but the return value structure should be documented to help API consumers understand the meaning of each tuple element.

Add a doc comment explaining the return value:

```rust
/// Spawns a new libp2p instance
///
/// Returns a tuple containing:
/// - Addr<Self>: The actor address
/// - JoinHandle: The libp2p event loop handle
/// - String: The node's peer ID derived from its public key
```

packages/ciphernode/evm/src/ciphernode_registry_sol.rs (1)
`106-123`: Consider enhancing logging for historical event processing. While the basic logging is good, consider adding more detailed logging for historical event processing to aid in debugging and monitoring.

Add logging for the start block and tag:

```diff
-info!(address=%contract_address, "EnclaveSolReader is listening to address");
+info!(
+    address=%contract_address,
+    start_block=?start_block,
+    tag=%tag,
+    "EnclaveSolReader initialized with historical event processing"
+);
```

packages/ciphernode/p2p/src/libp2p_router.rs (4)
`60-61`: Add documentation explaining the keypair cloning. The change to accept a reference is a good improvement as it provides more flexibility to the caller. However, it would be helpful to document why the keypair needs to be cloned.

Add documentation like this:

```diff
+    /// Sets the identity for the router by cloning the provided keypair.
+    /// The keypair is cloned to ensure the router has its own independent copy
+    /// of the identity, preventing any potential conflicts with other components
+    /// that might use the same keypair.
     pub fn with_identity(&mut self, keypair: &identity::Keypair) {
```
`68-75`: Improve error handling and discovery type handling. The swarm creation logic has been improved, but there are a few areas that could be enhanced:
- The `discovery_type` parameter is only checked for "mdns" and otherwise logs a message, but still proceeds with mdns. Consider making this more explicit or removing the parameter if not needed.
- The match block could be simplified since it only has one meaningful arm.

Consider this refactoring:

```diff
-    pub fn connect_swarm(&mut self, discovery_type: String) -> Result<&Self, Box<dyn Error>> {
-        match discovery_type.as_str() {
-            "mdns" => {
+    pub fn connect_swarm(&mut self) -> Result<&Self, Box<dyn Error>> {
+        if self.swarm.is_some() {
+            return Ok(self);
+        }
```
Line range hint `1-166`: Replace unwrap calls with proper error handling. There are several unwrap calls throughout the code (particularly in the start method) that could panic at runtime. Consider implementing proper error handling.

For example, in the start method:

```diff
-        self.swarm
-            .as_mut()
-            .unwrap()
+        let swarm = self.swarm.as_mut()
+            .ok_or_else(|| io::Error::new(
+                io::ErrorKind::Other,
+                "Swarm not initialized. Call connect_swarm first."
+            ))?;
+        swarm
```
Based on the gathered information, I can now provide the final response:
Timeout configuration needs alignment across components and the builder pattern suggestion remains valid
- The 60-second idle connection timeout in the p2p router differs significantly from the 5-second timeout used consistently across all rendezvous components
- No documentation or configuration structure exists to explain this difference
- The builder pattern suggestion remains valid as the swarm configuration is complex
Recommendations:
- Consider aligning the timeout with other components (5 seconds) or document why 60 seconds is specifically needed for the p2p router
- Extract the swarm configuration into a dedicated builder struct for better maintainability
🔗 Analysis chain
Line range hint `69-97`: Consider using builder pattern for swarm configuration. The swarm configuration is quite complex with multiple chained calls. Consider extracting this into a builder struct to make it more maintainable and testable.

Additionally, verify that the idle connection timeout of 60 seconds is appropriate for your use case, especially considering the PR's focus on historical event processing.
Let me analyze the timeout settings across the codebase. I notice that while the main p2p router uses a 60-second timeout, all other components in the rendezvous package consistently use a 5-second timeout. Let me check if there's any documentation or configuration that explains this difference.
Let me check the configuration structures and any related documentation in a different way.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for any configuration or documentation about timeout settings
rg -g '!target' -g '!*.lock' -A 3 'idle.*timeout|connection.*timeout'
```

Length of output: 2347
Script:
```bash
#!/bin/bash
# Search for documentation about timeouts or configuration
rg -g '!target' -g '!*.lock' -A 5 'timeout.*config|connection.*config|network.*config' README.md docs/ packages/ciphernode/p2p/

# Also check for any constants or configuration structs
ast-grep --pattern 'const $TIMEOUT: Duration = Duration::from_secs($_);'
```

Length of output: 250
Script:
```bash
#!/bin/bash
# Look for configuration structs and their documentation
ast-grep --pattern 'struct $CONFIG {
  $$$
}'

# Search for any comments or documentation about network settings
rg -g '!target' -g '!*.lock' -B 2 -A 2 'network|connection|timeout' packages/ciphernode/p2p/
```

Length of output: 6117
tests/basic_integration/test.sh (1)
Line range hint
1-224
: Consider enhancing script robustness and documentation.A few suggestions to improve the script:
- Replace the hardcoded sleep 4 with a more robust wait mechanism that checks for actual completion.
- Add comments documenting the purpose and usage of the new --tag parameter.

Example improvement for the sleep:
-sleep 4 # wait for input deadline to pass
+# Wait for input deadline to pass (max 30 seconds)
+deadline_passed=false
+for i in {1..30}; do
+  if check_input_deadline_passed; then
+    deadline_passed=true
+    break
+  fi
+  sleep 1
+done
+if ! $deadline_passed; then
+  echo "Timeout waiting for input deadline"
+  exit 1
+fi

Note: You'll need to implement the
check_input_deadline_passed
function based on your specific requirements.

packages/ciphernode/evm/tests/evm_reader.rs (4)
Line range hint
43-99
: Add repository state verification.

While the test verifies event reception through the bus, it should also verify that events are properly stored in the repository to ensure persistence functionality.
Consider adding repository state verification after event processing:
 assert_eq!(msgs, vec!["hello", "world!"]);
+
+// Verify repository state
+let stored_events = repository.get_events("mytag").await?;
+assert_eq!(stored_events.len(), 2);
 Ok(())
102-166
: Improve test readability with const declarations.

The test effectively verifies historical and live event processing. However, the magic number in the assertion could be made more explicit.
Consider using const declarations:
+const TOTAL_EXPECTED_EVENTS: usize = 8; // 4 historical + 4 live events
+
 let history = bus.send(GetHistory).await?;
-assert_eq!(history.len(), 8);
+assert_eq!(history.len(), TOTAL_EXPECTED_EVENTS);
226-226
: Consider making sleep durations configurable.

The hardcoded sleep durations of 1ms might lead to flaky tests on slower systems. Consider making these configurable or implementing a more robust synchronization mechanism.
+const EVENT_PROCESSING_DELAY: Duration = Duration::from_millis(1);
+
-sleep(Duration::from_millis(1)).await;
+sleep(EVENT_PROCESSING_DELAY).await;

Also applies to: 238-238, 252-252, 268-268
254-257
: Reduce duplication in assertions.

The test has multiple similar assertions that could be extracted into a helper function for better maintainability.
Consider adding a helper function:
async fn assert_messages(bus: &Addr<EventBus>, expected: &[&str]) -> Result<()> {
    let msgs = get_msgs(bus).await?;
    assert_eq!(msgs, expected);
    Ok(())
}

Then use it in the test:
-let msgs = get_msgs(&bus).await?;
-assert_eq!(
-    msgs,
-    ["before", "online", "live", "events", "these", "are", "not", "lost"]
-);
+assert_messages(&bus, &["before", "online", "live", "events", "these", "are", "not", "lost"]).await?;

Also applies to: 270-273
packages/ciphernode/evm/src/event_reader.rs (2)
32-34
: Optimize event ID generation to avoid cloning.

The get_id method unnecessarily clones the entire event. Consider implementing Hash directly for the relevant fields instead of cloning the whole struct.

 pub fn get_id(&self) -> EventId {
-    EventId::hash(self.clone())
+    // Hash only the necessary fields
+    EventId::hash((&self.event, self.block))
 }
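A self-contained sketch of the field-hashing idea, using std's DefaultHasher and hypothetical stand-in types (the real EvmEvent and EventId in event_reader.rs differ):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical stand-in for the reader's event type.
struct EvmEvent {
    event: String,
    block: u64,
    payload: Vec<u8>, // large field we want to avoid cloning
}

impl EvmEvent {
    // Hash only the identifying fields instead of cloning the whole struct.
    fn get_id(&self) -> u64 {
        let mut hasher = DefaultHasher::new();
        (&self.event, self.block).hash(&mut hasher);
        hasher.finish()
    }
}

fn main() {
    let a = EvmEvent { event: "InputPublished".into(), block: 1, payload: vec![1, 2, 3] };
    let b = EvmEvent { event: "InputPublished".into(), block: 1, payload: vec![9, 9, 9] };
    // Same identifying fields yield the same id, regardless of payload size.
    assert_eq!(a.get_id(), b.get_id());
    println!("id = {}", a.get_id());
}
```

Hashing a tuple of borrowed fields avoids the full clone while keeping the id stable for duplicate detection.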
133-142
: Enhance error handling in attach method.

The error handling could be more descriptive when parsing the contract address.
-contract_address: contract_address.parse()?,
+contract_address: contract_address.parse().map_err(|e|
+    anyhow!("Failed to parse contract address '{}': {}", contract_address, e))?,
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (20)
- packages/ciphernode/core/src/events.rs (15 hunks)
- packages/ciphernode/enclave/src/commands/aggregator/mod.rs (3 hunks)
- packages/ciphernode/enclave/src/commands/aggregator/start.rs (1 hunks)
- packages/ciphernode/enclave/src/commands/mod.rs (1 hunks)
- packages/ciphernode/enclave/src/commands/password/mod.rs (2 hunks)
- packages/ciphernode/enclave/src/commands/start.rs (1 hunks)
- packages/ciphernode/enclave/src/commands/wallet/mod.rs (2 hunks)
- packages/ciphernode/enclave/src/main.rs (3 hunks)
- packages/ciphernode/enclave_node/src/aggregator.rs (4 hunks)
- packages/ciphernode/enclave_node/src/ciphernode.rs (3 hunks)
- packages/ciphernode/evm/src/ciphernode_registry_sol.rs (2 hunks)
- packages/ciphernode/evm/src/enclave_sol.rs (2 hunks)
- packages/ciphernode/evm/src/enclave_sol_reader.rs (2 hunks)
- packages/ciphernode/evm/src/event_reader.rs (3 hunks)
- packages/ciphernode/evm/tests/evm_reader.rs (3 hunks)
- packages/ciphernode/p2p/src/libp2p_router.rs (2 hunks)
- packages/ciphernode/p2p/src/p2p.rs (1 hunks)
- tests/basic_integration/lib/ag/config.yaml (1 hunks)
- tests/basic_integration/lib/cn2/config.yaml (1 hunks)
- tests/basic_integration/test.sh (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/ciphernode/enclave/src/commands/mod.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/ciphernode/core/src/events.rs
- packages/ciphernode/enclave_node/src/ciphernode.rs
🧰 Additional context used
📓 Learnings (3)
packages/ciphernode/evm/src/ciphernode_registry_sol.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/evm/src/enclave_sol_reader.rs:87-89
Timestamp: 2024-10-22T03:39:29.448Z
Learning: In the `ciphernode` project, specifically in `packages/ciphernode/evm/src/enclave_sol_reader.rs`, the method `EnclaveSolReader::attach` should be retained even if it directly calls `EvmEventReader::attach` without additional processing. Avoid suggesting its removal in future reviews.
packages/ciphernode/evm/src/enclave_sol.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/evm/src/enclave_sol_reader.rs:87-89
Timestamp: 2024-10-22T03:39:29.448Z
Learning: In the `ciphernode` project, specifically in `packages/ciphernode/evm/src/enclave_sol_reader.rs`, the method `EnclaveSolReader::attach` should be retained even if it directly calls `EvmEventReader::attach` without additional processing. Avoid suggesting its removal in future reviews.
packages/ciphernode/evm/src/enclave_sol_reader.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/evm/src/enclave_sol_reader.rs:87-89
Timestamp: 2024-10-22T03:39:29.448Z
Learning: In the `ciphernode` project, specifically in `packages/ciphernode/evm/src/enclave_sol_reader.rs`, the method `EnclaveSolReader::attach` should be retained even if it directly calls `EvmEventReader::attach` without additional processing. Avoid suggesting its removal in future reviews.
🔇 Additional comments (34)
packages/ciphernode/enclave/src/commands/aggregator/start.rs (2)
Line range hint 8-13
: LGTM: Function signature enhancement
The addition of the id
parameter enhances the aggregator's identification capabilities, which aligns well with the PR's objectives for better state management and event tracking.
16-17
: Verify consistent usage of setup_aggregator across the codebase
The setup_aggregator call has been updated to include the new id parameter. Let's verify that all other call sites have been updated accordingly.
✅ Verification successful
Let me check the implementation details of setup_aggregator
to verify the parameter consistency.
The setup_aggregator usage is consistent across the codebase
The verification shows that there is only one call site of setup_aggregator
in packages/ciphernode/enclave/src/commands/aggregator/start.rs
, and it correctly matches the function signature defined in packages/ciphernode/enclave_node/src/aggregator.rs
. Both the implementation and usage include all required parameters including the new id/tag
parameter.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for all calls to setup_aggregator to ensure consistent parameter usage
rg -p "setup_aggregator\(" --type rust
Length of output: 276
Script:
#!/bin/bash
# Get the implementation and context of setup_aggregator function
rg -p "setup_aggregator" --type rust -B 5 -A 15
Length of output: 1661
packages/ciphernode/enclave/src/commands/start.rs (2)
8-8
: LGTM: Function signature enhancement
The addition of the id
parameter aligns with the PR objectives by providing a way to identify and track specific instances of event processing, which is crucial for managing historical event state.
16-17
: Verify peer_id usage across the codebase
The enhanced logging with both address
and peer_id
improves observability. However, we should ensure consistent usage of peer_id across the codebase.
✅ Verification successful
Let me search for more context about peer_id usage in the codebase.
Peer ID logging is consistent with the component's role and context
The peer_id logging across the codebase follows consistent patterns based on the component's role:
- Ciphernode logs both address and peer_id:
LAUNCHING CIPHERNODE: ({address}/{peer_id})
- Aggregator logs just peer_id:
LAUNCHING AGGREGATOR {peer_id}
- Rendezvous components log peer_id in connection events:
Connected to {peer_id}
- Discovery components include peer_id with address:
Discovered peer {peer} {address}
The current logging in the reviewed code follows this established pattern and provides appropriate context for the ciphernode component.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistent usage of peer_id in logging statements
# Expected: Find all logging statements with peer_id to ensure consistent format
# Search for logging statements containing peer_id
rg -A 2 'peer_id.*log'
# Search for similar logging patterns in other start-up related code
rg -A 2 'LAUNCHING.*\('
Length of output: 322
Script:
#!/bin/bash
# Search for all occurrences of peer_id in logging statements with broader context
rg -i "peer.?id" -A 3 -B 3
# Search specifically for info! and debug! log statements
rg "(info|debug|warn|error)!.*peer.?id" -A 2
# Look for struct or type definitions involving peer_id
ast-grep --pattern 'struct $_ {
$$$
peer_id: $_,
$$$
}'
Length of output: 16016
packages/ciphernode/enclave/src/commands/wallet/mod.rs (2)
Line range hint 6-14
: LGTM: Debug trait addition enhances debugging capabilities
Adding the Debug trait is a good practice that will improve error messages and debugging experience.
Line range hint 25-31
: Document and verify the purpose of the unused id
parameter
The id
parameter has been added to the function signature but is not used in the implementation. While this might be intentional as part of a broader interface change, please:
- Add documentation explaining the purpose and expected usage of this parameter
- Verify if the parameter should be used in the current implementation
Let's verify other command implementations to understand the usage pattern:
✅ Verification successful
Let me gather more context about the id parameter usage in the codebase:
Let me try a different approach to understand the CLI and command structure:
Let me try one final search to understand the id parameter's origin:
The id
parameter is correctly added but needs documentation
The id
parameter comes from the CLI's global --tag
option and is used for tracing/logging purposes and node identification in the P2P network setup. While the parameter is not directly used in the wallet
command implementation, it's part of a consistent interface across all command handlers.
- Add documentation to the
execute
function explaining that theid
parameter is used for node identification and logging context - Consider adding a doc comment like:
/// @param id Node identifier from CLI tag, used for logging and P2P identification
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how the id parameter is used in other command implementations
# Expected: Find patterns of id parameter usage across commands
# Search for execute function signatures with id parameter
rg -A 5 "fn execute\(.*id: &str.*\)"
# Search for actual usage of id parameter
rg "\bid\b.*=.*id" --type rust
Length of output: 3933
Script:
#!/bin/bash
# Search for the CLI implementation to understand where id comes from
rg -l "struct.*Commands" --type rust
# Then look for the CLI argument definition
rg "id.*flag|id.*arg" --type rust
Length of output: 74
Script:
#!/bin/bash
# Look for main CLI entry point and command structure
rg -A 10 "fn main" --type rust
# Look for where Commands are defined
fd -e rs . | xargs cat | grep -A 5 "enum.*Commands"
Length of output: 9987
Script:
#!/bin/bash
# Look for the Cli struct and its implementation
rg -A 10 "struct Cli" --type rust
# Look for tracing/logging related code since we see id in log contexts
rg "id.*=.*" --type rust packages/ciphernode/enclave/src/
Length of output: 1600
packages/ciphernode/enclave/src/commands/password/mod.rs (2)
8-8
: LGTM: Debug derive addition enhances observability
Adding the Debug trait is a good practice that will help with logging and error handling, particularly useful for tracking the state of password-related commands.
Line range hint 28-35
: Critical: Unused id parameter and incomplete command tracing
The newly added id
parameter is not utilized in the implementation or forwarded to sub-commands (create
, delete
, overwrite
). This could lead to traceability gaps in the system.
Consider forwarding the id to sub-commands for consistent command tracing:
pub async fn execute(command: PasswordCommands, config: AppConfig, id: &str) -> Result<()> {
match command {
- PasswordCommands::Create { password } => create::execute(&config, password).await?,
- PasswordCommands::Delete => delete::execute(&config).await?,
- PasswordCommands::Overwrite { password } => overwrite::execute(&config, password).await?,
+ PasswordCommands::Create { password } => create::execute(&config, password, id).await?,
+ PasswordCommands::Delete => delete::execute(&config, id).await?,
+ PasswordCommands::Overwrite { password } => overwrite::execute(&config, password, id).await?,
};
Ok(())
}
Let's verify if the sub-commands are prepared for this change:
packages/ciphernode/enclave/src/commands/aggregator/mod.rs (2)
6-6
: LGTM! Adding Debug derive enhances error reporting capabilities.
The addition of the Debug trait derivation for AggregatorCommands follows Rust best practices for improved error reporting and debugging.
Line range hint 20-34
: LGTM! The execute function changes are well-structured.
The addition of the id
parameter and its propagation to start::execute
is implemented correctly, maintaining proper error handling and async patterns. This change aligns with the standardization of command execution across the codebase.
packages/ciphernode/evm/src/enclave_sol.rs (2)
4-4
: LGTM: Import additions are appropriate.
The new imports for EvmEventReaderState
and Repository
are correctly added to support the historical event processing functionality.
Also applies to: 9-9
19-21
: LGTM: Method signature changes align with historical event processing requirements.
The new parameters enable:
- State persistence through
repository
- Historical event processing control via
start_block
- Event stream identification using
tag
Let's verify the consistent usage of these new parameters across the codebase:
✅ Verification successful
Method signature changes are consistently implemented across the codebase
The verification shows that the new parameters (repository
, start_block
, and tag
) are consistently used in all relevant event reader implementations:
EvmEventReader::attach
EnclaveSolReader::attach
CiphernodeRegistrySolReader::attach
The writer components (EnclaveSolWriter
, RegistryFilterSolWriter
) correctly maintain their original signatures without these parameters as they don't need event processing state.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all calls to EnclaveSol::attach include the new parameters
# Search for calls to EnclaveSol::attach
rg -A 5 "EnclaveSol::attach"
Length of output: 561
Script:
#!/bin/bash
# Let's check the actual implementation of attach method to verify parameter usage
ast-grep --pattern 'fn attach($$$) {
$$$
}'
Length of output: 47
Script:
#!/bin/bash
# Let's try a different approach to find the attach method implementation
rg -A 10 "fn attach" packages/ciphernode/evm/src/
Length of output: 7418
packages/ciphernode/enclave_node/src/aggregator.rs (4)
32-33
: LGTM: Function signature changes are well-structured.
The addition of the tag
parameter and inclusion of String
in the return type supports the PR's objective of better state management for event processing.
64-69
: LGTM: Appropriate minimal changes to RegistryFilterSol.
The changes correctly use the contract address while maintaining the simpler attachment pattern, as RegistryFilterSol doesn't require historical event processing.
88-88
: LGTM: P2p changes enhance node identification.
The inclusion of peer_id in the return value properly supports node identification and tracking in the P2p network.
Also applies to: 100-100
53-63
: Verify historical event processing configuration.
The changes to EnclaveSol::attach
properly implement historical event processing by:
- Using repository reader for state management
- Including deploy block for historical event boundaries
- Adding tag for identification
Let's verify the contract deployment blocks are properly configured:
✅ Verification successful
Let me gather more information about the contract configuration to ensure the deploy blocks are properly handled.
Let me try a different approach to check the attach implementation.
Based on the gathered information, I can now provide the final response:
Historical event processing configuration is properly implemented
The verification confirms that:
- Deploy blocks are correctly configured in the contract configuration (
app_config.rs
) - The
EnclaveSol::attach
method properly handles the deploy block parameter asstart_block
- The implementation correctly propagates deploy blocks through the event reader chain
The changes are safe and maintain proper historical event boundaries.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify contract deployment block configuration
# Test: Search for deploy_block configuration in chain configs
rg -A 2 "deploy_block"
Length of output: 2347
Script:
#!/bin/bash
# Check how deploy blocks are used in the attach methods
ast-grep --pattern 'fn attach($$$) {
$$$
}'
Length of output: 47
Script:
#!/bin/bash
# Search for attach method implementation
rg -A 10 "impl.*EnclaveSol"
Length of output: 5956
packages/ciphernode/enclave/src/main.rs (3)
1-8
: LGTM: Imports are appropriate for the new tracing functionality.
48-60
: LGTM: Command execution changes are consistent and well-structured.
The id parameter is correctly retrieved and passed to all command handlers.
83-86
: LGTM: Tracing implementation is well-structured.
The span creation and instrumentation provide good observability for the application.
packages/ciphernode/evm/src/enclave_sol_reader.rs (5)
1-1
: LGTM: Import changes align with new functionality.
The added imports support the new repository-based event state management and enhanced logging capabilities.
Also applies to: 9-9, 11-11
91-93
: LGTM: Signature changes support historical event processing.
The new parameters align with the PR objectives:
repository
enables event state persistencestart_block
supports historical event processingtag
improves observability
95-104
: LGTM: Clean pass-through implementation.
The implementation correctly forwards all parameters to EvmEventReader::attach
, maintaining the expected behavior while supporting the new functionality.
106-106
: LGTM: Improved observability with appropriate logging.
The added info log provides valuable runtime visibility into which contract addresses are being monitored.
91-107
: Verify integration with event processing system.
While the implementation looks good, let's verify the integration with the event processing system:
✅ Verification successful
Based on the shell script results, I can now generate the final response as the implementation shows proper integration with the event processing system:
Event processing integration is properly implemented
The implementation correctly integrates with the event processing system through:
- Proper state management via
Repository<EvmEventReaderState>
with snapshot/restore capabilities - Robust event handling with duplicate detection and error logging
- Clean shutdown mechanism through oneshot channels
- Proper event forwarding to the event bus with state tracking
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the integration of EnclaveSolReader with the event processing system
# Test 1: Check if EvmEventReader properly handles the repository parameter
echo "Checking EvmEventReader implementation..."
rg -U "impl.*EvmEventReader" -A 50
# Test 2: Verify state restoration logic
echo "Checking state restoration logic..."
ast-grep --pattern 'fn restore_state($$$) {
$$$
}'
# Test 3: Check for proper error handling in event processing
echo "Checking error handling in event processing..."
rg "error!.*event" -A 5
Length of output: 18858
packages/ciphernode/p2p/src/p2p.rs (2)
68-68
: LGTM: Method signature change is well-structured.
The addition of the String return type to include the peer ID enhances the API by providing necessary identification information for P2P networking.
68-77
: Verify relationship with historical EVM events processing.
The P2P changes appear tangential to the main PR objective of processing historical EVM events. Please clarify how these P2P enhancements support the historical event processing functionality.
✅ Verification successful
Let me gather more information about the P2P event propagation to understand if it's related to EVM event distribution.
Let me check one more aspect to understand how the P2P component interacts with the enclave node and EVM events.
Based on the evidence gathered from all the searches, I can now provide a final response:
P2P changes are correctly integrated with EVM event processing
The P2P module is a core component of the distributed event processing system, where:
- The P2P network (
enclave-keygen-01
topic) facilitates communication between enclave nodes - EVM event processing is distributed across nodes as shown by the integration in
enclave_node/src/ciphernode.rs
andenclave_node/src/aggregator.rs
- The P2P changes enhance the network layer that supports the historical EVM event processing
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for EVM event processing integration
echo "Searching for EVM event processing integration..."
rg -l "EvmEventReader|historical.*event" --type rust
# Search for P2P event handling related to EVM
echo "Searching for P2P event handling related to EVM..."
rg "EnclaveEvent.*Evm|Evm.*Event" --type rust
Length of output: 5578
Script:
#!/bin/bash
# Search for P2P message types and event handling
echo "Searching for P2P message types..."
rg "Message.*enum|struct.*Message" -A 10 --type rust packages/ciphernode/p2p/
# Search for P2P topic subscriptions
echo "Searching for P2P topic subscriptions..."
rg "join_topic|subscribe.*topic" -B 5 -A 5 --type rust
# Search for connections between P2P and EVM components
echo "Searching for P2P and EVM connections..."
rg "use.*evm|use.*p2p" --type rust packages/ciphernode/
Length of output: 3922
Script:
#!/bin/bash
# Search for P2P usage in enclave node
echo "Searching for P2P initialization and usage in enclave node..."
rg -A 10 "P2p::new|P2p::start" packages/ciphernode/enclave_node/
# Search for event handling in enclave node
echo "Searching for event handling in enclave node..."
rg "handle.*Event|process.*Event" -B 5 -A 10 packages/ciphernode/enclave_node/
# Check enclave node initialization
echo "Checking enclave node initialization..."
ast-grep --pattern 'struct CipherNode {
$$$
}'
Length of output: 525
packages/ciphernode/evm/src/ciphernode_registry_sol.rs (4)
2-2
: LGTM: Import changes align with new functionality.
The new imports support historical event processing and actor-based architecture:
EvmEventReaderState
for maintaining event processing stateActor
for actix framework integrationRepository
for state persistence
Also applies to: 6-6, 13-13
Line range hint 67-89
: LGTM: Improved error handling in event extraction.
The error handling is well-implemented with:
- Clear error messages using appropriate logging levels
- Proper use of the
else
syntax for error cases - Consistent error handling pattern across different event types
111-120
: Verify historical event processing order.
The code delegates event processing to EvmEventReader::attach
, but we should verify that historical events are processed before live events.
#!/bin/bash
# Search for the implementation of historical vs live event processing order
ast-grep --pattern 'impl EvmEventReader {
$$$
async fn attach($$$) {
$$$
}
$$$
}'
128-150
: Verify if Writer implementation is needed for historical events.
The TODO comment about the Writer needs clarification:
// TODO: Writer if needed
Consider whether a Writer implementation is necessary for handling historical events or if it's out of scope for this PR.
tests/basic_integration/test.sh (2)
94-94
: LGTM: Tag parameter addition enhances node identification.
The addition of the --tag
parameter with the node's name improves the ability to track and identify cipher nodes, which is particularly useful when processing historical events.
112-112
: LGTM: Consistent tag parameter implementation.
The addition of the --tag
parameter to the aggregator launch maintains consistency with the cipher node implementation and enhances process identification.
packages/ciphernode/evm/tests/evm_reader.rs (1)
23-40
: LGTM: Well-structured event extraction helper.
The test_event_extractor
function is well-implemented with proper error handling and type conversion. It cleanly separates the event extraction logic from the test cases.
packages/ciphernode/evm/src/event_reader.rs (1)
206-230
: Implementation successfully meets PR objectives.
The code effectively implements the required functionality:
- Processes historical events before subscribing to live events
- Maintains state of processed events
- Supports configurable start block
- Handles both historical and live events appropriately
The implementation looks solid and aligns well with the PR objectives.
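The historical-first ordering and duplicate handling noted above can be sketched in miniature (hypothetical Ev type and process function, not the actual EvmEventReader internals, which work with alloy logs and an EventId):

```rust
use std::collections::HashSet;

// Hypothetical simplified event for illustration.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct Ev {
    id: u64,
    block: u64,
}

// Replay historical events from `start_block` first, then apply live
// events, skipping anything already seen (e.g. a live event that was
// also returned by the historical query).
fn process(historical: Vec<Ev>, live: Vec<Ev>, start_block: u64) -> Vec<Ev> {
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for ev in historical.into_iter().filter(|e| e.block >= start_block) {
        if seen.insert(ev.id) {
            out.push(ev);
        }
    }
    for ev in live {
        if seen.insert(ev.id) {
            out.push(ev);
        }
    }
    out
}

fn main() {
    let historical = vec![Ev { id: 1, block: 10 }, Ev { id: 2, block: 11 }];
    let live = vec![Ev { id: 2, block: 11 }, Ev { id: 3, block: 12 }];
    let processed = process(historical, live, 10);
    // Duplicate id 2 is delivered only once; order is historical-first.
    assert_eq!(processed.len(), 3);
    println!("{:?}", processed);
}
```

The `seen` set plays the role of the persisted event-id state: it is what lets a restarted reader replay history without re-emitting events that were already processed.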
Actionable comments posted: 23
🧹 Outside diff range and nitpick comments (23)
tests/basic_integration/test.sh (1)
8-19
: Add script validation and documentation.

While the case statement implementation is clean, consider these improvements for robustness:
- Add validation for script existence
- Include error handling for script execution
- Add documentation about the purpose of each script
Consider this enhancement:
 # Source the file from the same directory
+# persist.sh: Handles persistence testing scenarios
+# base.sh: Handles basic integration testing scenarios
+
+validate_scripts() {
+  for script in "persist.sh" "base.sh"; do
+    if [ ! -f "$THIS_DIR/$script" ]; then
+      echo "Error: Required script $script not found in $THIS_DIR"
+      exit 1
+    fi
+  done
+}
+
+validate_scripts
+
 case $1 in
   persist)
     "$THIS_DIR/persist.sh"
     ;;
   base)
     "$THIS_DIR/base.sh"
     ;;
   *)
+    if [ $# -gt 0 ]; then
+      echo "Invalid argument: $1"
+      echo "Usage: $0 [persist|base]"
+      exit 1
+    fi
     "$THIS_DIR/persist.sh"
     "$THIS_DIR/base.sh"
     ;;
 esac

tests/basic_integration/lib/clean_folders.sh (1)
7-7
: Consider enhancing process cleanup robustness.

For more robust process management, consider adding process termination verification:
-rm -rf "/tmp/enclave.*.pid"
+# Ensure processes are terminated before removing PID files
+for pid_file in /tmp/enclave.*.pid; do
+  if [ -f "$pid_file" ]; then
+    pid=$(cat "$pid_file")
+    if kill -0 "$pid" 2>/dev/null; then
+      echo "Warning: Process $pid is still running"
+      kill "$pid" 2>/dev/null || true
+      sleep 1
+    fi
+    rm -f "$pid_file"
+  fi
+done

packages/ciphernode/core/src/tag.rs (1)
9-12
: Improve error handling and documentation for get_tag.
.The function implementation could be enhanced in several ways:
- The error message from
expect()
should provide more context about why the lock acquisition failed
- Consider returning a reference with a guard for better performance in high-frequency scenarios
- Missing function documentation
Consider this improvement:
+/// Returns the current value of the global tag.
+///
+/// # Panics
+///
+/// Panics if another thread which holds the write lock has panicked while holding the lock.
 pub fn get_tag() -> String {
-    let tag_guard = TAG.read().expect("Failed to acquire read lock");
+    let tag_guard = TAG.read().expect("Failed to acquire tag read lock - possible deadlock detected");
     tag_guard.clone()
 }

For better performance in high-frequency scenarios, consider this alternative implementation:
pub fn get_tag_ref() -> impl std::ops::Deref<Target = String> + '_ {
    TAG.read().expect("Failed to acquire tag read lock - possible deadlock detected")
}

packages/ciphernode/data/src/repository.rs (3)
Line range hint
7-19
: Add documentation for the generic type parameter.

The
Repository<S>
struct lacks documentation explaining the purpose and constraints of the generic type parameter S
. This documentation would help users understand how to properly use this type.

Add documentation like this:
+/// A generic repository that manages storage for a specific type.
+///
+/// # Type Parameters
+/// * `S` - The type of data this repository manages. This type should implement
+///   necessary serialization traits when used with read/write operations.
 #[derive(Debug)]
 pub struct Repository<S> {
Line range hint
33-37
: Consider avoiding unnecessary cloning in From implementation.

The
From<&Repository<T>>
implementation always clones the DataStore, which might be unnecessary in some use cases. Consider if this conversion is really needed, or if it could be replaced with a reference-based solution.

Consider one of these alternatives:
- Remove this implementation if it's not strictly necessary
- Implement
AsRef<DataStore>
instead:

impl<T> AsRef<DataStore> for Repository<T> {
    fn as_ref(&self) -> &DataStore {
        &self.store
    }
}
Line range hint
47-59
: Improve documentation and trait bounds for read/write methods.

While the implementation is correct, it would benefit from:
- Documentation explaining possible error cases
- Split trait bounds for better error messages
Consider this improvement:
-impl<T> Repository<T>
-where
-    T: for<'de> serde::Deserialize<'de> + serde::Serialize,
+/// Methods for reading and writing serializable data
+impl<T> Repository<T>
+where
+    T: for<'de> serde::Deserialize<'de>,
+    T: serde::Serialize,
 {
+    /// Reads the stored value, if any.
+    ///
+    /// # Errors
+    /// Returns an error if deserialization fails or if the underlying store
+    /// encounters an error.
     pub async fn read(&self) -> Result<Option<T>> {
         self.store.read().await
     }

+    /// Writes a value to the store.
     pub fn write(&self, value: &T) {
         self.store.write(value)
     }
 }

packages/ciphernode/enclave_node/src/aggregator.rs (1)
53-73
: LGTM: Consistent historical event processing implementation.

The contract attachment implementations correctly follow the architectural pattern for historical event processing:
- EnclaveSol and CiphernodeRegistrySol include repository readers and deploy blocks for historical event tracking
- RegistryFilterSol correctly omits these as per previous discussion
This implementation aligns well with the PR objectives for processing both historical and live events.
Consider documenting this architectural pattern in the codebase to explain why some contracts require historical event tracking while others don't.
packages/ciphernode/enclave/src/main.rs (3)
Line range hint
48-61
: Pass the id
to command execution functions.

The
id
is retrieved but not utilized in the command executions. According to the AI summary, the commands have been updated to accept the tag parameter.Apply this diff to pass the tag:
match self.command { - Commands::Start => start::execute(config).await?, - Commands::Password { command } => password::execute(command, config).await?, - Commands::Aggregator { command } => aggregator::execute(command, config).await?, - Commands::Wallet { command } => wallet::execute(command, config).await?, + Commands::Start => start::execute(config, &id).await?, + Commands::Password { command } => password::execute(command, config, &id).await?, + Commands::Aggregator { command } => aggregator::execute(command, config, &id).await?, + Commands::Wallet { command } => wallet::execute(command, config, &id).await?, }
63-69
: Optimize the `get_tag` implementation.

The current implementation unnecessarily clones the tag and can be simplified using `Option`'s methods.

Apply this diff for a more idiomatic implementation:

```diff
 pub fn get_tag(&self) -> String {
-    if let Some(tag) = self.tag.clone() {
-        tag
-    } else {
-        "default".to_string()
-    }
+    self.tag.clone().unwrap_or_else(|| "default".to_string())
 }
```
74-82
: Clean up commented tracing filters.

Consider removing or documenting the commented-out filter configurations. If they serve as examples, consider moving them to documentation.
tests/basic_integration/fns.sh (1)
15-32
: Consider exporting environment variables.

Several environment variables appear unused within this script but might be used by external processes. If these variables are meant to be used by child processes, they should be explicitly exported.
```diff
-# Environment variables
-RPC_URL="ws://localhost:8545"
+# Environment variables for child processes
+export RPC_URL="ws://localhost:8545"

-PRIVATE_KEY="0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"
-CIPHERNODE_SECRET="We are the music makers and we are the dreamers of the dreams."
+export PRIVATE_KEY="0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"
+export CIPHERNODE_SECRET="We are the music makers and we are the dreamers of the dreams."

 # These contracts are based on the deterministic order of hardhat deploy
 # We _may_ wish to get these off the hardhat environment somehow?
-ENCLAVE_CONTRACT="0x9fE46736679d2D9a65F0992F2272dE9f3c7fa6e0"
-REGISTRY_CONTRACT="0xCf7Ed3AccA5a467e9e704C703E8D87F634fB0Fc9"
-REGISTRY_FILTER_CONTRACT="0xDc64a140Aa3E981100a9becA4E685f962f0cF6C9"
-INPUT_VALIDATOR_CONTRACT="0x8A791620dd6260079BF849Dc5567aDC3F2FdC318"
+export ENCLAVE_CONTRACT="0x9fE46736679d2D9a65F0992F2272dE9f3c7fa6e0"
+export REGISTRY_CONTRACT="0xCf7Ed3AccA5a467e9e704C703E8D87F634fB0Fc9"
+export REGISTRY_FILTER_CONTRACT="0xDc64a140Aa3E981100a9becA4E685f962f0cF6C9"
+export INPUT_VALIDATOR_CONTRACT="0x8A791620dd6260079BF849Dc5567aDC3F2FdC318"

 # These are random addresses for now
-CIPHERNODE_ADDRESS_1="0x2546BcD3c84621e976D8185a91A922aE77ECEc30"
-CIPHERNODE_ADDRESS_2="0xbDA5747bFD65F08deb54cb465eB87D40e51B197E"
-CIPHERNODE_ADDRESS_3="0xdD2FD4581271e230360230F9337D5c0430Bf44C0"
-CIPHERNODE_ADDRESS_4="0x8626f6940E2eb28930eFb4CeF49B2d1F2C9C1199"
+export CIPHERNODE_ADDRESS_1="0x2546BcD3c84621e976D8185a91A922aE77ECEc30"
+export CIPHERNODE_ADDRESS_2="0xbDA5747bFD65F08deb54cb465eB87D40e51B197E"
+export CIPHERNODE_ADDRESS_3="0xdD2FD4581271e230360230F9337D5c0430Bf44C0"
+export CIPHERNODE_ADDRESS_4="0x8626f6940E2eb28930eFb4CeF49B2d1F2C9C1199"
```

🧰 Tools
🪛 Shellcheck
[warning] 16-16: RPC_URL appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 18-18: PRIVATE_KEY appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 19-19: CIPHERNODE_SECRET appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 23-23: ENCLAVE_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 24-24: REGISTRY_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 25-25: REGISTRY_FILTER_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 26-26: INPUT_VALIDATOR_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 29-29: CIPHERNODE_ADDRESS_1 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 30-30: CIPHERNODE_ADDRESS_2 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 31-31: CIPHERNODE_ADDRESS_3 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 32-32: CIPHERNODE_ADDRESS_4 appears unused. Verify use (or export if used externally).
(SC2034)
🪛 Gitleaks
18-18: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
tests/basic_integration/base.sh (3)
3-4
: Enhance error handling with additional shell options.

Add `-o pipefail` to catch errors in pipeline commands.

```diff
-set -eu  # Exit immediately if a command exits with a non-zero status
+set -euo pipefail  # Exit on error, undefined vars, and pipeline failures
```
66-66
: Replace hardcoded sleep with dynamic waiting.

Using a fixed sleep duration might be unreliable.

```diff
-sleep 4 # wait for input deadline to pass
+# Wait for input deadline with verification
+MAX_ATTEMPTS=40
+ATTEMPT=0
+while [[ $ATTEMPT -lt $MAX_ATTEMPTS ]]; do
+  if yarn e3:status --network localhost --e3-id 0 | grep -q "INPUT_DEADLINE_PASSED"; then
+    break
+  fi
+  sleep 1
+  ((ATTEMPT++))
+done
+
+if [[ $ATTEMPT -eq $MAX_ATTEMPTS ]]; then
+  echo "Error: Input deadline did not pass within expected time"
+  cleanup 1
+fi
```
1-2
: Add documentation and logging mechanism.

The script would benefit from proper documentation and structured logging.

```diff
 #!/usr/bin/env bash
+# Integration test script for EVM and cipher nodes
+#
+# This script performs end-to-end testing of:
+# - EVM node setup and operation
+# - Cipher node deployment and configuration
+# - Committee creation and key generation
+# - Mock encryption and event publishing
+# - Result verification
+#
+# Usage: ./base.sh
+# Environment variables required:
+# - CIPHERNODE_SECRET: Secret for cipher nodes
+# - PRIVATE_KEY: Private key for operations
+# - CIPHERNODE_ADDRESS_[1-4]: Addresses for cipher nodes
+
+# Configure logging
+LOG_FILE="integration_test.log"
+log() {
+    local level=$1
+    shift
+    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*" | tee -a "$LOG_FILE"
+}
```

tests/basic_integration/persist.sh (1)
1-10
: Enhance script robustness and documentation.

Consider the following improvements:

- Add `set -o pipefail` to catch errors in pipelines
- Add script documentation explaining purpose, dependencies, and usage

```diff
 #!/usr/bin/env bash
+# Script: persist.sh
+# Purpose: Integration test for EVM event persistence and cipher node functionality
+# Dependencies:
+# - fns.sh: Common functions
+# - yarn: Package manager
+# - curl: HTTP client
+
-set -eu  # Exit immediately if a command exits with a non-zero status
+set -euo pipefail  # Strict error handling including pipeline failures
```

packages/ciphernode/evm/src/ciphernode_registry_sol.rs (2)
Line range hint 74-89: Enhance error logging and fix typo.

Consider these improvements:
- Include the actual parsing error in the error logs for better debugging
- Fix the typo in the trace message ("buut" → "but")
```diff
-error!("Error parsing event CiphernodeAdded after topic was matched!");
+error!("Failed to parse CiphernodeAdded event: {:?}", err);

-"Unknown event was received by Enclave.sol parser buut was ignored"
+"Unknown event was received by Enclave.sol parser but was ignored"
```
104-121
: Add logging for the start_block parameter.

Consider logging the start_block value to track from which block the historical events are being processed.

```diff
-info!(address=%contract_address, "EnclaveSolReader is listening to address");
+info!(
+    address=%contract_address,
+    start_block=?start_block,
+    "EnclaveSolReader is listening to address from block {:?}",
+    start_block
+);
```
100-164
: Consider adding error case tests for historical events.

While the happy path is well tested, consider adding test cases for:
- Network interruptions during historical event fetching
- Invalid historical events
- Duplicate events in historical data
166-274
: Enhance state verification in resumption test.

The test effectively verifies event processing after shutdown. Consider enhancing it by:
- Explicitly verifying the repository state after shutdown
- Adding assertions for the last processed block number
- Testing resumption with different repository states
Example assertion to add:
```rust
// After line 225 (shutdown)
let state = repository.load().await?;
assert_eq!(state.last_processed_block, expected_block_number);
```
Test coverage needs improvement for error handling and edge cases
Based on the implementation analysis, while the current tests cover the main functionality well (historical events, resumption after shutdown), there are several untested error scenarios and edge cases:
- Error handling for provider initialization failures
- Error handling for invalid contract addresses
- Edge case when provider is already used (as seen in the `started()` method)
- Error handling for event processing failures (as seen in error handlers using `bus.err`)
- Edge case for duplicate event handling (in the `handle()` method for `EnclaveEvmEvent`)

The test file should be expanded to include these scenarios for comprehensive coverage.
🔗 Analysis chain
Line range hint 1-274: Verify test coverage metrics for new functionality.

The test file provides good coverage of the main functionality, particularly for historical events and resumption. However, let's verify the overall test coverage:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```sh
#!/bin/bash
# Check test coverage for EvmEventReader
rg "fn (attach|load|process_events)" packages/ciphernode/evm/src/ -A 5
rg "#\[test\]" packages/ciphernode/evm/tests/ -B 1 -A 1
```

Length of output: 5046
Script:
```sh
#!/bin/bash
# Let's check for all test functions in the evm module
rg "fn.*\(\).*->.*Result" packages/ciphernode/evm/tests/ -B 1
# Check for the implementation of process_events which wasn't found in previous search
ast-grep --pattern 'async fn process_events($$$) { $$$ }'
```

Length of output: 586
Script:
```sh
#!/bin/bash
# Let's check the implementation of EvmEventReader to understand all methods that need testing
rg "impl.*EvmEventReader" packages/ciphernode/evm/src/ -A 50
# Also check for any error handling or edge cases in the implementation
rg "Err\(|bail!|anyhow!" packages/ciphernode/evm/src/event_reader.rs
```

Length of output: 15087
packages/ciphernode/evm/src/event_reader.rs (1)
109-118
: Enhance error context in state loading.

While the error handling is present, adding context would make debugging easier.
Consider enhancing error context:
```diff
 pub async fn load(params: EvmEventReaderParams<P, T>) -> Result<Self> {
     let id = get_tag();
     let span = info_span!("evm_event_reader", %id);
     let _guard = span.enter();
-    Ok(if let Some(snapshot) = params.repository.read().await? {
+    Ok(if let Some(snapshot) = params.repository.read().await.context("Failed to read snapshot")? {
         info!("Loading from snapshot");
-        Self::from_snapshot(params, snapshot).await?
+        Self::from_snapshot(params, snapshot).await.context("Failed to restore from snapshot")?
     } else {
         info!("Loading from params");
         Self::new(params)
     })
```

packages/ciphernode/data/src/sled_store.rs (2)
60-72
: Clarify return value when `db` is dropped in the `Get` handler.

When `db` is `None`, the `Get` handler logs an error and returns `None`. This could be confused with a missing key. Consider returning an error or a distinct result to differentiate between a dropped database and a missing key.

Apply this diff to return a `Result` indicating the error:

```diff
-type Result = Option<Vec<u8>>;
+type Result = Result<Option<Vec<u8>>, EnclaveError>;

 fn handle(&mut self, event: Get, _: &mut Self::Context) -> Self::Result {
     if let Some(ref mut db) = &mut self.db {
         return db.get(event).map_err(|err| {
             self.bus.err(EnclaveErrorType::Data, err);
             EnclaveError::DataError
         });
     } else {
         error!("Attempt to get data from dropped db");
-        None
+        Err(EnclaveError::DatabaseDropped)
     }
 }
```

This change provides clearer feedback to the caller about the nature of the failure.
75-81
: Implement graceful shutdown to process pending messages.

Stopping the actor immediately upon receiving a shutdown event might leave pending messages unprocessed. Consider implementing a graceful shutdown by draining the mailbox before stopping to ensure all messages are handled.

You can modify the shutdown handling as follows:

```diff
 fn handle(&mut self, msg: EnclaveEvent, ctx: &mut Self::Context) -> Self::Result {
     if let EnclaveEvent::Shutdown { .. } = msg {
+        ctx.run_later(Duration::from_secs(1), |actor, ctx| {
+            ctx.stop();
+        });
         let _db = self.db.take(); // db will be dropped
-        ctx.stop()
     }
 }
```

This allows the actor to process any remaining messages before shutting down.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- `packages/ciphernode/Cargo.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (28)

- packages/ciphernode/Cargo.toml (1 hunks)
- packages/ciphernode/core/Cargo.toml (1 hunks)
- packages/ciphernode/core/src/lib.rs (1 hunks)
- packages/ciphernode/core/src/tag.rs (1 hunks)
- packages/ciphernode/data/src/data_store.rs (1 hunks)
- packages/ciphernode/data/src/repository.rs (1 hunks)
- packages/ciphernode/data/src/sled_store.rs (4 hunks)
- packages/ciphernode/enclave/src/commands/aggregator/mod.rs (1 hunks)
- packages/ciphernode/enclave/src/commands/aggregator/start.rs (1 hunks)
- packages/ciphernode/enclave/src/commands/password/mod.rs (1 hunks)
- packages/ciphernode/enclave/src/commands/start.rs (1 hunks)
- packages/ciphernode/enclave/src/commands/wallet/mod.rs (1 hunks)
- packages/ciphernode/enclave/src/main.rs (4 hunks)
- packages/ciphernode/enclave_node/src/aggregator.rs (4 hunks)
- packages/ciphernode/enclave_node/src/ciphernode.rs (4 hunks)
- packages/ciphernode/enclave_node/src/datastore.rs (1 hunks)
- packages/ciphernode/evm/src/ciphernode_registry_sol.rs (2 hunks)
- packages/ciphernode/evm/src/enclave_sol.rs (2 hunks)
- packages/ciphernode/evm/src/enclave_sol_reader.rs (2 hunks)
- packages/ciphernode/evm/src/event_reader.rs (3 hunks)
- packages/ciphernode/evm/tests/evm_reader.rs (3 hunks)
- packages/ciphernode/sortition/src/sortition.rs (7 hunks)
- packages/ciphernode/tests/tests/test_aggregation_and_decryption.rs (1 hunks)
- tests/basic_integration/base.sh (1 hunks)
- tests/basic_integration/fns.sh (1 hunks)
- tests/basic_integration/lib/clean_folders.sh (1 hunks)
- tests/basic_integration/persist.sh (1 hunks)
- tests/basic_integration/test.sh (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- packages/ciphernode/core/src/lib.rs
- packages/ciphernode/data/src/data_store.rs
🚧 Files skipped from review as they are similar to previous changes (8)
- packages/ciphernode/enclave/src/commands/aggregator/mod.rs
- packages/ciphernode/enclave/src/commands/aggregator/start.rs
- packages/ciphernode/enclave/src/commands/password/mod.rs
- packages/ciphernode/enclave/src/commands/start.rs
- packages/ciphernode/enclave/src/commands/wallet/mod.rs
- packages/ciphernode/enclave_node/src/datastore.rs
- packages/ciphernode/evm/src/enclave_sol.rs
- packages/ciphernode/sortition/src/sortition.rs
🧰 Additional context used
📓 Learnings (3)
packages/ciphernode/evm/src/ciphernode_registry_sol.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/evm/src/enclave_sol_reader.rs:87-89
Timestamp: 2024-10-22T03:39:29.448Z
Learning: In the `ciphernode` project, specifically in `packages/ciphernode/evm/src/enclave_sol_reader.rs`, the method `EnclaveSolReader::attach` should be retained even if it directly calls `EvmEventReader::attach` without additional processing. Avoid suggesting its removal in future reviews.
packages/ciphernode/evm/src/enclave_sol_reader.rs (1)
Learnt from: ryardley
PR: gnosisguild/enclave#145
File: packages/ciphernode/evm/src/enclave_sol_reader.rs:87-89
Timestamp: 2024-10-22T03:39:29.448Z
Learning: In the `ciphernode` project, specifically in `packages/ciphernode/evm/src/enclave_sol_reader.rs`, the method `EnclaveSolReader::attach` should be retained even if it directly calls `EvmEventReader::attach` without additional processing. Avoid suggesting its removal in future reviews.
tests/basic_integration/test.sh (1)
Learnt from: ryardley
PR: gnosisguild/enclave#107
File: tests/basic_integration/test.sh:103-114
Timestamp: 2024-10-08T19:45:18.209Z
Learning: In `tests/basic_integration/test.sh`, the user prefers not to refactor the ciphernode addition section to reduce duplication.
🪛 Shellcheck
tests/basic_integration/fns.sh
[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.
(SC2148)
[warning] 5-5: PLAINTEXT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 16-16: RPC_URL appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 18-18: PRIVATE_KEY appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 19-19: CIPHERNODE_SECRET appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 23-23: ENCLAVE_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 24-24: REGISTRY_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 25-25: REGISTRY_FILTER_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 26-26: INPUT_VALIDATOR_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 29-29: CIPHERNODE_ADDRESS_1 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 30-30: CIPHERNODE_ADDRESS_2 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 31-31: CIPHERNODE_ADDRESS_3 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 32-32: CIPHERNODE_ADDRESS_4 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 38-38: Quote this to prevent word splitting.
(SC2046)
[warning] 60-60: Declare and assign separately to avoid masking return values.
(SC2155)
[error] 73-73: Argument mixes string and array. Use * or separate argument.
(SC2145)
[warning] 120-120: Declare and assign separately to avoid masking return values.
(SC2155)
🪛 Gitleaks
tests/basic_integration/fns.sh
18-18: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🔇 Additional comments (35)
tests/basic_integration/test.sh (2)
5-6
: LGTM! Robust directory path resolution.
The use of BASH_SOURCE ensures reliable script location resolution regardless of where the script is called from.
Line range hint 1-19
: Verify the integration with removed functions.
The AI summary indicates significant refactoring with removal of several functions. Let's verify that all functionality has been properly migrated to the new scripts.
✅ Verification successful
Let me try another verification approach since the previous script had incorrect paths.
Let me check one final thing to verify where these functions are defined.
All functions are properly migrated and available in fns.sh
The verification shows that all the functions mentioned in the review comment (`cleanup`, `heading`, `waiton`, `waiton-files`, `set_password`, `launch_ciphernode`, `set_private_key`, `launch_aggregator`) are properly defined in `tests/basic_integration/fns.sh` and are used by both `base.sh` and `persist.sh` through the `source "$THIS_DIR/fns.sh"` statement. The functionality has been correctly modularized rather than removed, improving code organization and reusability.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that removed functions are present in new scripts
# Test: Search for previous function names in new scripts
echo "Checking for previous functions in persist.sh and base.sh..."
for func in "cleanup" "heading" "waiton" "waiton-files" "set_password" "launch_ciphernode" "set_private_key" "launch_aggregator"; do
echo "Searching for $func..."
rg -l "$func" "$THIS_DIR/persist.sh" "$THIS_DIR/base.sh" || echo "Function $func not found"
done
Length of output: 3443
Script:
#!/bin/bash
# First locate the actual script files
echo "Locating test scripts..."
fd -t f "persist.sh|base.sh" tests/
echo -e "\nChecking content of found scripts..."
for script in $(fd -t f "persist.sh|base.sh" tests/); do
echo -e "\nContent of $script:"
cat "$script"
done
echo -e "\nSearching for function references..."
for func in "cleanup" "heading" "waiton" "waiton-files" "set_password" "launch_ciphernode" "set_private_key" "launch_aggregator"; do
echo -e "\nSearching for $func..."
rg "$func" tests/ || echo "Function $func not found"
done
Length of output: 15216
Script:
#!/bin/bash
echo "Checking content of fns.sh:"
cat tests/basic_integration/fns.sh
Length of output: 3600
tests/basic_integration/lib/clean_folders.sh (1)
7-7
: LGTM! Consider verifying process termination.
The addition of PID file cleanup is necessary for proper test environment cleanup. However, it would be safer to verify that the associated processes are terminated before removing their PID files.
Let's check if there are any potential process termination patterns in the codebase:
packages/ciphernode/core/src/tag.rs (1)
1-22
: Verify thread-safety implications for historical event processing.
Given that this tag mechanism will be used in the context of processing both historical and live EVM events, we should verify:
- How tag changes are coordinated between historical and live event processing
- Whether there are any race conditions possible during the transition from historical to live events
- If the tag state needs to be persisted or restored during node restarts
Let's check how this tag is used in event processing:
✅ Verification successful
Based on the search results, I can now provide a final response:
Tag mechanism is safely implemented for its intended use case
The code search reveals that the tag is primarily used for logging/tracing context and identification purposes, not for critical event processing logic:
- In `sortition.rs` and `event_reader.rs`: used only for creating logging spans
- In `main.rs`: set once during startup from CLI arguments
- No concurrent modifications found during event processing
- No persistence requirements identified as it's used for runtime identification only
The implementation using `RwLock` is appropriate for this use case as:
- Reads are non-blocking between threads
- Write operations only occur during initial setup
- The tag value doesn't influence event processing logic or state transitions
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for tag usage in event processing context
echo "Searching for tag usage in event processing..."
rg -A 5 "get_tag|set_tag"
# Search for concurrent processing patterns
echo "Checking for concurrent processing patterns..."
ast-grep --pattern 'async fn $$$process$$$($$$) {
$$$
get_tag()
$$$
}'
Length of output: 5923
packages/ciphernode/core/Cargo.toml (1)
22-22
: LGTM: Addition of lazy_static aligns with event processing requirements.
The addition of the `lazy_static` dependency is appropriate for managing the state of processed EVM events and block numbers, which is a key requirement for implementing historical event processing as outlined in the PR objectives.
packages/ciphernode/data/src/repository.rs (1)
Line range hint 1-59
: Verify error handling for network-related storage operations.
Given this repository's role in storing EVM event processing state, we should ensure robust error handling for network-related storage operations.
✅ Verification successful
Based on the search results, I can see comprehensive error handling patterns throughout the codebase, particularly in the repository and storage operations:
- The Repository implementation uses anyhow::Result for error propagation
- Network-related storage operations have proper error handling:
- DataStore operations propagate errors via Result
- Repository read/write operations properly handle and propagate errors
- Error events are published to the event bus for monitoring
- The codebase has a robust error handling architecture:
- Dedicated EnclaveErrorType enum for different error categories
- BusError trait for standardized error reporting
- Error aggregation and monitoring via GetErrors message
Repository implementation has robust error handling for storage operations.
The error handling implementation is comprehensive and follows Rust best practices for propagating and handling errors in network storage operations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for error handling patterns in related EVM event processing code
rg -A 5 "Result|Error|anyhow|thiserror" --type rust | grep -v "^binary file"
# Look for retry mechanisms in storage operations
ast-grep --pattern 'try { $$$ } catch { $$$ }'
Length of output: 168988
packages/ciphernode/Cargo.toml (1)
47-47
: Verify the usage pattern of lazy_static.
The addition of `lazy_static` is appropriate for managing global state. However, since global state can impact testability and thread safety, please ensure:

- Its usage is documented in the `tag` module
- Thread safety is properly handled with the `RwLock`
- Consider whether the global state could be passed as a parameter instead
Let's verify the usage pattern:
✅ Verification successful
The lazy_static usage is well-implemented and thread-safe
The implementation shows proper usage of lazy_static with RwLock:
- The global TAG is correctly protected with RwLock for thread-safe access
- get_tag() safely acquires read locks and handles errors
- set_tag() properly manages write locks with error handling
- Usage is limited to logging/tracing contexts for ID tracking across components
The pattern is appropriate for its use case of maintaining a consistent identifier across different parts of the system for logging purposes.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how lazy_static is being used in the codebase
# Test 1: Find all lazy_static usage
echo "Checking lazy_static usage patterns:"
rg "lazy_static!" -A 5
# Test 2: Verify thread safety with RwLock
echo "Verifying RwLock usage with lazy_static:"
rg "RwLock.*lazy_static!" -A 5
Length of output: 633
Script:
#!/bin/bash
# Let's analyze the tag module more thoroughly
# Check the complete tag.rs implementation
echo "Complete tag.rs implementation:"
cat packages/ciphernode/core/src/tag.rs
# Check where this tag is being used
echo -e "\nUsage of tag module:"
rg "use.*tag"
# Check direct function calls
echo -e "\nDirect usage of tag functions:"
rg "get_tag|set_tag" -A 2
Length of output: 3902
packages/ciphernode/enclave_node/src/ciphernode.rs (3)
27-27
: LGTM: Return type enhancement for P2P integration
The addition of `String` to the return tuple properly exposes the peer_id, which is essential for P2P networking functionality.
37-38
: LGTM: Repository and sortition setup
The repository initialization and sortition attachment are properly implemented with appropriate error handling.
72-72
: LGTM: P2P integration and return value handling
The P2P setup and return value changes are well-implemented:
- Proper error handling with descriptive message
- Correct extraction and return of peer_id
- Maintains backward compatibility with existing bus and join_handle returns
Also applies to: 77-77
packages/ciphernode/enclave_node/src/aggregator.rs (3)
32-32
: LGTM: Return type update for peer_id.
The function signature update to include peer_id in the return tuple is well-structured and type-safe.
85-85
: LGTM: Proper peer_id handling.
The P2P spawn and return value changes correctly propagate the peer_id through the setup process.
Also applies to: 97-97
39-39
: Verify other async attachments in the codebase.
The change to await Sortition attachment is correct. Let's verify that all other attachment calls in the codebase are properly awaited.
✅ Verification successful
The previous ast-grep pattern might have failed due to syntax. Let's try a different approach to find attachment patterns and their usage context in the codebase.
All attachment calls are properly handled
Based on the search results, I can verify that all attachment patterns in the codebase follow the correct async/await pattern where needed:
- Async attachments are properly awaited:
  - `Sortition::attach()` is awaited in all usages
  - `EnclaveSol::attach()` is awaited
  - `RegistryFilterSol::attach()` is awaited
  - `EnclaveSolReader::attach()` is awaited
  - `CiphernodeRegistrySol::attach()` is awaited
- Synchronous attachments are correctly handled without await:
  - `CiphernodeSelector::attach()`
  - `SimpleLogger::attach()`
  - `PublicKeyWriter::attach()`
  - `PlaintextWriter::attach()`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find any non-awaited attach calls that might need similar updates
ast-grep --pattern 'attach($$$).await'
Length of output: 40
Script:
#!/bin/bash
# Let's try a broader search for attach patterns first
rg "attach\(" -A 2
# Also search for Sortition usage patterns
rg "Sortition::" -A 2
Length of output: 11122
packages/ciphernode/enclave/src/main.rs (4)
1-2
: LGTM! Imports are appropriate for the new functionality.
The new imports support both the tag feature and enhanced logging capabilities.
Also applies to: 7-9
Line range hint 32-44
: LGTM! CLI struct changes are well-structured.
The new `tag` field is properly implemented with:
- Global scope for all subcommands
- Both short and long flag options
- Optional nature with proper type
Line range hint 84-94
: LGTM! Excellent tracing implementation.
The span creation and instrumentation is well-implemented with:
- Proper tag propagation
- Structured logging with span context
- Robust error handling
Line range hint 1-94
: Verify relationship with PR objectives.
The changes in this file focus on CLI and logging enhancements, while the PR objectives primarily concern historical EVM events processing. Could you clarify how these changes support the historical events functionality?
✅ Verification successful
Based on the gathered evidence, I can now generate the final response:
CLI changes support EVM event processing through tag-based identification
The changes in this file are directly related to the PR objectives. The tag feature (`get_tag`/`set_tag`) is actively used in the EVM event processing system, as shown in `packages/ciphernode/evm/src/event_reader.rs`. It provides instance identification for EVM event readers and logging, which is essential for tracking historical events across different nodes. The CLI enhancements enable proper configuration of these tagged instances through the aggregator commands.

- The tag is used in `event_reader.rs` for identifying EVM event processing instances: `let id = get_tag()`
- The logging enhancements are specifically structured to support multi-instance event tracking with tags
- The aggregator commands are part of the event processing infrastructure, managing the nodes that handle these events
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for EVM event-related code that might use the new tag feature
rg -l "EvmEventReader|historical.*event"
Length of output: 360
Script:
#!/bin/bash
# Check if the tag feature is used in EVM event processing
rg -A 5 "get_tag|set_tag" packages/ciphernode/evm/
# Check if the CLI commands relate to EVM functionality
ast-grep --pattern 'enum Commands {
$$$
}'
# Look for connections between CLI and EVM events in the aggregator command
rg -A 10 "Commands::Aggregator" packages/ciphernode/enclave/
Length of output: 2924
Script:
#!/bin/bash
# Check the aggregator command implementation for EVM event processing
rg -A 10 "fn execute" packages/ciphernode/enclave/src/commands/aggregator.rs
# Look for the Commands enum definition to see all available commands
rg -A 10 "pub enum Commands" packages/ciphernode/enclave/src/commands/
Length of output: 1010
Script:
#!/bin/bash
# Check the AggregatorCommands enum definition
rg -A 10 "pub enum AggregatorCommands" packages/ciphernode/enclave/src/commands/
# Look for the actual aggregator command implementation
fd -t f aggregator.rs
# Check if the tag feature is used in historical event processing
rg -B 5 -A 10 "historical.*event.*process"
Length of output: 1394
packages/ciphernode/evm/src/enclave_sol_reader.rs (4)
1-1
: LGTM: Import changes align with new functionality.
The added imports support the new repository parameter and enhanced logging capabilities.
Also applies to: 9-9, 11-11
89-93
: LGTM: Method signature changes support historical event processing.
The new parameters enable:
- State persistence through the repository
- Configurable starting point for historical event processing
94-105
: LGTM: Implementation properly handles historical event processing setup.
The changes correctly:
- Pass through all parameters to `EvmEventReader::attach`
- Add helpful operational logging
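To illustrate how persisted state plus a configurable start block can determine where historical fetching resumes, here is a sketch with illustrative names (`ReaderState` and `resume_point` are not identifiers from the codebase; the real code uses `Repository<EvmEventReaderState>`):

```rust
// Illustrative stand-in for the persisted reader state.
#[derive(Default)]
struct ReaderState {
    last_block: Option<u64>,
}

// Resume after the last processed block if state was persisted, otherwise
// fall back to the configured deployment block, otherwise genesis.
fn resume_point(state: &ReaderState, deploy_block: Option<u64>) -> u64 {
    state.last_block.map(|b| b + 1).or(deploy_block).unwrap_or(0)
}

fn main() {
    let fresh = ReaderState::default();
    assert_eq!(resume_point(&fresh, None), 0);
    assert_eq!(resume_point(&fresh, Some(1_000)), 1_000);

    let resumed = ReaderState { last_block: Some(1_234) };
    assert_eq!(resume_point(&resumed, Some(1_000)), 1_235);
}
```

Passing the deployment block avoids scanning the chain from genesis on a first run, while persisted state takes precedence after a restart.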
89-105
: Verify integration with historical event processing.
Please ensure that:
- Components consuming events can handle both historical and live events
- The repository is properly initialized before being passed to this method
Let's verify the integration:
✅ Verification successful
Based on the shell script results, I can now generate the final response:
Historical event processing integration is properly implemented
The codebase shows proper handling of both historical and live events:
- EventBus implements a `GetHistory` handler to retrieve historical events
- EvmEventReader properly handles both regular and EVM-specific events
- Repository initialization is consistently managed through `Repository::new()` with proper scoping across the codebase
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if other components are prepared for historical event processing
# Test 1: Check for event handling patterns in consumers
echo "Checking event consumers..."
rg -U "impl.*Handler.*for.*Event" -A 10
# Test 2: Check repository initialization patterns
echo "Checking repository initialization..."
ast-grep --pattern 'Repository::new($_)'
Length of output: 7499
tests/basic_integration/persist.sh (1)
19-26
:
Validate environment variables and enhance security.
The script needs to validate required environment variables and handle secrets more securely.
```diff
+# Validate required environment variables
+required_vars=("CIPHERNODE_SECRET" "PRIVATE_KEY")
+for var in "${required_vars[@]}"; do
+  if [ -z "${!var}" ]; then
+    echo "Error: Required environment variable $var is not set"
+    exit 1
+  fi
+done
+
 # Set the password for all ciphernodes
 set_password cn1 "$CIPHERNODE_SECRET"
```
packages/ciphernode/evm/src/ciphernode_registry_sol.rs (3)
2-6
: LGTM: Import changes align with new functionality.
The new imports support historical event processing and actor-based state management.
Also applies to: 13-15
107-108
: Verify repository state handling implementation.
The code assumes the repository will handle the last processed block state. Let's verify the repository implementation to ensure it correctly maintains this state.
#!/bin/bash
# Search for EvmEventReaderState implementation
ast-grep --pattern 'struct EvmEventReaderState {
$$$
}'
# Search for repository state updates
rg -A 5 'impl.*Repository.*EvmEventReaderState'
Also applies to: 133-134
126-144
: Clarify the Writer implementation requirement.
There's a TODO comment about the Writer implementation. Could you clarify:
- Is this part of the historical event processing requirements?
- Should this be tracked as a separate issue?
packages/ciphernode/evm/tests/evm_reader.rs (2)
23-40
: LGTM: Well-structured event extractor implementation.
The helper function properly handles event signature matching, decoding, and error cases. The entropy field prevents event duplication in tests, which is a good practice.
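A toy version of signature-based extraction, to show the match-or-ignore behavior the helper implements. The real extractor decodes Alloy `LogData` against generated contract bindings and matches `topic0` against `SIGNATURE_HASH` constants; the types and signature bytes below are made up:

```rust
// Fake 4-byte signature; real EVM topics are 32-byte keccak hashes.
const TEST_EVENT_SIG: [u8; 4] = [0xde, 0xad, 0xbe, 0xef];

#[derive(Debug, PartialEq)]
enum EnclaveEvent {
    TestEvent { value: u64 },
}

fn extractor(data: u64, topic0: Option<&[u8; 4]>) -> Option<EnclaveEvent> {
    match topic0 {
        // Known signature: decode into a domain event.
        Some(sig) if *sig == TEST_EVENT_SIG => Some(EnclaveEvent::TestEvent { value: data }),
        // Unknown or missing signature: ignore the log.
        _ => None,
    }
}

fn main() {
    assert_eq!(
        extractor(7, Some(&TEST_EVENT_SIG)),
        Some(EnclaveEvent::TestEvent { value: 7 })
    );
    assert_eq!(extractor(7, Some(&[0u8; 4])), None);
    assert_eq!(extractor(7, None), None);
}
```

Returning `None` for unrecognized topics is what lets the reader share one code path for all contract logs without failing on unrelated events.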
Line range hint 43-98
: Add test coverage for deployment block configuration.
While the basic event emission test is good, we're missing test coverage for the deployment block configuration feature mentioned in the PR objectives. This test should verify that:
- Events before the deployment block are ignored
- Events after the deployment block are processed
#!/bin/bash
# Check if there are any tests using start_block parameter
rg "start_block.*Some" packages/ciphernode/evm/tests/
packages/ciphernode/evm/src/event_reader.rs (2)
94-107
: LGTM: Well-structured constructor with proper initialization.
The constructor properly initializes all fields and sets up the shutdown mechanism using a oneshot channel.
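The oneshot-based shutdown can be sketched with a std channel as a stand-in for the tokio oneshot (the `Reader` type here is illustrative, not the actual `EvmEventReader`):

```rust
use std::sync::mpsc;

// The reader holds the sender half; the polling loop holds the receiver
// and exits when a shutdown signal arrives.
struct Reader {
    shutdown_tx: Option<mpsc::Sender<()>>,
}

impl Reader {
    fn new() -> (Self, mpsc::Receiver<()>) {
        let (tx, rx) = mpsc::channel();
        (Reader { shutdown_tx: Some(tx) }, rx)
    }

    // Taking the sender out of the Option ensures shutdown fires only once.
    fn shutdown(&mut self) -> bool {
        match self.shutdown_tx.take() {
            Some(tx) => tx.send(()).is_ok(),
            None => false,
        }
    }
}

fn main() {
    let (mut reader, rx) = Reader::new();
    assert!(reader.shutdown());
    assert!(rx.try_recv().is_ok()); // the polling loop would exit here
    assert!(!reader.shutdown()); // second shutdown is a no-op
}
```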
313-355
: LGTM: Clean trait implementations for state management.
The Snapshot, Checkpoint, and FromSnapshotWithParams trait implementations are well-structured and properly handle state serialization and deserialization.
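The snapshot/restore pattern these traits encode can be sketched as follows; the trait and struct names are simplified stand-ins for the `data` crate definitions, which also involve serde and runtime parameters:

```rust
// Simplified stand-in for the Snapshot / FromSnapshotWithParams pair.
trait Snapshot {
    type Snapshot: Clone;
    fn snapshot(&self) -> Self::Snapshot;
}

#[derive(Clone, Debug, PartialEq)]
struct ReaderSnapshot {
    last_block: Option<u64>,
}

struct Reader {
    last_block: Option<u64>,
}

impl Snapshot for Reader {
    type Snapshot = ReaderSnapshot;
    // A serialization-friendly view of the actor's state.
    fn snapshot(&self) -> ReaderSnapshot {
        ReaderSnapshot { last_block: self.last_block }
    }
}

impl Reader {
    // Rebuild a fresh instance from persisted state.
    fn from_snapshot(snap: ReaderSnapshot) -> Self {
        Reader { last_block: snap.last_block }
    }
}

fn main() {
    let snap = Reader { last_block: Some(42) }.snapshot();
    let restored = Reader::from_snapshot(snap.clone());
    assert_eq!(restored.last_block, Some(42));
    assert_eq!(snap, ReaderSnapshot { last_block: Some(42) });
}
```

Separating the live actor from its snapshot type is what lets a checkpoint be written on every processed block without serializing runtime-only fields such as providers or channels.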
packages/ciphernode/tests/tests/test_aggregation_and_decryption.rs (2)
Line range hint 1-624
: Verify test coverage for historical event processing
While the test suite thoroughly covers state retention and event processing, consider adding specific test cases for:
- Processing historical EVM events before live events
- Resuming event processing from the last processed block
- Handling deployment block configuration
These scenarios align with the PR objectives of implementing historical event processing.
Let's check for existing historical event test coverage:
#!/bin/bash
# Description: Look for test files covering historical event processing
rg -l "historical.*event" --type rust
Would you like help creating additional test cases for historical event processing scenarios?
55-55
: LGTM! Improved error handling for Sortition::attach
The addition of `.await?` properly handles the asynchronous nature of `Sortition::attach` and ensures errors are propagated correctly in the test setup. This change aligns with the broader improvements to event processing in the codebase.
Let's verify the consistency of this change across other test files:
✅ Verification successful
Let me try another search pattern to find Sortition::attach usage:
Verified: Sortition::attach usage is consistent across the codebase
The search results show that `Sortition::attach` is consistently used with `.await?` in all three occurrences across the codebase:
packages/ciphernode/tests/tests/test_aggregation_and_decryption.rs
packages/ciphernode/enclave_node/src/ciphernode.rs
packages/ciphernode/enclave_node/src/aggregator.rs
The change in the test file aligns with the existing implementation in the main codebase, confirming proper error handling and async/await usage.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for other occurrences of Sortition::attach to ensure consistent async/await usage
rg -A 2 "Sortition::attach" --type rust
Length of output: 127
Script:
#!/bin/bash
# Search for Sortition::attach with more context and without type filtering to ensure we don't miss any occurrences
rg -A 3 "Sortition::attach"
Length of output: 1310
packages/ciphernode/data/src/sled_store.rs (4)
11-11
: Proper handling of the optional `db` field

Changing `db` to `Option<SledDb>` allows for safe shutdown by setting `db` to `None` when it's dropped. Ensure that all accesses to `db` handle the `None` case to prevent panics.
20-32
: Correctly return the actor address from the `new` method

Modifying the `new` method to return `Result<Addr<Self>>` and starting the actor within the method encapsulates the actor's creation and lifecycle management effectively.
30-31
: Subscribe to shutdown events after actor creation
Subscribing to the "Shutdown" event ensures that the `SledStore` actor can respond appropriately when a shutdown is initiated.
47-51
: Safely handle `Insert` when `db` may be `None`

The check `if let Some(ref mut db) = &mut self.db` ensures that inserts are only attempted when the database is available, preventing potential errors due to a dropped database.
Added a failing test for the bug that @hmzakhalid reported the other day, as well as a bunch of changes to enable better logging and debugging - best to hold off merge until it gets fixed
@hmzakhalid can you please review when you get a chance. Here is the test that kills the aggregator:
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (15)
tests/basic_integration/test.sh (1)
Line range hint 1-3: Consider enhancing error handling

While `set -eu` provides basic error handling, consider adding these improvements for more robust execution:

```diff
 #!/usr/bin/env bash
-set -eu  # Exit immediately if a command exits with a non-zero status
+set -euo pipefail  # Exit on error, undefined vars, and pipe failures
+trap 'echo "Error: Script failed on line $LINENO"' ERR
```

packages/ciphernode/core/src/tag.rs (1)
5-7
: Optimize performance and add documentation for `get_tag`.

The current implementation clones the string on every call, which might be unnecessary. Consider returning a reference when possible.

Consider this optimization:

```diff
-pub fn get_tag() -> String {
-    TAG.get().cloned().unwrap_or_else(|| String::from("_"))
+/// Returns the current tag used for EVM event tracking.
+///
+/// Returns "_" if no tag has been set. This default tag is used for
+/// contracts that don't require specific event tracking contexts.
+///
+/// # Returns
+///
+/// A reference to the tag string, or "_" if uninitialized.
+pub fn get_tag() -> &'static str {
+    TAG.get().map(String::as_str).unwrap_or("_")
 }
```

packages/ciphernode/enclave/Cargo.toml (1)
22-22
: Consider using workspace-level version management

The direct version specification `"1.20.2"` differs from the project's convention of using workspace-level version management (e.g., `{ workspace = true }`). Consider moving this dependency to the workspace-level configuration for consistent dependency management.

packages/ciphernode/README.md (4)
83-86
: Add language specifiers to code blocks

Add shell language specifiers to the code blocks for proper syntax highlighting.

~~~diff
-```
+```shell
 RUST_LOG=info enclave start
~~~

~~~diff
-```
+```shell
 RUST_LOG="[sortition{id=cn1}]" enclave start --tag cn1
~~~

Also applies to: 89-91
🧰 Tools
🪛 Markdownlint
83-83: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
80-81
: Fix grammatical issues in the documentation

The text needs some minor grammatical improvements for better readability.

```diff
-You can debug using the `RUST_LOG` environment var to alter what output is produced by the node
+You can debug using the `RUST_LOG` environment variable to control the output produced by the node.
-if you supply a tag as an argument you can filter for that tag
+If you supply a tag as an argument, you can filter for that tag.
-This helps filter noise during tests where you might have multiple instances running and you need to see the output of a specific one.
+This helps filter noise during tests where you might have multiple instances running, and you need to see the output of a specific one.
```

Also applies to: 87-88, 93-94
98-108
: Fix typo and improve code example

There's a typo in the struct name, and the example could be more complete.

```diff
-impl Sorition {
+impl Sortition {
     // ...
     #[instrument(name="sortition", skip_all, fields(id = get_tag()))]
     pub async fn attach(
         bus: &Addr<EventBus>,
         store: Repository<SortitionModule>,
     ) -> Result<Addr<Sortition>> {
-        // ...
+        // Example implementation
+        let sortition = Sortition::new(bus.clone(), store);
+        Actor::start(sortition)
     }
 }
```
78-109
: Enhance debugging documentation

The debugging section could benefit from additional information to make it more comprehensive.
Consider adding:
- Available log levels (error, warn, info, debug, trace)
- Examples of combining multiple filters
- Sample output showing what developers should expect
- Common debugging scenarios and recommended log filters
- Information about how the `instrument` macro integrates with the logging system
🧰 Tools
🪛 LanguageTool
[uncategorized] ~87-~87: A comma might be missing here.
Context: ...ve startif you supply a tag as an argument you can filter for that tag
RUST_L...(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~93-~93: Use a comma before ‘and’ if it connects two independent clauses (unless they are closely connected and short).
Context: ...ou might have multiple instances running and you need to see the output of a specifi...(COMMA_COMPOUND_SENTENCE)
[style] ~94-~94: Consider a shorter alternative to avoid wordiness.
Context: ...d to see the output of a specific one. In order to add tracing to a method or function it ...(IN_ORDER_TO_PREMIUM)
[uncategorized] ~95-~95: A comma might be missing here.
Context: ... In order to add tracing to a method or function it is recommended to use the `instrumen...(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
🪛 Markdownlint
83-83: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
89-89: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
packages/ciphernode/enclave/src/main.rs (3)
41-42
: Add documentation for the tag parameter.

Consider adding a doc comment explaining the purpose and usage of the tag parameter, which is used for tracing and logging identification.

```diff
+    /// Tag for identifying this instance in logs and traces
     #[arg(short, long, global = true)]
     tag: Option<String>,
```
61-67
: Optimize tag retrieval logic.

The current implementation can be simplified by replacing the nested conditional with `unwrap_or_else`.

```diff
-    pub fn get_tag(&self) -> String {
-        if let Some(tag) = self.tag.clone() {
-            tag
-        } else {
-            "default".to_string()
-        }
-    }
+    pub fn get_tag(&self) -> String {
+        self.tag.clone().unwrap_or_else(|| "default".to_string())
+    }
```
72-80
: Clean up commented tracing configurations.

Consider removing or documenting the commented filter configurations. If they represent different deployment scenarios, consider moving them to configuration files or documentation.
packages/ciphernode/sortition/src/sortition.rs (3)
8-9
: Remove unused import `BusError`

The `BusError` type is imported but never used in this file.

```diff
-    get_tag, BusError, CiphernodeAdded, CiphernodeRemoved, EnclaveErrorType, EnclaveEvent,
+    get_tag, CiphernodeAdded, CiphernodeRemoved, EnclaveErrorType, EnclaveEvent,
```
110-123
: Consider adding specific error types for attach()

The implementation looks good, but consider creating specific error types instead of using the generic Result type. This would make error handling more explicit and help callers handle different failure scenarios appropriately.

```diff
-    ) -> Result<Addr<Sortition>> {
+    ) -> Result<Addr<Sortition>, SortitionError> {
```

Where `SortitionError` could be:

```rust
#[derive(Debug, thiserror::Error)]
pub enum SortitionError {
    #[error("Failed to load sortition state: {0}")]
    LoadError(#[from] anyhow::Error),
    #[error("Failed to start actor: {0}")]
    ActorError(#[from] actix::MailboxError),
}
```
156-162
: Consider limiting node logging verbosity

While logging nodes is helpful for debugging, printing all nodes could be verbose for large node sets. Consider:
- Limiting the number of nodes logged
- Making verbose logging conditional on a debug flag
```diff
-    "Nodes:\n\n{:?}\n",
-    snapshot.nodes().into_iter().collect::<Vec<_>>()
+    "First 5 nodes: {:?}",
+    snapshot.nodes().into_iter().take(5).collect::<Vec<_>>()
```

packages/ciphernode/evm/src/event_reader.rs (2)
208-224
: Improve historical events processing error handling and logging.

Consider adding more context to error handling and logging for historical events:

```diff
 match provider.clone().get_logs(&historical_filter).await {
     Ok(historical_logs) => {
         info!("Fetched {} historical events", historical_logs.len());
+        let mut processed = 0;
         for log in historical_logs {
             let block_number = log.block_number;
             if let Some(event) = extractor(log.data(), log.topic0(), chain_id) {
                 trace!("Processing historical log");
                 processor.do_send(EnclaveEvmEvent::new(event, block_number));
+                processed += 1;
+            } else {
+                warn!(
+                    "Failed to extract historical event at block {}: invalid format",
+                    block_number.unwrap_or_default()
+                );
             }
         }
+        info!("Successfully processed {}/{} historical events", processed, historical_logs.len());
     }
     Err(e) => {
-        warn!("Failed to fetch historical events: {}", e);
+        error!(
+            "Failed to fetch historical events from block {}: {}",
+            start_block.unwrap_or(0), e
+        );
         bus.err(EnclaveErrorType::Evm, anyhow!(e));
         return;
     }
 }
```
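Related to resuming safely after a restart, a block watermark is one way to avoid re-dispatching events already covered by persisted state. This is a sketch with illustrative types, not the actual reader implementation:

```rust
// Illustrative watermark; the real reader persists its last processed block
// via the snapshot mechanism described elsewhere in this review.
struct Watermark {
    last_block: Option<u64>,
}

impl Watermark {
    // Returns true exactly once per advancing block number.
    fn should_process(&mut self, block: u64) -> bool {
        match self.last_block {
            Some(last) if block <= last => false, // already covered
            _ => {
                self.last_block = Some(block);
                true
            }
        }
    }
}

fn main() {
    let mut wm = Watermark { last_block: Some(10) };
    assert!(!wm.should_process(9)); // covered by persisted state
    assert!(!wm.should_process(10));
    assert!(wm.should_process(11)); // new block advances the watermark
    assert!(!wm.should_process(11)); // duplicate delivery is ignored
}
```

A guard like this makes historical catch-up idempotent: replaying an overlapping block range after a crash produces no duplicate downstream events.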
316-318
: Add error handling for snapshot creation.

The `snapshot` method should handle potential clone failures:

```diff
 fn snapshot(&self) -> Self::Snapshot {
-    self.state.clone()
+    self.state.clone().map_err(|e| {
+        error!("Failed to create snapshot: {}", e);
+        e
+    })
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- `packages/ciphernode/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (14)
- packages/ciphernode/README.md (1 hunks)
- packages/ciphernode/core/src/tag.rs (1 hunks)
- packages/ciphernode/data/src/data_store.rs (1 hunks)
- packages/ciphernode/enclave/Cargo.toml (1 hunks)
- packages/ciphernode/enclave/src/commands/aggregator/start.rs (1 hunks)
- packages/ciphernode/enclave/src/commands/start.rs (1 hunks)
- packages/ciphernode/enclave/src/main.rs (4 hunks)
- packages/ciphernode/enclave_node/src/ciphernode.rs (5 hunks)
- packages/ciphernode/evm/src/event_reader.rs (3 hunks)
- packages/ciphernode/sortition/src/sortition.rs (8 hunks)
- tests/basic_integration/base.sh (1 hunks)
- tests/basic_integration/fns.sh (1 hunks)
- tests/basic_integration/persist.sh (1 hunks)
- tests/basic_integration/test.sh (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- packages/ciphernode/data/src/data_store.rs
- packages/ciphernode/enclave/src/commands/aggregator/start.rs
- packages/ciphernode/enclave/src/commands/start.rs
- tests/basic_integration/base.sh
- tests/basic_integration/persist.sh
🧰 Additional context used
📓 Learnings (1)
tests/basic_integration/fns.sh (2)
Learnt from: ryardley
PR: gnosisguild/enclave#107
File: tests/basic_integration/test.sh:27-31
Timestamp: 2024-11-10T16:42:46.226Z
Learning: In the scripts, quoting the command substitution in the `kill` command within the `cleanup` function can cause the script to fail, so it's acceptable to leave it unquoted.
Learnt from: ryardley
PR: gnosisguild/enclave#107
File: tests/basic_integration/test.sh:27-31
Timestamp: 2024-11-10T16:42:46.226Z
Learning: In the scripts, quoting the command substitution in the `kill` command within the `cleanup` function can cause the script to fail, so it's acceptable to leave it unquoted.
🪛 LanguageTool
packages/ciphernode/README.md
[uncategorized] ~87-~87: A comma might be missing here.
Context: ...ve start if you supply a tag as an argument you can filter for that tag
RUST_L...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~93-~93: Use a comma before ‘and’ if it connects two independent clauses (unless they are closely connected and short).
Context: ...ou might have multiple instances running and you need to see the output of a specifi...
(COMMA_COMPOUND_SENTENCE)
[style] ~94-~94: Consider a shorter alternative to avoid wordiness.
Context: ...d to see the output of a specific one. In order to add tracing to a method or function it ...
(IN_ORDER_TO_PREMIUM)
[uncategorized] ~95-~95: A comma might be missing here.
Context: ... In order to add tracing to a method or function it is recommended to use the `instrumen...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
🪛 Markdownlint
packages/ciphernode/README.md
83-83: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
89-89: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🪛 Shellcheck
tests/basic_integration/fns.sh
[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.
(SC2148)
[warning] 5-5: PLAINTEXT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 14-14: RPC_URL appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 16-16: PRIVATE_KEY appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 17-17: CIPHERNODE_SECRET appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 21-21: ENCLAVE_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 22-22: REGISTRY_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 23-23: REGISTRY_FILTER_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 24-24: INPUT_VALIDATOR_CONTRACT appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 27-27: CIPHERNODE_ADDRESS_1 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 28-28: CIPHERNODE_ADDRESS_2 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 29-29: CIPHERNODE_ADDRESS_3 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 30-30: CIPHERNODE_ADDRESS_4 appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 36-36: Quote this to prevent word splitting.
(SC2046)
[warning] 58-58: Declare and assign separately to avoid masking return values.
(SC2155)
[error] 71-71: Argument mixes string and array. Use * or separate argument.
(SC2145)
[warning] 118-118: Declare and assign separately to avoid masking return values.
(SC2155)
🪛 Gitleaks
tests/basic_integration/fns.sh
16-16: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🔇 Additional comments (17)
tests/basic_integration/test.sh (2)
5-5
: LGTM: Robust directory path resolution
The use of `dirname` with `BASH_SOURCE[0]` is the correct approach for determining the script's directory, regardless of where it's called from.
Line range hint 1-11
: Verify the integration test changes
The script has been significantly refactored to use a more modular approach. We should verify that all the previous test scenarios are still covered by the new scripts.
✅ Verification successful
Let me gather more information about the test scenarios by examining the actual content of these scripts.
Integration test changes are well-structured and maintain full test coverage
The refactoring has successfully modularized the tests while preserving all essential test scenarios:
- `test.sh` acts as the main runner, executing both `persist.sh` and `base.sh` by default
- `base.sh` covers the basic E2E flow: node setup, committee formation, encryption, and decryption
- `persist.sh` adds persistence testing by killing and relaunching the aggregator mid-test
- `fns.sh` centralizes shared functions and configuration
All critical testing patterns are present across the scripts:
- Node/aggregator launching and lifecycle management
- Proper cleanup and error handling with traps
- Wait conditions and synchronization
- Environment setup and configuration
- Verification of outputs
The modular approach has improved maintainability while retaining complete functional coverage.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if all test scenarios from the old implementation are covered in the new scripts
# List all test scripts in the directory
echo "Available test scripts:"
ls tests/basic_integration/*.sh
# Check for common testing patterns in the scripts
echo -e "\nChecking for essential testing patterns:"
rg -l "launch.*node|launch.*aggregator" tests/basic_integration/
rg -l "cleanup|trap.*EXIT" tests/basic_integration/
rg -l "waiton|sleep|wait" tests/basic_integration/
# Look for environment variable usage
echo -e "\nEnvironment variable definitions:"
rg "^[A-Z_]+=.+" tests/basic_integration/
Length of output: 2906
Script:
#!/bin/bash
# Check the content of each test script to understand their scenarios
echo "=== Content of test.sh (main test runner) ==="
cat tests/basic_integration/test.sh
echo -e "\n=== Content of base.sh (base test scenario) ==="
cat tests/basic_integration/base.sh
echo -e "\n=== Content of persist.sh (persistence test scenario) ==="
cat tests/basic_integration/persist.sh
echo -e "\n=== Content of fns.sh (shared functions) ==="
cat tests/basic_integration/fns.sh
Length of output: 12247
packages/ciphernode/core/src/tag.rs (1)
1-12
: Verify integration with EVM event processing system.
Please ensure this tag implementation properly integrates with the EVM event processing system, particularly:
- Tag persistence across node restarts
- Compatibility with historical event processing
- Integration with the event cache mentioned in issue Ensure EVM event tracking processes historical events #171
Let's verify the integration:
✅ Verification successful
Based on the shell script results, I can now generate the final response:
Tag implementation aligns well with EVM event processing system
The tag implementation is properly integrated with the event processing system:
- Tag persistence: The tag is stored in a `OnceLock<String>` which is initialized once at startup via CLI and remains constant throughout the node's lifecycle. This is appropriate since:
  - Historical events are processed in order with consistent tagging
  - Event processing modules (`event_reader.rs`, `sortition.rs`) use the tag for tracing/logging only
  - No persistence across restarts is needed as the tag is CLI-driven
- Event processing integration:
  - The tag is correctly used in EVM event reader (`event_reader.rs`) for tracing event processing
  - Event cache integration is properly handled through the snapshot/restore mechanism, independent of the tag system
  - The tag is used consistently across all event handlers for tracing purposes
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check integration points with EVM event processing
# Check for event processing related files
echo "Checking for event processing files..."
fd -e rs "event|cache|processor"
# Look for tag usage in event processing code
echo "Checking tag usage in event processing..."
rg "get_tag|set_tag" -A 3 -B 3
# Check for persistence-related code
echo "Checking persistence implementation..."
rg "serialize|deserialize|restore|snapshot" -t rust
Length of output: 29846
packages/ciphernode/enclave/Cargo.toml (1)
22-22
: Verify once_cell version and security advisories
The addition of once_cell is appropriate for maintaining event processing state, but let's verify the version.
✅ Verification successful
✅ Version and security check passed for once_cell
The version 1.20.2 is currently the latest stable release on crates.io. The only known security advisory affects versions < 1.0.1 (from 2021), while we're using 1.20.2 which is well above the patched version.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check latest versions and security advisories for once_cell
# Check crates.io for latest versions
curl -s https://crates.io/api/v1/crates/once_cell | jq '.crate.max_stable_version'
# Check for security advisories
gh api graphql -f query='
{
securityVulnerabilities(first: 5, ecosystem: RUST, package: "once_cell") {
nodes {
advisory {
summary
severity
publishedAt
}
vulnerableVersionRange
firstPatchedVersion {
identifier
}
}
}
}'
Length of output: 645
packages/ciphernode/enclave_node/src/ciphernode.rs (4)
25-29
: LGTM: Enhanced function signature with tracing support
The addition of tracing instrumentation and peer_id to the return type improves observability and P2P functionality.
50-57
: Verify deployment block configuration for historical event processing
The addition of the `deploy_block` parameter is crucial for historical event processing. However, we should verify the deployment block configuration to ensure proper initialization.
Let's verify the deployment block configuration:
#!/bin/bash
# Description: Verify enclave contract deployment block configuration
# Expected: Deployment block should be properly configured for historical event processing
# Search for enclave contract deployment block configuration
rg -A 3 "enclave.*deploy_block"
# Check for any TODOs or configuration issues
rg -i "todo|fixme|hack|xxx" | grep -i "deploy.*block"
58-65
: LGTM: Consistent implementation of historical event processing
The CiphernodeRegistrySol attachment follows the same pattern as EnclaveSolReader, maintaining consistency in historical event processing implementation.
79-79
: LGTM: Return value properly includes peer_id
The return value correctly includes the peer_id as specified in the function signature.
packages/ciphernode/enclave/src/main.rs (3)
5-7
: LGTM: New imports support tracing functionality.
The added imports for tracing and tag management are appropriate for the new functionality.
Line range hint 46-59
: LGTM: Tracing instrumentation properly implemented.
The tracing instrumentation is correctly configured with appropriate field attribution and self-skipping.
83-91
: Verify integration with historical EVM event processing.
While the tracing changes look good, there seems to be a disconnect between these changes and the PR's primary objective of processing historical EVM events. Please clarify how this tracing enhancement supports the historical event processing functionality.
✅ Verification successful
Let me check the relationship between the tracing changes and EVM event processing by examining the relevant files.
Based on the results, I can now generate the final response since I have enough information about how tracing is integrated with the EVM event processing.
The tracing changes are well-integrated with historical EVM event processing
The tracing functionality is actively used within the EVM event processing system, specifically in `event_reader.rs`, where it logs both historical and live events:
- `trace!("Processing historical log")` when processing historical events
- `trace!("Received log from EVM")` for live event processing
The tracing enhancement supports debugging and monitoring of the historical event processing by providing visibility into the event processing pipeline, including success and failure cases.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for EVM event processing related code that might use this tracing
rg -l "EvmEventReader|historical.*event"
Length of output: 360
Script:
#!/bin/bash
# Check how tracing is used in EVM event processing
rg -A 5 "set_tag|trace!" packages/ciphernode/evm/
# Check the event reader implementation
ast-grep --pattern 'struct EvmEventReader {
$$$
}'
# Look for historical event processing implementation
rg -B 2 -A 5 "historical.*event"
Length of output: 5490
packages/ciphernode/sortition/src/sortition.rs (4)
40-42
: LGTM! Good encapsulation practice.
The `nodes()` method provides safe read-only access to the internal HashSet.
125-133
: LGTM! Well-structured state loading with good logging.
The load implementation properly handles both snapshot and fresh initialization cases with appropriate tracing.
172-173
: LGTM! Good optimization.
Returning a reference instead of cloning the repository improves performance.
191-195
: LGTM! Consistent tracing implementation.
The tracing implementation is consistent across all handlers and provides good observability.
Also applies to: 202-206, 213-221
packages/ciphernode/evm/src/event_reader.rs (1)
191-191
: Verify event type coverage in extractor function.
Let's verify that the extractor function handles all event types emitted by the contract.
✅ Verification successful
Let me gather more information about the event types and their handling.
Let me search for the event enum definitions to complete the verification.
Let me try a different approach to find the event definitions.
Based on the gathered information, I can now provide a final response:
Event extractor implementation verified and complete
The extractor function correctly handles all contract events:
- IEnclave events: E3Requested (with chain ID handling), CiphertextOutputPublished
- ICiphernodeRegistry events: CiphernodeAdded, CiphernodeRemoved
Each event has proper error handling, type conversion implementations (via From trait), and is correctly mapped to the core domain events.
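The extractor pattern verified above can be sketched like this. The signature-hash constants are placeholders (the real code matches SIGNATURE_HASH values generated by alloy's sol! macro), and the enum is trimmed to the four events listed.

```rust
// Hedged sketch of the extractor: match a log's first topic against known
// event signature hashes and map it to a core domain event.

#[derive(Debug, PartialEq)]
enum EnclaveEvent {
    E3Requested { chain_id: u64 },
    CiphertextOutputPublished,
    CiphernodeAdded,
    CiphernodeRemoved,
}

// Placeholder 32-byte signature hashes (NOT the real keccak256 values).
const E3_REQUESTED: [u8; 32] = [1; 32];
const CIPHERTEXT_OUTPUT_PUBLISHED: [u8; 32] = [2; 32];
const CIPHERNODE_ADDED: [u8; 32] = [3; 32];
const CIPHERNODE_REMOVED: [u8; 32] = [4; 32];

// Returns None for unknown topics so unrecognized logs are skipped
// rather than treated as errors.
fn extractor(topic: Option<&[u8; 32]>, chain_id: u64) -> Option<EnclaveEvent> {
    match topic {
        Some(&E3_REQUESTED) => Some(EnclaveEvent::E3Requested { chain_id }),
        Some(&CIPHERTEXT_OUTPUT_PUBLISHED) => Some(EnclaveEvent::CiphertextOutputPublished),
        Some(&CIPHERNODE_ADDED) => Some(EnclaveEvent::CiphernodeAdded),
        Some(&CIPHERNODE_REMOVED) => Some(EnclaveEvent::CiphernodeRemoved),
        _ => None,
    }
}
```

Carrying the chain ID only on E3Requested mirrors the review finding that this is the one event needing per-chain context.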
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all event definitions in the contract and compare with extractor implementation
# Expected results: All contract events should have corresponding handling in the extractor
# Find contract event definitions
echo "Finding contract events..."
ast-grep --pattern 'event $NAME($$$);'
# Find extractor implementation
echo "Finding extractor implementation..."
rg -A 5 'fn.*extractor.*LogData.*EnclaveEvent'
Length of output: 1614
Script:
#!/bin/bash
# Search for all event handlers in the extractor implementations
echo "=== Searching for event handlers in extractors ==="
rg -B 1 -A 10 'match topic \{' packages/ciphernode/evm/src/
# Search for event signatures/topics
echo -e "\n=== Searching for event signature definitions ==="
rg "::SIGNATURE_HASH" packages/ciphernode/evm/src/
# Search for event structs/types
echo -e "\n=== Searching for event struct definitions ==="
rg "struct.*Event" packages/ciphernode/evm/src/
Length of output: 3984
Script:
#!/bin/bash
# Search for EnclaveEvent enum definition
echo "=== Searching for EnclaveEvent enum ==="
rg -A 10 'enum EnclaveEvent' packages/ciphernode/evm/src/
# Search for event interfaces to see all possible events
echo -e "\n=== Searching for event interfaces ==="
rg "interface I(Enclave|CiphernodeRegistry)" -A 10 packages/ciphernode/evm/src/
Length of output: 323
Script:
#!/bin/bash
# Search for all Rust enums in the codebase
echo "=== Searching for event-related enums ==="
rg -B 2 -A 10 'pub enum' packages/ciphernode/evm/src/
# Search for From implementations for events
echo -e "\n=== Searching for From implementations for events ==="
rg -B 2 -A 10 'impl From<.*>' packages/ciphernode/evm/src/
# Search for all event-related types
echo -e "\n=== Searching for event-related types ==="
rg -B 2 'type.*Event' packages/ciphernode/evm/src/
Length of output: 9568
tests/basic_integration/fns.sh (1)
5-5: Variables assigned but not used within the script
The following variables are assigned but not used within this script:
- PLAINTEXT (line 5)
- RPC_URL (line 14)
- CIPHERNODE_SECRET (line 17)
- ENCLAVE_CONTRACT (line 21)
- REGISTRY_CONTRACT (line 22)
- REGISTRY_FILTER_CONTRACT (line 23)
- INPUT_VALIDATOR_CONTRACT (line 24)
- CIPHERNODE_ADDRESS_1 to CIPHERNODE_ADDRESS_4 (lines 27-30)
If these variables are intended to be used by child processes or other scripts, consider exporting them using the export command. If they are unnecessary, you may remove them to clean up the script.
Also applies to: 14-14, 17-17, 21-24, 27-30
🧰 Tools
🪛 Shellcheck
[warning] 5-5: PLAINTEXT appears unused. Verify use (or export if used externally).
(SC2034)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
It's good to go in.
Closes: #171
This PR enables ciphernodes to read historical EVM events from RPC nodes. A node should attempt to load historical events first before listening for live EVM events. The EvmEventReader will hold the state of the events it has processed and resume processing from that point if it is shut down and restarted.
This adds an optional config option to provide the deployment block for a contract:
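A sketch of what such a config entry might look like — deploy_block is the option this PR adds, while the surrounding chain/contract structure shown here is assumed for illustration:

```yaml
chains:
  - name: "hardhat"
    rpc_url: "ws://localhost:8545"
    contracts:
      enclave:
        address: "0x..."
        # Block at which the contract was deployed; historical event
        # fetching starts here instead of scanning from block 0.
        deploy_block: 120
```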
The deploy_block is passed to our call to get historical events.
As we create deployments we will want to make note of and supply these to default configurations for the ciphernodes. This is much easier than attempting to get this information from on-chain sources.
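The resume behavior described above can be sketched as follows; the names are illustrative, not the real EvmEventReader API.

```rust
// Sketch: the reader persists the last block it processed and, on restart,
// continues from the later of the configured deploy block and its saved state.

#[derive(Default)]
struct ReaderState {
    last_block: Option<u64>,
}

// Decide where a (re)started reader should begin fetching historical logs.
fn start_block(deploy_block: u64, state: &ReaderState) -> u64 {
    match state.last_block {
        // Resume just after the last processed block, but never before
        // the contract existed.
        Some(last) => deploy_block.max(last + 1),
        // Fresh start: fetch everything since deployment.
        None => deploy_block,
    }
}
```

This is why supplying the deploy block in the default configuration matters: without it a fresh node would have to scan from genesis.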
EvmEventReader, as it is generic and a real pain to test and manage.
Summary by CodeRabbit
Release Notes
New Features
- Contract enum for better handling of contract addresses in configurations.
- tag parameter for better command context management.
- DataStore and Repository, improving data handling capabilities.
Bug Fixes
Documentation
Tests