Simplify usage of `FixedHash` field on `Block` type #520
Conversation
Walkthrough

The changes involve updates to the …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Block as Block
    participant Hash as Hash
    participant Test as Test
    Block->>Hash: Set FixedHash to gethCommon.Hash
    Hash-->>Block: Return hash value
    Test->>Block: Validate block hash
    Test-->>Test: Assert hash matches expected value
```
Actionable comments posted: 1
Outside diff range, codebase verification and nitpick comments (2)
models/block.go (2)
49-50: Clarification on `FixedHash` field documentation.

The comment above the `FixedHash` field explains the purpose of this field well, ensuring that the hash calculated with the previous format is preserved. However, it might be beneficial to clarify further why this specific implementation choice (using `gethCommon.Hash` instead of a pointer) improves safety and simplicity, as mentioned in the PR summary.
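A minimal sketch of why the value type is safer than a pointer. `Hash` here is a stand-in for `gethCommon.Hash` (a fixed-size 32-byte array in go-ethereum), and both struct shapes are purely illustrative, not the actual codebase types:

```go
package main

import "fmt"

// Hash stands in for gethCommon.Hash, which is a 32-byte array type.
type Hash [32]byte

// blockWithPtr mimics the old shape: a pointer field that callers
// must nil-check before every dereference.
type blockWithPtr struct {
	FixedHash *string
}

// blockWithValue mimics the new shape: the zero value is a usable
// "empty hash", so no nil dereference is possible.
type blockWithValue struct {
	FixedHash Hash
}

func main() {
	var v blockWithValue
	// Comparing against the zero value is always safe.
	fmt.Println(v.FixedHash == (Hash{})) // true: no hash has been fixed yet

	var p blockWithPtr
	// The pointer variant forces an extra nil check first.
	fmt.Println(p.FixedHash == nil) // true
}
```

With the value type, "no fixed hash" is simply the array zero value, so the nil-dereference class of bugs disappears by construction.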
121-121: Ensure consistency in hash assignment during legacy block decoding.

The assignment of `FixedHash` directly from the `block.Hash` in the `decodeLegacyBlockEvent` function is a good use of the new `FixedHash` type. This ensures that the hash is preserved accurately during the decoding of legacy events. It would be beneficial to add a comment here explaining that this change is part of the transition to the new hash handling method.
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (3)
- models/block.go (9 hunks)
- models/block_test.go (4 hunks)
- tests/web3js/eth_non_interactive_test.js (1 hunks)
Additional comments not posted (5)
models/block.go (2)
59-60: Approval of the updated hash check logic.

The updated logic in the `Hash()` method to check if `FixedHash` is not the empty hash (`gethCommon.HexToHash("0x0")`) is a clear improvement over the previous need to check for nil and an empty string. This change simplifies the code and reduces potential errors related to nil dereferencing.
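To see why this check is sound: `gethCommon.HexToHash("0x0")` is simply the all-zero 32-byte array, i.e. the zero value of the hash type. The sketch below mimics only that right-alignment behavior with a hypothetical `hexToHash`; the real parsing lives in go-ethereum:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// Hash mimics gethCommon.Hash (a 32-byte array) for this sketch.
type Hash [32]byte

// hexToHash is a simplified stand-in for gethCommon.HexToHash: it
// right-aligns the decoded bytes into a 32-byte array, so "0x0"
// decodes to the all-zero hash.
func hexToHash(s string) Hash {
	var h Hash
	if len(s) >= 2 && s[:2] == "0x" {
		s = s[2:]
	}
	if len(s)%2 == 1 {
		s = "0" + s
	}
	b, _ := hex.DecodeString(s)
	copy(h[32-len(b):], b)
	return h
}

func main() {
	empty := hexToHash("0x0")
	// The "0x0" hash is just the zero value of the array type,
	// so the emptiness check needs no nil handling at all.
	fmt.Println(empty == (Hash{})) // true
}
```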
175-175: Validate the new `FixedHash` assignment in `decodeBlockBreakingChanges`.

The assignment of `FixedHash` to `blockHash` in the `decodeBlockBreakingChanges` function is a crucial part of ensuring that blocks decoded from older formats retain their original hash. This change aligns with the PR's goal of simplifying hash handling. Ensure that this change is thoroughly tested, especially since it involves historical data and could impact data integrity during re-indexing.

Verification successful

The `FixedHash` assignment is properly tested in `decodeBlockBreakingChanges`. The `Test_FixedHashBlock` function in `models/block_test.go` ensures that the `FixedHash` is retained correctly during encoding and decoding, addressing the review comment's concern about data integrity.

- `models/block_test.go`: `Test_FixedHashBlock` verifies the `FixedHash` functionality.

Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify that the `FixedHash` assignment in `decodeBlockBreakingChanges` retains the original hash.
# Test: Search for the function usage. Expect: Only occurrences of the new signature.
rg --type go -A 5 $'decodeBlockBreakingChanges'
```

Length of output: 763
Script:
```shell
#!/bin/bash
# Search for test files or functions that involve `decodeBlockBreakingChanges` or `FixedHash`.
rg --type go 'decodeBlockBreakingChanges|FixedHash' --glob '*_test.go'
```

Length of output: 175
Script:
```shell
#!/bin/bash
# Extract and review the `Test_FixedHashBlock` function from `models/block_test.go`.
rg --type go -A 20 'func Test_FixedHashBlock' models/block_test.go
```

Length of output: 513
models/block_test.go (2)
35-42: Improved hash comparison in tests.

The update to use `gethCommon.HexToHash` for direct hash comparison instead of converting to strings enhances type safety and clarity in the test assertions.
77-77: Enhanced handling and testing of `FixedHash`.

The changes to directly use `gethCommon.Hash` for `FixedHash` and update the assertions accordingly improve the clarity and efficiency of the tests. This aligns well with the PR's objectives to simplify and enhance safety in hash handling.

Also applies to: 82-82, 91-91, 102-102
tests/web3js/eth_non_interactive_test.js (1)
28-28: Verify the updated block size expectation.

The change in the expected block size from `3995n` to `4028n` needs verification to ensure it aligns with the updated blockchain logic or data changes. It's crucial to confirm that this new expected size is accurate based on the modifications made to the blockchain's behavior.

Run the following script to verify the block size calculation:
Actionable comments posted: 0
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (3)
- models/block.go (9 hunks)
- models/block_test.go (4 hunks)
- tests/web3js/eth_non_interactive_test.js (1 hunks)
Files skipped from review as they are similar to previous changes (3)
- models/block.go
- models/block_test.go
- tests/web3js/eth_non_interactive_test.js
Yeah, this is nicer. I did have this approach, but with a pointer; I should remove the pointer. I think this is good, but I wouldn't do it unless we plan a reindex anyway, so we can keep this in draft until then and add a comment in the description.
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
Actionable comments posted: 17
🧹 Outside diff range and nitpick comments (38)
models/receipt_test.go (1)
13-13: Consider adding test coverage for the new return value.

The change correctly handles the updated `decodeTransactionEvent` function signature by capturing the new `*events.TransactionEventPayload` return value. However, since we're discarding this value with `_`, consider adding test cases to validate this new return value's correctness.

storage/mocks/mocks.go (1)
30-30: Consider making TotalGasUsed configurable

The hardcoded value of 30,000 for `TotalGasUsed` might not be suitable for all test scenarios. Consider making it a parameter of the `NewBlock` function:

```diff
-func NewBlock(height uint64) *models.Block {
+func NewBlock(height uint64, totalGasUsed ...uint64) *models.Block {
 	// ...
+	gasUsed := uint64(30_000) // default value
+	if len(totalGasUsed) > 0 {
+		gasUsed = totalGasUsed[0]
+	}
 	// ...
-	TotalGasUsed: uint64(30_000),
+	TotalGasUsed: gasUsed,
```

tests/web3js/eth_get_storage_at_test.js (4)
6-11: Add descriptive test name and error handling.

The test setup could be improved by:
- Adding a more descriptive test name that explains the test scenarios
- Adding error handling for the contract deployment

```diff
-it('should retrieve storage slots of contracts', async () => {
+it('should correctly retrieve storage slots at different block heights', async () => {
   let deployed = await helpers.deployContract('storage')
+  assert.ok(deployed, 'Contract deployment failed')
   let contractAddress = deployed.receipt.contractAddress
+  assert.ok(contractAddress, 'Contract address is undefined')

   // make sure deploy results are correct
   assert.equal(deployed.receipt.status, conf.successStatus)
```
13-20: Enhance storage slot validation and documentation.

The initial storage validation could be improved by:
- Better documenting the storage layout
- Validating the initial value is actually 0

```diff
 // get the default deployed value on contract
 let callRetrieve = await deployed.contract.methods.retrieve().encodeABI()
 let result = await web3.eth.call({ to: contractAddress, data: callRetrieve }, 'latest')

-let slot = 0 // The slot for the 'number' variable
+// Storage layout:
+// slot 0: uint256 number (main storage variable)
+const STORAGE_SLOT = 0

-let stored = await web3.eth.getStorageAt(contractAddress, slot, 'latest')
+let stored = await web3.eth.getStorageAt(contractAddress, STORAGE_SLOT, 'latest')
 assert.equal(stored, result)

+// Verify initial value is 0
+let value = web3.eth.abi.decodeParameter('uint256', stored)
+assert.equal(value, 0n, 'Initial storage value should be 0')
```
21-37: Improve value update readability and gas handling.

Consider the following improvements:
- Use named constants for test values
- Add gas estimation validation

```diff
+// Test values
+const INITIAL_TEST_VALUE = 1337n

 // set the value on the contract, to its current value
-let initValue = 1337
+let initValue = INITIAL_TEST_VALUE
 let updateData = deployed.contract.methods.store(initValue).encodeABI()
+
+// Estimate gas to ensure transaction won't fail
+const gasEstimate = await web3.eth.estimateGas({
+  from: conf.eoa.address,
+  to: contractAddress,
+  data: updateData
+})
+assert.ok(gasEstimate > 0, 'Gas estimation failed')
```
1-72: Overall assessment: Good test coverage with room for improvement.

The test thoroughly validates storage slot retrieval across different block heights, which is crucial for the PR's objective of migrating the `FixedHash` field. The test structure is good, but could be enhanced by:
- Adding more error cases (e.g., invalid slots, non-existent contracts)
- Testing with larger values to ensure proper handling of different storage sizes
- Adding comments explaining the relationship between block heights and expected values
These improvements would make the test more robust and maintainable.
tests/web3js/eth_revert_reason_test.js (1)
80-93: Extract duplicated polling logic into a helper function.

The polling logic is duplicated. Consider extracting it into a reusable helper function to improve maintainability and ensure consistent behavior. Create a helper function in `helpers.js`:

```js
// Add to helpers.js
async function waitForTransactionReceipt(txHash, maxAttempts = 50) {
    let rcp = null;
    let attempts = 0;
    while (rcp == null && attempts < maxAttempts) {
        rcp = await helpers.callRPCMethod(
            'eth_getTransactionReceipt',
            [txHash]
        );
        if (rcp.body.result == null) {
            rcp = null;
            attempts++;
            await new Promise(resolve => setTimeout(resolve, 500));
        }
    }
    if (rcp == null) {
        throw new Error('Timeout waiting for transaction receipt');
    }
    return rcp;
}
```

Then simplify both test cases:

```diff
-let rcp = null
-// wait until the transaction is executed & indexed, and its
-// receipt becomes available.
-while (rcp == null) {
-    rcp = await helpers.callRPCMethod(
-        'eth_getTransactionReceipt',
-        [txHash]
-    )
-    if (rcp.body.result == null) {
-        rcp = null
-    }
-}
+let rcp = await helpers.waitForTransactionReceipt(txHash);
```

tests/web3js/verify_cadence_arch_calls_test.js (2)
14-32
: Consider parameterizing the height value.

The hardcoded height value `120` in `verifyArchCallToRandomSource` might make the test brittle. Consider making it configurable or relative to the current block height.

```diff
-let getRandomSourceData = deployed.contract.methods.verifyArchCallToRandomSource(120).encodeABI()
+const testHeight = latest - 10 // or from config
+let getRandomSourceData = deployed.contract.methods.verifyArchCallToRandomSource(testHeight).encodeABI()
```
70-92: Document test data and expected outcomes.

The test uses hardcoded values without explaining their significance:
- The purpose of the specific hash `0x1bacdb...`
- The meaning of the encoded proof data
- Why `false` is the expected result

Consider adding comments to explain these values and their expected behavior.

```diff
+// Example hash representing an invalid/test ownership proof
 let verifyCOAOwnershipProofData = deployed.contract.methods.verifyArchCallToVerifyCOAOwnershipProof(
     tx.to,
     '0x1bacdb569847f31ade07e83d6bb7cefba2b9290b35d5c2964663215e73519cff',
     web3.utils.hexToBytes('f853c18088f8d6e0586b0a20c78365766df842b840b90448f4591df2639873be2914c5560149318b7e2fcf160f7bb8ed13cfd97be2f54e6889606f18e50b2c37308386f840e03a9fff915f57b2164cba27f0206a95')
 ).encodeABI()
```

Also, consider extracting test data to a configuration file for better maintainability.
models/block.go (2)
67-67: Consider enhancing error context.

The function changes look good, but consider including the event type in the error message for better debugging.

```diff
-	"failed to Cadence-decode EVM block event [%s]: %w",
+	"failed to Cadence-decode EVM block event [type=%s, event=%s]: %w",
+	event.Type(), event.String(), err,
```

Also applies to: 70-70, 77-77, 82-82, 97-97
49-50: Plan the migration strategy carefully.

The type change from `*string` to `gethCommon.Hash` improves the architecture but requires careful migration planning:
- Consider adding a version field to the block structure to help with future migrations
- Document the migration process for operators
- Add metrics to track migration progress
- Consider providing a rollback strategy

Also applies to: 59-60, 155-155
tests/e2e_web3js_test.go (1)
Line range hint
31-71
: Consider scope of test additions.

While the test additions are valuable, they seem to extend beyond the PR's objective of simplifying
FixedHash
field usage. Suggestions:
- Move the skipped tests (
verify Cadence arch calls
andtest transaction traces
) to separate PRs to maintain focused changes- Add explicit test cases in
eth_get_storage_at_test
to verify the behavior with the newFixedHash
field implementationmodels/events.go (1)
39-44
: Consider documenting re-indexing requirements.

These changes to event handling, while well-implemented, are part of a larger change that requires re-indexing. Consider adding a comment in the code or updating documentation to note this requirement for future maintainers.
Also applies to: 116-122, 127-133, 169-173, 181-186
services/ingestion/engine.go (1)
39-39
: Consider documenting the `blocksProvider` field.

The
blocksProvider
field should be documented in the struct's comment block to explain its purpose and relationship with block processing.services/replayer/blocks_provider_test.go (4)
55-67
: Fix duplicate test case name.

The test case name "with new block non-sequential to latest block" is duplicated from the previous test case, but this test actually verifies sequential block processing (10 followed by 11).
Consider renaming to better reflect the test's purpose:
- t.Run("with new block non-sequential to latest block", func(t *testing.T) { + t.Run("with new block sequential to latest block", func(t *testing.T) {
69-98
: Consider adding more test scenarios for BlockContext.

While the current test thoroughly validates all context properties, consider adding test cases for:
- Historic blocks
- Edge cases (e.g., block 0)
- Invalid block heights
99-198
: Consider extracting the magic number 256 into a named constant.

The value 256 appears multiple times in the test and seems to be a significant boundary for block height ranges. Consider extracting it into a named constant to improve readability and maintainability.
+const blockHeightRange = 256
282-291
: Consider making tracer configuration more flexible.

The tracer configuration is currently hardcoded. Consider making it configurable through parameters to support different test scenarios.

```diff
-func newCallTracer(t *testing.T) *tracers.Tracer {
+func newCallTracer(t *testing.T, config json.RawMessage) *tracers.Tracer {
+	if config == nil {
+		config = json.RawMessage(`{ "onlyTopCall": true }`)
+	}
 	tracer, err := tracers.DefaultDirectory.New(
 		"callTracer",
 		&tracers.Context{},
-		json.RawMessage(`{ "onlyTopCall": true }`),
+		config,
 	)
 	require.NoError(t, err)
 	return tracer
 }
```

models/events_test.go (2)
Line range hint
197-224
: LGTM! Consider enhancing test readability.

The block hash validation logic correctly tests the new
gethCommon.Hash
implementation. The test properly ensures that the block event payload contains the correct block hash and height. Consider adding a descriptive comment before the block event payload assertions to improve test readability:
```diff
 // assert we have collected the EVM.BlockExecuted event payload
 blockEventPayload := cdcEvents.BlockEventPayload()
+// Verify that the block hash in the event payload matches the expected hash
 blockHash, err := block.Hash()
 require.NoError(t, err)
 assert.Equal(t, blockHash, blockEventPayload.Hash)
```
233-237
: LGTM! Consider enhancing test documentation.

The transaction event payload validation correctly ensures consistency between block and transaction events, verifying both hash matching and block height correlation. Consider expanding the comment to better describe the validation being performed:
Consider expanding the comment to better describe the validation being performed:
-// assert we have collected the EVM.TransactionExecuted event payloads -// in their correct order. +// Verify that transaction event payloads: +// 1. Are collected in the correct order +// 2. Have matching transaction hashes +// 3. Reference the correct block height from the block event txEventPayload := cdcEvents.TxEventPayloads()[i] assert.Equal(t, tx.Hash(), txEventPayload.Hash) assert.Equal(t, blockEventPayload.Height, txEventPayload.BlockHeight)services/requester/requester.go (2)
Line range hint
492-502
: Refactor `evmToCadenceHeight` to use `uint64` instead of `int64`

The `evmToCadenceHeight` method currently accepts `int64` and has special handling for negative values. This could be simplified by changing the parameter type to `uint64` and removing the negative check.

```diff
-func (e *EVM) evmToCadenceHeight(height int64) (uint64, error) {
-	if height < 0 {
-		return LatestBlockHeight, nil
-	}
-
-	evmHeight := uint64(height)
+func (e *EVM) evmToCadenceHeight(height uint64) (uint64, error) {
+	evmHeight := height
```
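For context, the negative-height special case that the diff removes behaves like this sketch; `LatestBlockHeight` is shown as a hypothetical sentinel constant, not necessarily the value the codebase uses:

```go
package main

import "fmt"

// LatestBlockHeight is a hypothetical sentinel meaning "resolve to
// the latest block" (assumption for this sketch).
const LatestBlockHeight = ^uint64(0)

// evmToCadenceHeight sketches the documented behavior: negative
// heights resolve to the latest block, non-negative heights pass through.
func evmToCadenceHeight(height int64) uint64 {
	if height < 0 {
		return LatestBlockHeight
	}
	return uint64(height)
}

func main() {
	fmt.Println(evmToCadenceHeight(-1) == LatestBlockHeight) // true
	fmt.Println(evmToCadenceHeight(42))                      // 42
}
```

Switching the parameter to `uint64` pushes the "latest" decision to the caller, which is why the negative check can be dropped entirely.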
Line range hint
87-122
: Document the special handling of block heights in the interface

The interface implementation treats negative block heights as a request for the latest block, but this behavior is not documented in the interface comments. This implicit behavior should be explicitly documented.
Add interface-level documentation explaining the block height handling:
```diff
 // TODO(janezp): Requester does need to know about special EVM block heights. evmHeight should be uint64.
 type Requester interface {
+	// Block height handling:
+	// - Negative values: Latest block height will be used
+	// - Zero or positive values: Specific block height will be used
```

services/replayer/blocks_provider.go (2)
79-85
: Clarify error message in `OnBlockReceived` for better understanding

The current error message may be unclear: "received new block: %d, non-sequential of latest block: %d". Consider rephrasing the error message for clarity:

```diff
 return fmt.Errorf(
-	"received new block: %d, non-sequential of latest block: %d",
+	"received block %d, which is not sequential to the latest block %d",
 	block.Height,
 	bp.latestBlock.Height,
 )
```
92-112
: Consider caching blocks to enhance performance

If `GetSnapshotAt` is frequently called for the same heights, implementing caching can reduce database access and improve performance.

services/ingestion/event_subscriber.go (9)
Line range hint
37-93
: Potential Data Race on `r.height` in Concurrent Access

The
height
field inRPCEventSubscriber
is accessed and modified within goroutines without synchronization. In theSubscribe
method,r.height
is updated during backfilling (lines 88 and 93) while being read elsewhere. This can lead to data races ifSubscribe
is called concurrently or if multiple goroutines accessr.height
.Consider adding synchronization mechanisms, such as mutex locks, to protect access to
r.height
or ensure thatSubscribe
is not called concurrently.
Line range hint
138-150
: Unprotected Access to `r.recovery` and `r.recoveredEvents`
The fields
r.recovery
andr.recoveredEvents
are accessed and modified within goroutines without proper synchronization. In methods likesubscribe
,recover
, andaccumulateEventsMissingBlock
, these fields are read and written from multiple goroutines, potentially causing data races.Recommend adding synchronization (e.g., mutexes) to protect access to
r.recovery
andr.recoveredEvents
, ensuring thread-safe operations.Also applies to: 294-307
339-359
: Function Naming Consistency: Rename `blocksFilter` to `blockFilter`
The function
blocksFilter
is named inconsistently with the comment that refers to it asblockFilter
. For clarity and consistency, it would be better to unify the naming.Consider renaming the function to
blockFilter
to match the comment and improve readability.Apply this diff:
-func blocksFilter(chainId flowGo.ChainID) flow.EventFilter { +func blockFilter(chainId flowGo.ChainID) flow.EventFilter {And update all references accordingly.
126-131
: Improve Error Message in `SubscribeEventsByBlockHeight`
The error message when failing to subscribe lacks sufficient context.
Enhance the error message for clarity:
if err != nil { eventsChan <- models.NewBlockEventsError( - fmt.Errorf("failed to subscribe to events by block height: %d, with: %w", height, err), + fmt.Errorf("failed to subscribe to events starting at block height %d: %w", height, err), ) return eventsChan }
Line range hint
295-301
: Data Race When Appending to `r.recoveredEvents`
Appending to
r.recoveredEvents
without synchronization can cause data races since slices are not thread-safe.Implement synchronization when modifying
r.recoveredEvents
or use thread-safe data structures likesync.Mutex
to protect access.
339-359
: Optimize Event Type ID Calculations

The event type IDs in
blocksFilter
are recalculated every time the function is called.Consider caching these IDs if
blocksFilter
is called frequently to improve performance.var ( blockExecutedEventID string transactionExecutedEventID string ) func init() { // Initialize the event type IDs once } func blocksFilter(chainId flowGo.ChainID) flow.EventFilter { // Use the cached event type IDs }
50-55
: Initializerecovery
andrecoveredEvents
FieldsIn
NewRPCEventSubscriber
, the fieldsrecovery
andrecoveredEvents
are not explicitly initialized.Initialize these fields for clarity, even though they default to zero values.
return &RPCEventSubscriber{ logger: logger, client: client, chain: chainID, height: startHeight, + recovery: false, + recoveredEvents: nil, }
27-27
: Update Interface Documentation After Method Signature Change

The
Subscribe
method in theEventSubscriber
interface no longer accepts aheight
parameter, but the documentation still references it.Update the method comment to reflect the new signature.
// Subscribe to EVM events ~from the provided height~, and return a channel with the events. // // The BlockEvents type will contain an optional error in case // the error happens, the consumer of the channel should handle it.
Line range hint
273-275
: Simplify Error Handling in `fetchMissingData`
The error message could be more informative by specifying the expected and actual counts.
Modify the error message for clarity:
if len(recoveredEvents) != 1 { return models.NewBlockEventsError( fmt.Errorf( - "received %d but expected 1 event for height %d", + "expected 1 event for height %d but received %d", len(recoveredEvents), blockEvents.Height, ), ) }bootstrap/bootstrap.go (1)
137-144
: Enhance error context when initializing the tracer.Currently, the error returned when initializing the tracer lacks context. Wrapping the error message will provide more clarity on where the failure occurred.
Apply this diff to wrap the error with additional context:
tracer, err := tracers.DefaultDirectory.New( callTracerName, &tracers.Context{}, json.RawMessage(callTracerConfig), ) if err != nil { - return err + return fmt.Errorf("failed to initialize tracer: %w", err) }tests/web3js/debug_traces_test.js (4)
16-229
: Consider splitting the long test case into smaller, focused tests

The test function
should retrieve transaction traces
spans over 200 lines and covers multiple scenarios, including testing different tracers and transaction states. Splitting it into smaller, specific test cases improves readability and maintainability.
255-256
: Ensure consistent validation of response bodies

In some places, you're checking
assert.isDefined(response.body.result)
, while in others you checkassert.isDefined(response.body)
. For consistency and to ensure accurate validations, align the assertions based on the expected structure of the response.For example:
// If the expected property is `response.body.result`, update assertions accordingly - assert.isDefined(response.body) + assert.isDefined(response.body.result)Also applies to: 282-283, 309-310, 327-328, 401-402, 434-435, 448-449
46-46
: Improve readability of the inline JavaScript tracer code

The JavaScript tracer code is written as a single-line string, which is hard to read and maintain. Defining it as a multi-line template string enhances readability.
Apply this diff to enhance readability:
- let jsTracer = '{hist: {}, nops: 0, step: function(log, db) { var op = log.op.toString(); if (this.hist[op]){ this.hist[op]++; } else { this.hist[op] = 1; } this.nops++; }, fault: function(log, db) {}, result: function(ctx) { return this.hist; }}' + let jsTracer = ` + { + hist: {}, + nops: 0, + step: function(log, db) { + var op = log.op.toString(); + if (this.hist[op]) { + this.hist[op]++; + } else { + this.hist[op] = 1; + } + this.nops++; + }, + fault: function(log, db) { }, + result: function(ctx) { + return this.hist; + } + }`Also applies to: 337-337
28-34
: Refactor repeated RPC call and assertion code into helper functions

There is repeated code for making RPC calls using
helpers.callRPCMethod
and asserting the response status and body. Consider abstracting this into helper functions to reduce duplication and improve maintainability.For example:
async function callAndAssert(method, params) { const response = await helpers.callRPCMethod(method, params) assert.equal(response.status, 200) assert.isDefined(response.body.result || response.body) return response.body.result || response.body }Then replace:
response = await helpers.callRPCMethod('debug_traceTransaction', [receipt.transactionHash, callTracer]) assert.equal(response.status, 200) assert.isDefined(response.body.result) txTrace = response.body.resultWith:
txTrace = await callAndAssert('debug_traceTransaction', [receipt.transactionHash, callTracer])Also applies to: 85-91, 111-117, 137-143, 151-157, 190-196, 223-229, 250-256, 278-284, 304-310, 323-329, 338-344, 396-402, 429-435, 443-449
🛑 Comments failed to post (17)
storage/mocks/mocks.go (2)
26-26:
⚠️ Potential issue: Fix timestamp implementation
Using `time.Now().Second()` for the timestamp is problematic because:
- It only captures seconds (0-59), which could lead to duplicate timestamps
- It doesn't provide sufficient granularity for blockchain timestamps
- It makes tests non-deterministic
Consider using a deterministic timestamp based on the block height:
```diff
-Timestamp: uint64(time.Now().Second()),
+Timestamp: uint64(1600000000 + height), // Base timestamp + height for deterministic values
```

Committable suggestion skipped: line range outside the PR's diff.
24-31: 💡 Codebase verification
Based on the evidence gathered, I can now generate the final response:
Reindexing is required - Block structure changes affect PebbleDB storage
The changes to the Block structure will impact the persistent storage layer implemented in PebbleDB. The storage implementation in
storage/pebble/blocks.go
directly stores Block models, and any structural changes will affect the serialization format.
- The system uses PebbleDB for block storage (
storage/pebble/blocks.go
)- Blocks are persisted through the
Store
interface defined instorage/index.go
- Changes to the mock structure reflect changes in the actual Block model that is stored
Action items:
- A full reindex of the blockchain data will be necessary
- Implement a migration strategy for the PebbleDB storage
- Consider adding a version field to the block storage format to handle future schema changes
🔗 Analysis chain
Verify reindexing plan before merging
The changes to the Block structure in this mock file reflect significant modifications to the actual Block structure. As mentioned in the PR objectives, these changes will require reindexing of existing data.
Please ensure that:
- A reindexing plan is in place
- The impact on existing data has been assessed
- Downtime requirements have been communicated
The previous search didn't yield results because there might not be SQL files. Let's verify the database schema and block structure to understand the impact better.
I want to verify how blocks are persisted to understand the reindexing implications better.
We found the storage directory but need to understand its implementation to verify the reindexing implications.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Check for any existing indices or migrations that might be affected
rg -l "CREATE INDEX|CREATE TABLE" --type sql
```

Length of output: 195
Script:
```shell
#!/bin/bash
# Let's first check if there are any SQL files in the repository
fd -e sql

# Let's also check for any database schema or migration related files
fd -g "*schema*"
fd -g "*migration*"

# Let's search for block structure definition in the codebase
rg -A 5 "type Block struct"
```

Length of output: 877
Script:
```shell
#!/bin/bash
# Let's check the storage implementation to understand how blocks are stored
rg -A 10 "interface.*Storage"

# Also check for any persistence layer or database related code
fd "storage|db|database" --type d

# Let's see the actual block storage implementation
rg -A 5 "func.*SaveBlock|func.*StoreBlock"
```

Length of output: 121
Script:
```shell
#!/bin/bash
# Let's examine the storage directory contents
ls -R storage/

# Check the storage implementation files
rg -l "Block" storage/

# Look for any persistence or database related code in the storage directory
rg -A 5 "(Save|Store|Insert|Update).*Block" storage/
```

Length of output: 5020
tests/web3js/eth_get_storage_at_test.js (2)
51-72: 🛠️ Refactor suggestion
Refactor storage validation logic and remove commented code.
The storage validation logic could be improved by:
- Extracting repeated decoding logic
- Using named constants
- Removing commented line
+ async function getDecodedStorage(address, slot, blockHeight) { + const stored = await web3.eth.getStorageAt(address, slot, blockHeight) + return web3.eth.abi.decodeParameter('uint256', stored) + } + let latestHeight = await web3.eth.getBlockNumber() // assert the storage slot on latest block - stored = await web3.eth.getStorageAt(contractAddress, slot, latestHeight) - value = web3.eth.abi.decodeParameter('uint256', stored) - assert.equal(value, 100n) + value = await getDecodedStorage(contractAddress, STORAGE_SLOT, latestHeight) + assert.equal(value, SECOND_TEST_VALUE) - // // assert the storage slot on previous block + // assert the storage slot on previous block - stored = await web3.eth.getStorageAt(contractAddress, slot, latestHeight - 1n) - value = web3.eth.abi.decodeParameter('uint256', stored) - assert.equal(value, 1337n) + value = await getDecodedStorage(contractAddress, STORAGE_SLOT, latestHeight - 1n) + assert.equal(value, INITIAL_TEST_VALUE) // assert the storage slot on block of contract deployment - stored = await web3.eth.getStorageAt(contractAddress, slot, deployed.receipt.blockNumber) - value = web3.eth.abi.decodeParameter('uint256', stored) - assert.equal(value, 1337n) + value = await getDecodedStorage(contractAddress, STORAGE_SLOT, deployed.receipt.blockNumber) + assert.equal(value, INITIAL_TEST_VALUE) // assert the storage slot on block prior to contract deployment - stored = await web3.eth.getStorageAt(contractAddress, slot, deployed.receipt.blockNumber - 1n) - value = web3.eth.abi.decodeParameter('uint256', stored) - assert.equal(value, 0n) + value = await getDecodedStorage(contractAddress, STORAGE_SLOT, deployed.receipt.blockNumber - 1n) + assert.equal(value, 0n, 'Storage should be empty before contract deployment')Committable suggestion skipped: line range outside the PR's diff.
38-50: 🛠️ Refactor suggestion
Refactor duplicated update logic.
The update logic is duplicated. Consider extracting it into a helper function and using named constants.
```diff
+// Test values
+const SECOND_TEST_VALUE = 100n
+
+async function updateStorageValue(contract, value) {
+  const updateData = contract.methods.store(value).encodeABI()
+  const res = await helpers.signAndSend({
+    from: conf.eoa.address,
+    to: contract.options.address,
+    data: updateData,
+    value: '0',
+    gasPrice: conf.minGasPrice,
+  })
+  assert.equal(res.receipt.status, conf.successStatus)
+  return res
+}
+
 // update the value on the contract
-newValue = 100
-updateData = deployed.contract.methods.store(newValue).encodeABI()
-// store a value in the contract
-res = await helpers.signAndSend({
-  from: conf.eoa.address,
-  to: contractAddress,
-  data: updateData,
-  value: '0',
-  gasPrice: conf.minGasPrice,
-})
-assert.equal(res.receipt.status, conf.successStatus)
+res = await updateStorageValue(deployed.contract, SECOND_TEST_VALUE)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```js
// Test values
const SECOND_TEST_VALUE = 100n

async function updateStorageValue(contract, value) {
  const updateData = contract.methods.store(value).encodeABI()
  const res = await helpers.signAndSend({
    from: conf.eoa.address,
    to: contract.options.address,
    data: updateData,
    value: '0',
    gasPrice: conf.minGasPrice,
  })
  assert.equal(res.receipt.status, conf.successStatus)
  return res
}

// update the value on the contract
res = await updateStorageValue(deployed.contract, SECOND_TEST_VALUE)
```
tests/web3js/eth_revert_reason_test.js (1)
43-56: 🛠️ Refactor suggestion
Enhance the transaction receipt polling mechanism.
The current implementation could benefit from additional safeguards and optimizations:
Consider applying these improvements:
```diff
 let txHash = response.body.result
 let rcp = null
+const maxAttempts = 50 // 25 seconds timeout
+let attempts = 0
+
 // wait until the transaction is executed & indexed, and its
 // receipt becomes available.
-while (rcp == null) {
+while (rcp == null && attempts < maxAttempts) {
   rcp = await helpers.callRPCMethod(
     'eth_getTransactionReceipt',
     [txHash]
   )
   if (rcp.body.result == null) {
     rcp = null
+    attempts++
+    // Add delay between attempts to prevent aggressive polling
+    await new Promise(resolve => setTimeout(resolve, 500))
   }
 }
+
+if (rcp == null) {
+  throw new Error('Timeout waiting for transaction receipt')
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```js
let txHash = response.body.result
let rcp = null
const maxAttempts = 50 // 25 seconds timeout
let attempts = 0

// wait until the transaction is executed & indexed, and its
// receipt becomes available.
while (rcp == null && attempts < maxAttempts) {
  rcp = await helpers.callRPCMethod(
    'eth_getTransactionReceipt',
    [txHash]
  )
  if (rcp.body.result == null) {
    rcp = null
    attempts++
    // Add delay between attempts to prevent aggressive polling
    await new Promise(resolve => setTimeout(resolve, 500))
  }
}

if (rcp == null) {
  throw new Error('Timeout waiting for transaction receipt')
}
```
tests/e2e_web3js_test.go (1)
36-39: 💡 Codebase verification
Based on the extensive context gathered, I can now provide a final response regarding the skipped transaction traces test.
Transaction tracing functionality is actively maintained and recently enhanced
The test is temporarily skipped while the transaction tracing system undergoes active development. Recent commits show:
Transaction tracing functionality is fully implemented in the codebase:
debug_traceTransaction
,debug_traceBlockByNumber
, anddebug_traceBlockByHash
endpoints are available- Support for multiple tracers (callTracer, prestateTracer, 4byteTracer, custom JavaScript tracers)
- Comprehensive test coverage in
debug_traces_test.js
Recent improvements (last 6 months):
- Error handling enhancements for transaction traces
- Extraction of common trace block logic
- Integration with metrics collection
- Storage interface optimizations
The skip message is outdated as the functionality appears to be fully implemented and actively maintained. Consider removing the
t.Skip()
and enabling the test.🔗 Analysis chain
Provide context for the re-addition of this test.
The skip message suggests this test existed before. Please clarify:
- Why was it removed initially?
- What conditions need to be met to add it back?
- Should this be tracked in a separate issue?
Also, consider moving this to a separate PR focused on transaction tracing functionality, as it appears unrelated to the current PR's objective of simplifying
FixedHash
usage.Let's check the history of this test:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search git history for this test
git log -p --all -S "test transaction traces"
```

Length of output: 116
Script:
```shell
#!/bin/bash
# Let's try a broader search for any mentions of transaction traces in the codebase
rg -i "transaction.?trace" -A 2 -B 2

# Also search for the test file name pattern
rg "debug_traces_test" -A 2 -B 2
```

Length of output: 1659
Script:
```shell
#!/bin/bash
# Let's check the content of the debug_traces_test.js to understand the test functionality
cat tests/web3js/debug_traces_test.js

# Also check for any recent changes to transaction tracing functionality
git log -p --since="6 months ago" -- api/debug.go storage/index.go
```

Length of output: 61491
services/ingestion/engine.go (1)
54-54: 🛠️ Refactor suggestion
Add nil check for blocksProvider parameter.
The constructor should validate that the blocksProvider parameter is not nil, similar to how other critical dependencies might be validated.

```diff
 func NewEventIngestionEngine(
   subscriber EventSubscriber,
   blocksProvider *replayer.BlocksProvider,
   store *pebble.Storage,
   ...
 ) *Engine {
+  if blocksProvider == nil {
+    panic("blocksProvider cannot be nil")
+  }
   log = log.With().Str("component", "ingestion").Logger()
   ...
```

Also applies to: 71-71
tests/helpers.go (1)
156-156: 💡 Codebase verification
The RateLimit increase in tests is inconsistent with production defaults
The increased RateLimit of 500 in tests is inconsistent with the documented and configured production default of 50 requests/second. This test configuration is 10x higher than what users would experience in production, which could lead to:
- False positives in tests that wouldn't work in production
- Missing rate-limiting related issues during testing
Specific findings:
- Production default is 50 req/s (cmd/run/cmd.go)
- Documented as 50 req/s in README.md
- Rate limiting is actively enforced in API endpoints (api/ratelimiter.go)
- Test configuration is set to 500 req/s (tests/helpers.go)
Please either:
- Adjust the test RateLimit to match production defaults
- Document why tests require a higher rate limit
- Consider using production defaults in integration tests to catch real-world issues
🔗 Analysis chain
Verify the 10x increase in RateLimit.
The RateLimit has been significantly increased from 50 to 500. While this aligns with changes in other test files, please verify:
- The reason for this specific increase
- Any potential impact on test stability
- Whether this is related to the re-indexing requirements
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for other rate limit configurations to ensure consistency
echo "Searching for rate limit configurations..."
rg -i "ratelimit.*50|ratelimit.*500" --type go
```

Length of output: 339
Script:
```shell
#!/bin/bash
# Let's check the git history to understand the context of this change
git log -p --all -S "RateLimit.*50" --since="6 months ago"

# Also check for any related test changes or discussions
rg -i "test.*timeout|rate.*limit" --type go -A 3 -B 3
```

Length of output: 60
Script:
```shell
#!/bin/bash
# Let's check the current configuration and usage of RateLimit in the codebase
echo "Checking RateLimit usage in code..."
rg -i "ratelimit" --type go -A 3 -B 3

# Also check for any comments or documentation about rate limiting
echo "Checking for rate limit documentation..."
rg -i "rate.?limit" README.md docs/ 2>/dev/null
```

Length of output: 13006
services/requester/requester.go (1)
85-85: 🛠️ Refactor suggestion
TODO comment suggests changing evmHeight type from int64 to uint64
The TODO comment correctly identifies that block heights should be unsigned integers since they cannot be negative. This aligns with blockchain conventions where block heights are always non-negative.
Consider implementing this TODO by:
- Changing all method signatures to use uint64 instead of int64
- Updating the implementation to handle the type change
- Updating all callers of these methods
This change would:
- Prevent potential issues with negative block heights
- Better represent the domain concept
- Improve type safety
services/replayer/blocks_provider.go (3)
79-85:
⚠️ Potential issue
Protect latestBlock with mutex for thread safety
The OnBlockReceived method updates bp.latestBlock without synchronization. If BlocksProvider is accessed by multiple goroutines, this can lead to race conditions.
Consider adding a mutex to protect access to bp.latestBlock, ensuring that reads and writes are thread-safe.
92-112:
⚠️ Potential issue
Ensure thread-safe access to latestBlock in GetSnapshotAt
The GetSnapshotAt method reads bp.latestBlock without any synchronization mechanism. Concurrent access might cause data races if latestBlock is modified by OnBlockReceived.
Implement synchronization, such as using a mutex, to guard reads and writes to bp.latestBlock.
29-51: 🛠️ Refactor suggestion
Improve error handling in GetHashFunc to avoid silent failures
In the GetHashFunc, errors from GetByHeight and block.Hash() are ignored, returning an empty hash. This can obscure underlying issues and make debugging difficult.
Consider logging the errors or propagating them for proper handling. For example:
```diff
 if err != nil {
-  return gethCommon.Hash{}
+  // Log or handle the error appropriately
+  // return gethCommon.Hash{}
+  return gethCommon.Hash{} // Or consider propagating the error
 }
```

Committable suggestion skipped: line range outside the PR's diff.
tests/web3js/debug_traces_test.js (2)
37-45: 🛠️ Refactor suggestion
Replace hardcoded addresses with variables
Using variables like conf.eoa.address and contractAddress instead of hardcoded addresses improves maintainability and flexibility, ensuring the tests remain valid even if the addresses change.
Apply this diff to replace hardcoded addresses:
```diff
-assert.equal(txTrace.from, '0xfacf71692421039876a5bb4f10ef7a439d8ef61e')
+assert.equal(txTrace.from, conf.eoa.address)

-assert.equal(txTrace.to, '0x99a64c993965f8d69f985b5171bc20065cc32fab')
+assert.equal(txTrace.to, contractAddress)
```

Also applies to: 94-103, 164-212
41-42:
⚠️ Potential issue
Correct the usage of assert.lengthOf with BigInt values
The assert.lengthOf function expects a number, but 9856n and 9806n are BigInt literals. Passing BigInts may cause errors or unexpected behavior. Convert BigInts to numbers or use numeric literals without the n suffix.
Apply this diff to fix the issue:
```diff
-assert.lengthOf(txTrace.input, 9856n)
-assert.lengthOf(txTrace.output, 9806n)
+assert.lengthOf(txTrace.input, 9856)
+assert.lengthOf(txTrace.output, 9806)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```js
assert.lengthOf(txTrace.input, 9856)
assert.lengthOf(txTrace.output, 9806)
```
api/api.go (3)
288-288:
⚠️ Potential issue
Potential integer overflow when casting evmHeight to int64
Casting evmHeight (type uint64) to int64 may cause incorrect values or overflow if evmHeight exceeds math.MaxInt64. This could lead to unexpected behavior when evmHeight is larger than the maximum value for int64. Consider updating the GetBalance method in the evm interface to accept a uint64 parameter, or ensure that evmHeight does not exceed math.MaxInt64.
1089-1113:
⚠️ Potential issue
Inconsistent handling of block numbers between int64 and uint64
The getBlockNumber method now returns uint64, but it processes negative block numbers representing special cases (e.g., LatestBlockNumber, PendingBlockNumber). Casting negative int64 values to uint64 can result in unintended large positive numbers. It's important to handle negative block numbers properly to avoid incorrect block heights. Consider maintaining the return type as int64 or adjusting the handling logic to manage special cases without risky type casting.
1119-1144:
⚠️ Potential issue
Potential incorrect type conversion in resolveBlockNumber function
In resolveBlockNumber, height is an int64 which may be negative due to special block numbers. Casting a negative int64 to uint64 can produce incorrect results. Ensure that height is non-negative before casting, or change the function to consistently work with int64 to prevent such issues.
Description
It is safer/simpler to use the gethCommon.Hash type for FixedHash, instead of a *string.
All we have to do is check that FixedHash is not the empty hash.
For contributor use:
master branch
Files changed in the Github PR explorer
Summary by CodeRabbit
New Features
Bug Fixes