
Simplify usage of FixedHash field on Block type #520

Merged

Conversation

m-Peter
Collaborator

@m-Peter m-Peter commented Sep 5, 2024

Description

It is safer/simpler to use the gethCommon.Hash type for FixedHash, instead of a *string.
All we have to do is check that FixedHash is not the empty hash.
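For illustration, roughly what this looks like (a sketch only; the import path and surrounding code are assumptions, see models/block.go in the diff for the actual implementation):

import (
	gethCommon "github.com/onflow/go-ethereum/common" // assumed import path (Flow's go-ethereum fork)
)

type Block struct {
	// ... other block fields ...

	// FixedHash preserves the hash calculated with the previous block format,
	// so already-indexed blocks keep returning the same hash.
	FixedHash gethCommon.Hash // previously: FixedHash *string
}

func (b *Block) Hash() (gethCommon.Hash, error) {
	// If a fixed hash was set while decoding a legacy block, return it
	// instead of recomputing the hash of the current block format.
	if b.FixedHash != (gethCommon.Hash{}) {
		return b.FixedHash, nil
	}
	// ... otherwise compute and return the hash of the current block format ...
	return gethCommon.Hash{}, nil
}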


For contributor use:

  • Targeted PR against master branch
  • Linked to GitHub issue with discussion and accepted design OR link to spec that describes this work.
  • Code follows the standards mentioned here.
  • Updated relevant documentation
  • Re-reviewed Files changed in the GitHub PR explorer
  • Added appropriate labels

Summary by CodeRabbit

  • New Features

    • Improved handling of block hashes, enhancing performance and clarity.
    • Updated block size assertions in tests to reflect changes in blockchain data.
    • Enhanced event handling capabilities by allowing storage and retrieval of additional payload information related to block and transaction events.
    • Introduced functionality for managing and providing access to blockchain block snapshots within the replayer service.
  • Bug Fixes

    • Enhanced validation of block hashes in test functions to ensure accuracy and reliability.
    • Streamlined error handling and configuration parsing logic for improved reliability.

@m-Peter m-Peter added this to the Flow-EVM-M2 milestone Sep 5, 2024
@m-Peter m-Peter self-assigned this Sep 5, 2024
Contributor

coderabbitai bot commented Sep 5, 2024

Walkthrough

The changes involve updates to the Block structure in models/block.go, specifically altering the FixedHash field from a pointer to a string to a value of type gethCommon.Hash. This simplifies hash handling and improves method logic. Corresponding test files were updated to ensure proper validation of block hashes, and an adjustment was made to the expected block size in a JavaScript test file.

Changes

Files and change summary:

  • models/block.go, models/block_test.go: Modified the Block structure to change FixedHash from *string to gethCommon.Hash; updated related methods and tests for hash validation.
  • tests/web3js/eth_non_interactive_test.js: Updated the expected value for block.size from 3995n to 4028n in a test case.
  • Makefile: Removed the command for generating mocks in the services/traces directory.
  • README.md: Enhanced clarity and detail regarding the EVM Gateway's functionality and usage.
  • api/api.go: Changed the return type of getBlockNumber from int64 to uint64 and updated related methods.
  • bootstrap/bootstrap.go: Removed trace-related functionality and added event tracing capabilities.
  • cmd/run/cmd.go: Simplified error handling for gas price parsing and removed tracing configuration flags.
  • config/config.go: Removed several fields related to tracing configurations.
  • go.mod: Adjusted dependency management, moving cloud.google.com/go/storage to indirect requirements.
  • models/events.go, models/events_test.go: Added new fields and methods to handle block and transaction event payloads.
  • services/ingestion/engine.go: Added a blocksProvider field to the Engine struct and modified event processing logic.
  • services/ingestion/event_subscriber.go: Updated the EventSubscriber interface to remove the height parameter from the Subscribe method.
  • services/replayer/blocks_provider.go, blocks_provider_test.go: Introduced new functionality for managing block snapshots and corresponding tests.

Sequence Diagram(s)

sequenceDiagram
    participant Block as Block
    participant Hash as Hash
    participant Test as Test

    Block->>Hash: Set FixedHash to gethCommon.Hash
    Hash-->>Block: Return hash value
    Test->>Block: Validate block hash
    Test-->>Test: Assert hash matches expected value

🐰 In the meadow where bunnies play,
A change was made, hip-hip-hooray!
No more pointers, just a hash,
Simplified logic in a flash!
With tests updated, all is bright,
Hopping forward, what a delight! 🌼✨

Possibly related PRs

  • Improve API error handling #370: The changes in the main PR regarding the decodeBlockEvent function and its return values are related to the modifications in the error handling and logging mechanisms in the API, which also involve handling block events.
  • Check data integrity for EVM events #529: The updates to the decodeCadenceEvents function in this PR enhance the validation of EVM block events, which aligns with the changes made to the decodeBlockEvent function in the main PR.
  • Event recovery from a missing block #554: The introduction of new error types related to missing blocks in this PR complements the changes in the main PR that enhance error handling in block event processing.
  • Aggregate and expose EVM-related event payloads #628: The modifications to the decodeBlockEvent function in the main PR, which now returns an additional payload, are directly related to the changes made in this PR that aggregate and expose EVM-related event payloads.
  • Refactor event subscriber #636: The refactor of the event subscriber in this PR, which includes changes to how events are processed, is relevant to the updates made in the main PR regarding event handling and decoding.
  • Remove traces downloader #639: The removal of the traces downloader in this PR indicates a shift in how event data is managed, which is relevant to the changes in the main PR that streamline event processing and error handling.

Suggested labels

Bugfix

Suggested reviewers

  • peterargue
  • zhangchiqing
  • sideninja


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Outside diff range, codebase verification and nitpick comments (2)
models/block.go (2)

49-50: Clarification on FixedHash field documentation.

The comment above the FixedHash field explains the purpose of this field well, ensuring that the hash calculated with the previous format is preserved. However, it might be beneficial to clarify further why this specific implementation choice (using gethCommon.Hash instead of a pointer) improves safety and simplicity, as mentioned in the PR summary.


121-121: Ensure consistency in hash assignment during legacy block decoding.

The assignment of FixedHash directly from the block.Hash in the decodeLegacyBlockEvent function is a good use of the new FixedHash type. This ensures that the hash is preserved accurately during the decoding of legacy events. It would be beneficial to add a comment here explaining that this change is part of the transition to the new hash handling method.

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR, between commits 808db37 and d4282c7.

Files selected for processing (3)
  • models/block.go (9 hunks)
  • models/block_test.go (4 hunks)
  • tests/web3js/eth_non_interactive_test.js (1 hunks)
Additional comments not posted (5)
models/block.go (2)

59-60: Approval of the updated hash check logic.

The updated logic in the Hash() method to check if FixedHash is not the empty hash (gethCommon.HexToHash("0x0")) is a clear improvement over the previous need to check for nil and an empty string. This change simplifies the code and reduces potential errors related to nil dereferencing.
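In other words, the guard changes roughly like this (an illustrative fragment, not the exact diff):

// Before (FixedHash *string): two checks and a pointer dereference
if b.FixedHash != nil && *b.FixedHash != "" {
	return gethCommon.HexToHash(*b.FixedHash), nil
}

// After (FixedHash gethCommon.Hash): a single zero-value comparison
if b.FixedHash != (gethCommon.Hash{}) { // i.e. not gethCommon.HexToHash("0x0")
	return b.FixedHash, nil
}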


175-175: Validate the new FixedHash assignment in decodeBlockBreakingChanges.

The assignment of FixedHash to blockHash in the decodeBlockBreakingChanges function is a crucial part of ensuring that blocks decoded from older formats retain their original hash. This change aligns with the PR's goal of simplifying hash handling. Ensure that this change is thoroughly tested, especially since it involves historical data and could impact data integrity during re-indexing.

Verification successful

The FixedHash assignment is properly tested in decodeBlockBreakingChanges. The Test_FixedHashBlock function in models/block_test.go ensures that the FixedHash is retained correctly during encoding and decoding, addressing the review comment's concern about data integrity.

  • models/block_test.go: Test_FixedHashBlock verifies the FixedHash functionality.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the `FixedHash` assignment in `decodeBlockBreakingChanges` retains the original hash.

# Test: Search for the function usage. Expect: Only occurrences of the new signature.
rg --type go -A 5 $'decodeBlockBreakingChanges'

Length of output: 763


Script:

#!/bin/bash
# Search for test files or functions that involve `decodeBlockBreakingChanges` or `FixedHash`.
rg --type go 'decodeBlockBreakingChanges|FixedHash' --glob '*_test.go'

Length of output: 175


Script:

#!/bin/bash
# Extract and review the `Test_FixedHashBlock` function from `models/block_test.go`.
rg --type go -A 20 'func Test_FixedHashBlock' models/block_test.go

Length of output: 513

models/block_test.go (2)

35-42: Improved hash comparison in tests.

The update to use gethCommon.HexToHash for direct hash comparison instead of converting to strings enhances type safety and clarity in the test assertions.


77-77: Enhanced handling and testing of FixedHash.

The changes to directly use gethCommon.Hash for FixedHash and update the assertions accordingly improve the clarity and efficiency of the tests. This aligns well with the PR's objectives to simplify and enhance safety in hash handling.

Also applies to: 82-82, 91-91, 102-102

tests/web3js/eth_non_interactive_test.js (1)

28-28: Verify the updated block size expectation.

The change in the expected block size from 3995n to 4028n needs verification to ensure it aligns with the updated blockchain logic or data changes. It's crucial to confirm that this new expected size is accurate based on the modifications made to the blockchain's behavior.

Run the following script to verify the block size calculation:

models/block.go: outdated review comment (resolved)
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR, between commits d4282c7 and 65ec134.

Files selected for processing (3)
  • models/block.go (9 hunks)
  • models/block_test.go (4 hunks)
  • tests/web3js/eth_non_interactive_test.js (1 hunks)
Files skipped from review as they are similar to previous changes (3)
  • models/block.go
  • models/block_test.go
  • tests/web3js/eth_non_interactive_test.js

@sideninja
Contributor

Yeah, this is nicer. I did have this approach but with a pointer; I should remove the pointer. I think this is good, but I wouldn't do it unless we plan a reindex anyway. So we can keep this in draft until then and add a comment in the description:
DO NOT MERGE UNLESS REINDEX

@m-Peter m-Peter marked this pull request as draft September 5, 2024 11:32
@m-Peter m-Peter marked this pull request as ready for review November 4, 2024 10:44
@m-Peter m-Peter removed the request for review from sideninja November 4, 2024 10:48
coderabbitai[bot]

This comment was marked as outdated.

@m-Peter m-Peter changed the base branch from main to feature/local-tx-reexecution November 4, 2024 10:52
Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.

Actionable comments posted: 17

🧹 Outside diff range and nitpick comments (38)
models/receipt_test.go (1)

13-13: Consider adding test coverage for the new return value.

The change correctly handles the updated decodeTransactionEvent function signature by capturing the new *events.TransactionEventPayload return value. However, since we're discarding this value with _, consider adding test cases to validate this new return value's correctness.

storage/mocks/mocks.go (1)

30-30: Consider making TotalGasUsed configurable

The hardcoded value of 30,000 for TotalGasUsed might not be suitable for all test scenarios.

Consider making it a parameter of the NewBlock function:

-func NewBlock(height uint64) *models.Block {
+func NewBlock(height uint64, totalGasUsed ...uint64) *models.Block {
     // ...
+    gasUsed := uint64(30_000) // default value
+    if len(totalGasUsed) > 0 {
+        gasUsed = totalGasUsed[0]
+    }
     // ...
-    TotalGasUsed:        uint64(30_000),
+    TotalGasUsed:        gasUsed,
tests/web3js/eth_get_storage_at_test.js (4)

6-11: Add descriptive test name and error handling.

The test setup could be improved by:

  1. Adding a more descriptive test name that explains the test scenarios
  2. Adding error handling for the contract deployment
-it('should retrieve storage slots of contracts', async () => {
+it('should correctly retrieve storage slots at different block heights', async () => {
     let deployed = await helpers.deployContract('storage')
+    assert.ok(deployed, 'Contract deployment failed')
     let contractAddress = deployed.receipt.contractAddress
+    assert.ok(contractAddress, 'Contract address is undefined')

     // make sure deploy results are correct
     assert.equal(deployed.receipt.status, conf.successStatus)

13-20: Enhance storage slot validation and documentation.

The initial storage validation could be improved by:

  1. Better documenting the storage layout
  2. Validating the initial value is actually 0
     // get the default deployed value on contract
     let callRetrieve = await deployed.contract.methods.retrieve().encodeABI()
     let result = await web3.eth.call({ to: contractAddress, data: callRetrieve }, 'latest')

-    let slot = 0 // The slot for the 'number' variable
+    // Storage layout:
+    // slot 0: uint256 number (main storage variable)
+    const STORAGE_SLOT = 0
-    let stored = await web3.eth.getStorageAt(contractAddress, slot, 'latest')
+    let stored = await web3.eth.getStorageAt(contractAddress, STORAGE_SLOT, 'latest')
     assert.equal(stored, result)
+    // Verify initial value is 0
+    let value = web3.eth.abi.decodeParameter('uint256', stored)
+    assert.equal(value, 0n, 'Initial storage value should be 0')

21-37: Improve value update readability and gas handling.

Consider the following improvements:

  1. Use named constants for test values
  2. Add gas estimation validation
+    // Test values
+    const INITIAL_TEST_VALUE = 1337n
+
     // set the value on the contract, to its current value
-    let initValue = 1337
+    let initValue = INITIAL_TEST_VALUE
     let updateData = deployed.contract.methods.store(initValue).encodeABI()
+    
+    // Estimate gas to ensure transaction won't fail
+    const gasEstimate = await web3.eth.estimateGas({
+        from: conf.eoa.address,
+        to: contractAddress,
+        data: updateData
+    })
+    assert.ok(gasEstimate > 0, 'Gas estimation failed')

1-72: Overall assessment: Good test coverage with room for improvement.

The test thoroughly validates storage slot retrieval across different block heights, which is crucial for the PR's objective of migrating the FixedHash field. The test structure is good, but could be enhanced by:

  1. Adding more error cases (e.g., invalid slots, non-existent contracts)
  2. Testing with larger values to ensure proper handling of different storage sizes
  3. Adding comments explaining the relationship between block heights and expected values

These improvements would make the test more robust and maintainable.

tests/web3js/eth_revert_reason_test.js (1)

80-93: Extract duplicated polling logic into a helper function.

The polling logic is duplicated. Consider extracting it into a reusable helper function to improve maintainability and ensure consistent behavior.

Create a helper function in helpers.js:

// Add to helpers.js
async function waitForTransactionReceipt(txHash, maxAttempts = 50) {
    let rcp = null;
    let attempts = 0;
    
    while (rcp == null && attempts < maxAttempts) {
        rcp = await helpers.callRPCMethod(
            'eth_getTransactionReceipt',
            [txHash]
        );
        if (rcp.body.result == null) {
            rcp = null;
            attempts++;
            await new Promise(resolve => setTimeout(resolve, 500));
        }
    }
    
    if (rcp == null) {
        throw new Error('Timeout waiting for transaction receipt');
    }
    
    return rcp;
}

Then simplify both test cases:

-let rcp = null
-// wait until the transaction is executed & indexed, and its
-// receipt becomes available.
-while (rcp == null) {
-    rcp = await helpers.callRPCMethod(
-        'eth_getTransactionReceipt',
-        [txHash]
-    )
-    if (rcp.body.result == null) {
-        rcp = null
-    }
-}
+let rcp = await helpers.waitForTransactionReceipt(txHash);
tests/web3js/verify_cadence_arch_calls_test.js (2)

14-32: Consider parameterizing the height value.

The hardcoded height value 120 in verifyArchCallToRandomSource might make the test brittle. Consider making it configurable or relative to the current block height.

-    let getRandomSourceData = deployed.contract.methods.verifyArchCallToRandomSource(120).encodeABI()
+    const testHeight = latest - 10 // or from config
+    let getRandomSourceData = deployed.contract.methods.verifyArchCallToRandomSource(testHeight).encodeABI()

70-92: Document test data and expected outcomes.

The test uses hardcoded values without explaining their significance:

  • The purpose of the specific hash 0x1bacdb...
  • The meaning of the encoded proof data
  • Why false is the expected result

Consider adding comments to explain these values and their expected behavior.

+    // Example hash representing an invalid/test ownership proof
     let verifyCOAOwnershipProofData = deployed.contract.methods.verifyArchCallToVerifyCOAOwnershipProof(
         tx.to,
         '0x1bacdb569847f31ade07e83d6bb7cefba2b9290b35d5c2964663215e73519cff',
         web3.utils.hexToBytes('f853c18088f8d6e0586b0a20c78365766df842b840b90448f4591df2639873be2914c5560149318b7e2fcf160f7bb8ed13cfd97be2f54e6889606f18e50b2c37308386f840e03a9fff915f57b2164cba27f0206a95')
     ).encodeABI()

Also, consider extracting test data to a configuration file for better maintainability.

models/block.go (2)

67-67: Consider enhancing error context.

The function changes look good, but consider including the event type in the error message for better debugging.

-			"failed to Cadence-decode EVM block event [%s]: %w",
+			"failed to Cadence-decode EVM block event [type=%s, event=%s]: %w",
+			event.Type(),
 			event.String(),
 			err,

Also applies to: 70-70, 77-77, 82-82, 97-97


49-50: Plan the migration strategy carefully.

The type change from *string to gethCommon.Hash improves the architecture but requires careful migration planning:

  1. Consider adding a version field to the block structure to help with future migrations
  2. Document the migration process for operators
  3. Add metrics to track migration progress
  4. Consider providing a rollback strategy

Also applies to: 59-60, 155-155

tests/e2e_web3js_test.go (1)

Line range hint 31-71: Consider scope of test additions.

While the test additions are valuable, they seem to extend beyond the PR's objective of simplifying FixedHash field usage. Suggestions:

  1. Move the skipped tests (verify Cadence arch calls and test transaction traces) to separate PRs to maintain focused changes
  2. Add explicit test cases in eth_get_storage_at_test to verify the behavior with the new FixedHash field implementation
models/events.go (1)

39-44: Consider documenting re-indexing requirements.

These changes to event handling, while well-implemented, are part of a larger change that requires re-indexing. Consider adding a comment in the code or updating documentation to note this requirement for future maintainers.

Also applies to: 116-122, 127-133, 169-173, 181-186

services/ingestion/engine.go (1)

39-39: Consider documenting the blocksProvider field.

The blocksProvider field should be documented in the struct's comment block to explain its purpose and relationship with block processing.
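For instance, a doc comment along these lines (wording is only a suggestion):

type Engine struct {
	// blocksProvider tracks the latest ingested block and supplies
	// block snapshots to the replayer during event processing.
	blocksProvider *replayer.BlocksProvider

	// ... other fields ...
}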

services/replayer/blocks_provider_test.go (4)

55-67: Fix duplicate test case name.

The test case name "with new block non-sequential to latest block" is duplicated from the previous test case, but this test actually verifies sequential block processing (10 followed by 11).

Consider renaming to better reflect the test's purpose:

-	t.Run("with new block non-sequential to latest block", func(t *testing.T) {
+	t.Run("with new block sequential to latest block", func(t *testing.T) {

69-98: Consider adding more test scenarios for BlockContext.

While the current test thoroughly validates all context properties, consider adding test cases for:

  • Historic blocks
  • Edge cases (e.g., block 0)
  • Invalid block heights

99-198: Consider extracting the magic number 256 into a named constant.

The value 256 appears multiple times in the test and seems to be a significant boundary for block height ranges. Consider extracting it into a named constant to improve readability and maintainability.

+const blockHeightRange = 256

282-291: Consider making tracer configuration more flexible.

The tracer configuration is currently hardcoded. Consider making it configurable through parameters to support different test scenarios.

-func newCallTracer(t *testing.T) *tracers.Tracer {
+func newCallTracer(t *testing.T, config json.RawMessage) *tracers.Tracer {
+	if config == nil {
+		config = json.RawMessage(`{ "onlyTopCall": true }`)
+	}
 	tracer, err := tracers.DefaultDirectory.New(
 		"callTracer",
 		&tracers.Context{},
-		json.RawMessage(`{ "onlyTopCall": true }`),
+		config,
 	)
 	require.NoError(t, err)
 	return tracer
 }
models/events_test.go (2)

Line range hint 197-224: LGTM! Consider enhancing test readability.

The block hash validation logic correctly tests the new gethCommon.Hash implementation. The test properly ensures that the block event payload contains the correct block hash and height.

Consider adding a descriptive comment before the block event payload assertions to improve test readability:

 // assert we have collected the EVM.BlockExecuted event payload
 blockEventPayload := cdcEvents.BlockEventPayload()
+// Verify that the block hash in the event payload matches the expected hash
 blockHash, err := block.Hash()
 require.NoError(t, err)
 assert.Equal(t, blockHash, blockEventPayload.Hash)

233-237: LGTM! Consider enhancing test documentation.

The transaction event payload validation correctly ensures consistency between block and transaction events, verifying both hash matching and block height correlation.

Consider expanding the comment to better describe the validation being performed:

-// assert we have collected the EVM.TransactionExecuted event payloads
-// in their correct order.
+// Verify that transaction event payloads:
+// 1. Are collected in the correct order
+// 2. Have matching transaction hashes
+// 3. Reference the correct block height from the block event
 txEventPayload := cdcEvents.TxEventPayloads()[i]
 assert.Equal(t, tx.Hash(), txEventPayload.Hash)
 assert.Equal(t, blockEventPayload.Height, txEventPayload.BlockHeight)
services/requester/requester.go (2)

Line range hint 492-502: Refactor evmToCadenceHeight to use uint64 instead of int64

The evmToCadenceHeight method currently accepts int64 and has special handling for negative values. This could be simplified by changing the parameter type to uint64 and removing the negative check.

-func (e *EVM) evmToCadenceHeight(height int64) (uint64, error) {
-    if height < 0 {
-        return LatestBlockHeight, nil
-    }
-
-    evmHeight := uint64(height)
+func (e *EVM) evmToCadenceHeight(height uint64) (uint64, error) {
+    evmHeight := height

Line range hint 87-122: Document the special handling of block heights in the interface

The interface implementation treats negative block heights as a request for the latest block, but this behavior is not documented in the interface comments. This implicit behavior should be explicitly documented.

Add interface-level documentation explaining the block height handling:

 // TODO(janezp): Requester does need to know about special EVM block heights. evmHeight should be uint64.
 type Requester interface {
+    // Block height handling:
+    // - Negative values: Latest block height will be used
+    // - Zero or positive values: Specific block height will be used
services/replayer/blocks_provider.go (2)

79-85: Clarify error message in OnBlockReceived for better understanding

The current error message may be unclear: "received new block: %d, non-sequential of latest block: %d".

Consider rephrasing the error message for clarity:

 return fmt.Errorf(
-    "received new block: %d, non-sequential of latest block: %d",
+    "received block %d, which is not sequential to the latest block %d",
     block.Height,
     bp.latestBlock.Height,
 )

92-112: Consider caching blocks to enhance performance

If GetSnapshotAt is frequently called for the same heights, implementing caching can reduce database access and improve performance.
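A sketch of what such a cache could look like (a hypothetical helper, not code in this PR; a bounded LRU would be preferable in practice):

type snapshotCache struct {
	mu      sync.Mutex
	entries map[uint64]*models.Block // height -> cached block
}

func (c *snapshotCache) getOrLoad(
	height uint64,
	load func(uint64) (*models.Block, error),
) (*models.Block, error) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if b, ok := c.entries[height]; ok {
		return b, nil
	}
	b, err := load(height)
	if err != nil {
		return nil, err
	}
	c.entries[height] = b
	return b, nil
}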

services/ingestion/event_subscriber.go (9)

Line range hint 37-93: Potential Data Race on r.height in Concurrent Access

The height field in RPCEventSubscriber is accessed and modified within goroutines without synchronization. In the Subscribe method, r.height is updated during backfilling (lines 88 and 93) while being read elsewhere. This can lead to data races if Subscribe is called concurrently or if multiple goroutines access r.height.

Consider adding synchronization mechanisms, such as mutex locks, to protect access to r.height or ensure that Subscribe is not called concurrently.
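One possible shape (a sketch only; the accessor helpers are hypothetical):

type RPCEventSubscriber struct {
	// ... existing fields ...

	mu     sync.RWMutex // guards height
	height uint64
}

func (r *RPCEventSubscriber) setHeight(h uint64) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.height = h
}

func (r *RPCEventSubscriber) currentHeight() uint64 {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.height
}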


Line range hint 138-150: Unprotected Access to r.recovery and r.recoveredEvents

The fields r.recovery and r.recoveredEvents are accessed and modified within goroutines without proper synchronization. In methods like subscribe, recover, and accumulateEventsMissingBlock, these fields are read and written from multiple goroutines, potentially causing data races.

Recommend adding synchronization (e.g., mutexes) to protect access to r.recovery and r.recoveredEvents, ensuring thread-safe operations.

Also applies to: 294-307


339-359: Function Naming Consistency: Rename blocksFilter to blockFilter

The function blocksFilter is named inconsistently with the comment that refers to it as blockFilter. For clarity and consistency, it would be better to unify the naming.

Consider renaming the function to blockFilter to match the comment and improve readability.

Apply this diff:

-func blocksFilter(chainId flowGo.ChainID) flow.EventFilter {
+func blockFilter(chainId flowGo.ChainID) flow.EventFilter {

And update all references accordingly.


126-131: Improve Error Message in SubscribeEventsByBlockHeight

The error message when failing to subscribe lacks sufficient context.

Enhance the error message for clarity:

if err != nil {
    eventsChan <- models.NewBlockEventsError(
-       fmt.Errorf("failed to subscribe to events by block height: %d, with: %w", height, err),
+       fmt.Errorf("failed to subscribe to events starting at block height %d: %w", height, err),
    )
    return eventsChan
}

Line range hint 295-301: Data Race When Appending to r.recoveredEvents

Appending to r.recoveredEvents without synchronization can cause data races since slices are not thread-safe.

Implement synchronization when modifying r.recoveredEvents or use thread-safe data structures like sync.Mutex to protect access.


339-359: Optimize Event Type ID Calculations

The event type IDs in blocksFilter are recalculated every time the function is called.

Consider caching these IDs if blocksFilter is called frequently to improve performance.

var (
    blockExecutedEventID      string
    transactionExecutedEventID string
)

func init() {
    // Initialize the event type IDs once
}

func blocksFilter(chainId flowGo.ChainID) flow.EventFilter {
    // Use the cached event type IDs
}

50-55: Initialize recovery and recoveredEvents Fields

In NewRPCEventSubscriber, the fields recovery and recoveredEvents are not explicitly initialized.

Initialize these fields for clarity, even though they default to zero values.

return &RPCEventSubscriber{
    logger: logger,
    client: client,
    chain:  chainID,
    height: startHeight,
+   recovery: false,
+   recoveredEvents: nil,
}

27-27: Update Interface Documentation After Method Signature Change

The Subscribe method in the EventSubscriber interface no longer accepts a height parameter, but the documentation still references it.

Update the method comment to reflect the new signature.

// Subscribe to EVM events ~from the provided height~, and return a channel with the events.
//
// The BlockEvents type will contain an optional error in case
// the error happens, the consumer of the channel should handle it.

Line range hint 273-275: Simplify Error Handling in fetchMissingData

The error message could be more informative by specifying the expected and actual counts.

Modify the error message for clarity:

if len(recoveredEvents) != 1 {
    return models.NewBlockEventsError(
        fmt.Errorf(
-           "received %d but expected 1 event for height %d",
+           "expected 1 event for height %d but received %d",
            len(recoveredEvents),
            blockEvents.Height,
        ),
    )
}
bootstrap/bootstrap.go (1)

137-144: Enhance error context when initializing the tracer.

Currently, the error returned when initializing the tracer lacks context. Wrapping the error message will provide more clarity on where the failure occurred.

Apply this diff to wrap the error with additional context:

 tracer, err := tracers.DefaultDirectory.New(
     callTracerName,
     &tracers.Context{},
     json.RawMessage(callTracerConfig),
 )
 if err != nil {
-    return err
+    return fmt.Errorf("failed to initialize tracer: %w", err)
 }
tests/web3js/debug_traces_test.js (4)

16-229: Consider splitting the long test case into smaller, focused tests

The test function 'should retrieve transaction traces' spans over 200 lines and covers multiple scenarios, including different tracers and transaction states. Splitting it into smaller, more focused test cases would improve readability and maintainability.


255-256: Ensure consistent validation of response bodies

In some places, you're checking assert.isDefined(response.body.result), while in others you check assert.isDefined(response.body). For consistency and to ensure accurate validations, align the assertions based on the expected structure of the response.

For example:

// If the expected property is `response.body.result`, update assertions accordingly
- assert.isDefined(response.body)
+ assert.isDefined(response.body.result)

Also applies to: 282-283, 309-310, 327-328, 401-402, 434-435, 448-449


46-46: Improve readability of the inline JavaScript tracer code

The JavaScript tracer code is written as a single-line string, which is hard to read and maintain. Defining it as a multi-line template string enhances readability.

Apply this diff to enhance readability:

- let jsTracer = '{hist: {}, nops: 0, step: function(log, db) { var op = log.op.toString(); if (this.hist[op]){ this.hist[op]++; } else { this.hist[op] = 1; } this.nops++; }, fault: function(log, db) {}, result: function(ctx) { return this.hist; }}'
+ let jsTracer = `
+ {
+   hist: {},
+   nops: 0,
+   step: function(log, db) {
+     var op = log.op.toString();
+     if (this.hist[op]) {
+       this.hist[op]++;
+     } else {
+       this.hist[op] = 1;
+     }
+     this.nops++;
+   },
+   fault: function(log, db) { },
+   result: function(ctx) {
+     return this.hist;
+   }
+ }`

Also applies to: 337-337


28-34: Refactor repeated RPC call and assertion code into helper functions

There is repeated code for making RPC calls using helpers.callRPCMethod and asserting the response status and body. Consider abstracting this into helper functions to reduce duplication and improve maintainability.

For example:

async function callAndAssert(method, params) {
  const response = await helpers.callRPCMethod(method, params)
  assert.equal(response.status, 200)
  assert.isDefined(response.body.result || response.body)
  return response.body.result || response.body
}

Then replace:

response = await helpers.callRPCMethod('debug_traceTransaction', [receipt.transactionHash, callTracer])
assert.equal(response.status, 200)
assert.isDefined(response.body.result)
txTrace = response.body.result

With:

txTrace = await callAndAssert('debug_traceTransaction', [receipt.transactionHash, callTracer])

Also applies to: 85-91, 111-117, 137-143, 151-157, 190-196, 223-229, 250-256, 278-284, 304-310, 323-329, 338-344, 396-402, 429-435, 443-449

🛑 Comments failed to post (17)
storage/mocks/mocks.go (2)

26-26: ⚠️ Potential issue

Fix timestamp implementation

Using time.Now().Second() for the timestamp is problematic because:

  1. It only captures seconds (0-59), which could lead to duplicate timestamps
  2. It doesn't provide sufficient granularity for blockchain timestamps
  3. It makes tests non-deterministic

Consider using a deterministic timestamp based on the block height:

-			Timestamp:           uint64(time.Now().Second()),
+			Timestamp:           uint64(1600000000 + height), // Base timestamp + height for deterministic values

Committable suggestion skipped: line range outside the PR's diff.


24-31: 💡 Codebase verification

Based on the evidence gathered, I can now generate the final response:

Reindexing is required - Block structure changes affect PebbleDB storage

The changes to the Block structure will impact the persistent storage layer implemented in PebbleDB. The storage implementation in storage/pebble/blocks.go directly stores Block models, and any structural changes will affect the serialization format.

  • The system uses PebbleDB for block storage (storage/pebble/blocks.go)
  • Blocks are persisted through the Store interface defined in storage/index.go
  • Changes to the mock structure reflect changes in the actual Block model that is stored

Action items:

  1. A full reindex of the blockchain data will be necessary
  2. Implement a migration strategy for the PebbleDB storage
  3. Consider adding a version field to the block storage format to handle future schema changes
🔗 Analysis chain

Verify reindexing plan before merging

The changes to the Block structure in this mock file reflect significant modifications to the actual Block structure. As mentioned in the PR objectives, these changes will require reindexing of existing data.

Please ensure that:

  1. A reindexing plan is in place
  2. The impact on existing data has been assessed
  3. Downtime requirements have been communicated

The previous search didn't yield results because there might not be SQL files. Let's verify the database schema and block structure to understand the impact better.


I want to verify how blocks are persisted to understand the reindexing implications better.


We found the storage directory but need to understand its implementation to verify the reindexing implications.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for any existing indices or migrations that might be affected
rg -l "CREATE INDEX|CREATE TABLE" --type sql

Length of output: 195


Script:

#!/bin/bash
# Let's first check if there are any SQL files in the repository
fd -e sql

# Let's also check for any database schema or migration related files
fd -g "*schema*"
fd -g "*migration*"

# Let's search for block structure definition in the codebase
rg -A 5 "type Block struct"

Length of output: 877


Script:

#!/bin/bash
# Let's check the storage implementation to understand how blocks are stored
rg -A 10 "interface.*Storage"

# Also check for any persistence layer or database related code
fd "storage|db|database" --type d

# Let's see the actual block storage implementation
rg -A 5 "func.*SaveBlock|func.*StoreBlock"

Length of output: 121


Script:

#!/bin/bash
# Let's examine the storage directory contents
ls -R storage/

# Check the storage implementation files
rg -l "Block" storage/

# Look for any persistence or database related code in the storage directory
rg -A 5 "(Save|Store|Insert|Update).*Block" storage/

Length of output: 5020

tests/web3js/eth_get_storage_at_test.js (2)

51-72: 🛠️ Refactor suggestion

Refactor storage validation logic and remove commented code.

The storage validation logic could be improved by:

  1. Extracting repeated decoding logic
  2. Using named constants
  3. Removing commented line
+    async function getDecodedStorage(address, slot, blockHeight) {
+        const stored = await web3.eth.getStorageAt(address, slot, blockHeight)
+        return web3.eth.abi.decodeParameter('uint256', stored)
+    }
+
     let latestHeight = await web3.eth.getBlockNumber()

     // assert the storage slot on latest block
-    stored = await web3.eth.getStorageAt(contractAddress, slot, latestHeight)
-    value = web3.eth.abi.decodeParameter('uint256', stored)
-    assert.equal(value, 100n)
+    value = await getDecodedStorage(contractAddress, STORAGE_SLOT, latestHeight)
+    assert.equal(value, SECOND_TEST_VALUE)

-    // // assert the storage slot on previous block
+    // assert the storage slot on previous block
-    stored = await web3.eth.getStorageAt(contractAddress, slot, latestHeight - 1n)
-    value = web3.eth.abi.decodeParameter('uint256', stored)
-    assert.equal(value, 1337n)
+    value = await getDecodedStorage(contractAddress, STORAGE_SLOT, latestHeight - 1n)
+    assert.equal(value, INITIAL_TEST_VALUE)

     // assert the storage slot on block of contract deployment
-    stored = await web3.eth.getStorageAt(contractAddress, slot, deployed.receipt.blockNumber)
-    value = web3.eth.abi.decodeParameter('uint256', stored)
-    assert.equal(value, 1337n)
+    value = await getDecodedStorage(contractAddress, STORAGE_SLOT, deployed.receipt.blockNumber)
+    assert.equal(value, INITIAL_TEST_VALUE)

     // assert the storage slot on block prior to contract deployment
-    stored = await web3.eth.getStorageAt(contractAddress, slot, deployed.receipt.blockNumber - 1n)
-    value = web3.eth.abi.decodeParameter('uint256', stored)
-    assert.equal(value, 0n)
+    value = await getDecodedStorage(contractAddress, STORAGE_SLOT, deployed.receipt.blockNumber - 1n)
+    assert.equal(value, 0n, 'Storage should be empty before contract deployment')

Committable suggestion skipped: line range outside the PR's diff.


38-50: 🛠️ Refactor suggestion

Refactor duplicated update logic.

The update logic is duplicated. Consider extracting it into a helper function and using named constants.

+    // Test values
+    const SECOND_TEST_VALUE = 100n
+
+    async function updateStorageValue(contract, value) {
+        const updateData = contract.methods.store(value).encodeABI()
+        const res = await helpers.signAndSend({
+            from: conf.eoa.address,
+            to: contract.options.address,
+            data: updateData,
+            value: '0',
+            gasPrice: conf.minGasPrice,
+        })
+        assert.equal(res.receipt.status, conf.successStatus)
+        return res
+    }
+
     // update the value on the contract
-    newValue = 100
-    updateData = deployed.contract.methods.store(newValue).encodeABI()
-    // store a value in the contract
-    res = await helpers.signAndSend({
-        from: conf.eoa.address,
-        to: contractAddress,
-        data: updateData,
-        value: '0',
-        gasPrice: conf.minGasPrice,
-    })
-    assert.equal(res.receipt.status, conf.successStatus)
+    res = await updateStorageValue(deployed.contract, SECOND_TEST_VALUE)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    // Test values
    const SECOND_TEST_VALUE = 100n

    async function updateStorageValue(contract, value) {
        const updateData = contract.methods.store(value).encodeABI()
        const res = await helpers.signAndSend({
            from: conf.eoa.address,
            to: contract.options.address,
            data: updateData,
            value: '0',
            gasPrice: conf.minGasPrice,
        })
        assert.equal(res.receipt.status, conf.successStatus)
        return res
    }

    // update the value on the contract
    res = await updateStorageValue(deployed.contract, SECOND_TEST_VALUE)
tests/web3js/eth_revert_reason_test.js (1)

43-56: 🛠️ Refactor suggestion

Enhance the transaction receipt polling mechanism.

The current implementation could benefit from additional safeguards and optimizations:

Consider applying these improvements:

 let txHash = response.body.result
 
 let rcp = null
+const maxAttempts = 50  // 25 seconds timeout
+let attempts = 0
+
 // wait until the transaction is executed & indexed, and its
 // receipt becomes available.
-while (rcp == null) {
+while (rcp == null && attempts < maxAttempts) {
     rcp = await helpers.callRPCMethod(
         'eth_getTransactionReceipt',
         [txHash]
     )
     if (rcp.body.result == null) {
         rcp = null
+        attempts++
+        // Add delay between attempts to prevent aggressive polling
+        await new Promise(resolve => setTimeout(resolve, 500))
     }
 }
+if (rcp == null) {
+    throw new Error('Timeout waiting for transaction receipt')
+}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    let txHash = response.body.result
    
    let rcp = null
    const maxAttempts = 50  // 25 seconds timeout
    let attempts = 0

    // wait until the transaction is executed & indexed, and its
    // receipt becomes available.
    while (rcp == null && attempts < maxAttempts) {
        rcp = await helpers.callRPCMethod(
            'eth_getTransactionReceipt',
            [txHash]
        )
        if (rcp.body.result == null) {
            rcp = null
            attempts++
            // Add delay between attempts to prevent aggressive polling
            await new Promise(resolve => setTimeout(resolve, 500))
        }
    }
    if (rcp == null) {
        throw new Error('Timeout waiting for transaction receipt')
    }
tests/e2e_web3js_test.go (1)

36-39: 💡 Codebase verification

Based on the extensive context gathered, I can now provide a final response regarding the skipped transaction traces test.

Transaction tracing functionality is actively maintained and recently enhanced

The test is temporarily skipped while the transaction tracing system undergoes active development. Recent commits show:

  1. Transaction tracing functionality is fully implemented in the codebase:

    • debug_traceTransaction, debug_traceBlockByNumber, and debug_traceBlockByHash endpoints are available
    • Support for multiple tracers (callTracer, prestateTracer, 4byteTracer, custom JavaScript tracers)
    • Comprehensive test coverage in debug_traces_test.js
  2. Recent improvements (last 6 months):

    • Error handling enhancements for transaction traces
    • Extraction of common trace block logic
    • Integration with metrics collection
    • Storage interface optimizations

The skip message is outdated as the functionality appears to be fully implemented and actively maintained. Consider removing the t.Skip() and enabling the test.

🔗 Analysis chain

Provide context for the re-addition of this test.

The skip message suggests this test existed before. Please clarify:

  1. Why was it removed initially?
  2. What conditions need to be met to add it back?
  3. Should this be tracked in a separate issue?

Also, consider moving this to a separate PR focused on transaction tracing functionality, as it appears unrelated to the current PR's objective of simplifying FixedHash usage.

Let's check the history of this test:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search git history for this test
git log -p --all -S "test transaction traces"

Length of output: 116


Script:

#!/bin/bash
# Let's try a broader search for any mentions of transaction traces in the codebase
rg -i "transaction.?trace" -A 2 -B 2

# Also search for the test file name pattern
rg "debug_traces_test" -A 2 -B 2

Length of output: 1659


Script:

#!/bin/bash
# Let's check the content of the debug_traces_test.js to understand the test functionality
cat tests/web3js/debug_traces_test.js

# Also check for any recent changes to transaction tracing functionality
git log -p --since="6 months ago" -- api/debug.go storage/index.go

Length of output: 61491

services/ingestion/engine.go (1)

54-54: 🛠️ Refactor suggestion

Add nil check for blocksProvider parameter.

The constructor should validate that the blocksProvider parameter is not nil, similar to how other critical dependencies might be validated.

 func NewEventIngestionEngine(
 	subscriber EventSubscriber,
 	blocksProvider *replayer.BlocksProvider,
 	store *pebble.Storage,
 	...
 ) *Engine {
+	if blocksProvider == nil {
+		panic("blocksProvider cannot be nil")
+	}
 	log = log.With().Str("component", "ingestion").Logger()
 	...

Also applies to: 71-71

tests/helpers.go (1)

156-156: 💡 Codebase verification

The RateLimit increase in tests is inconsistent with production defaults

The increased RateLimit of 500 in tests is inconsistent with the documented and configured production default of 50 requests/second. This test configuration is 10x higher than what users would experience in production, which could lead to:

  1. False positives in tests that wouldn't work in production
  2. Missing rate-limiting related issues during testing

Specific findings:

  • Production default is 50 req/s (cmd/run/cmd.go)
  • Documented as 50 req/s in README.md
  • Rate limiting is actively enforced in API endpoints (api/ratelimiter.go)
  • Test configuration is set to 500 req/s (tests/helpers.go)

Please either:

  • Adjust the test RateLimit to match production defaults
  • Document why tests require a higher rate limit
  • Consider using production defaults in integration tests to catch real-world issues
🔗 Analysis chain

Verify the 10x increase in RateLimit.

The RateLimit has been significantly increased from 50 to 500. While this aligns with changes in other test files, please verify:

  1. The reason for this specific increase
  2. Any potential impact on test stability
  3. Whether this is related to the re-indexing requirements
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other rate limit configurations to ensure consistency
echo "Searching for rate limit configurations..."
rg -i "ratelimit.*50|ratelimit.*500" --type go

Length of output: 339


Script:

#!/bin/bash
# Let's check the git history to understand the context of this change
git log -p --all -S "RateLimit.*50" --since="6 months ago"

# Also check for any related test changes or discussions
rg -i "test.*timeout|rate.*limit" --type go -A 3 -B 3

Length of output: 60


Script:

#!/bin/bash
# Let's check the current configuration and usage of RateLimit in the codebase
echo "Checking RateLimit usage in code..."
rg -i "ratelimit" --type go -A 3 -B 3

# Also check for any comments or documentation about rate limiting
echo "Checking for rate limit documentation..."
rg -i "rate.?limit" README.md docs/ 2>/dev/null

Length of output: 13006

services/requester/requester.go (1)

85-85: 🛠️ Refactor suggestion

TODO comment suggests changing evmHeight type from int64 to uint64

The TODO comment correctly identifies that block heights should be unsigned integers since they cannot be negative. This aligns with blockchain conventions where block heights are always non-negative.

Consider implementing this TODO by:

  1. Changing all method signatures to use uint64 instead of int64
  2. Updating the implementation to handle the type change
  3. Updating all callers of these methods

This change would:

  • Prevent potential issues with negative block heights
  • Better represent the domain concept
  • Improve type safety
services/replayer/blocks_provider.go (3)

79-85: ⚠️ Potential issue

Protect latestBlock with mutex for thread safety

The OnBlockReceived method updates bp.latestBlock without synchronization. If BlocksProvider is accessed by multiple goroutines, this can lead to race conditions.

Consider adding a mutex to protect access to bp.latestBlock, ensuring that reads and writes are thread-safe.


92-112: ⚠️ Potential issue

Ensure thread-safe access to latestBlock in GetSnapshotAt

The GetSnapshotAt method reads bp.latestBlock without any synchronization mechanism. Concurrent access might cause data races if latestBlock is modified by OnBlockReceived.

Implement synchronization, such as using a mutex, to guard reads and writes to bp.latestBlock.


29-51: 🛠️ Refactor suggestion

Improve error handling in GetHashFunc to avoid silent failures

In the GetHashFunc, errors from GetByHeight and block.Hash() are ignored, returning an empty hash. This can obscure underlying issues and make debugging difficult.

Consider logging the errors or propagating them for proper handling. For example:

 if err != nil {
-    return gethCommon.Hash{}
+    // Log the error here (or propagate it) instead of silently returning an empty hash.
+    return gethCommon.Hash{}
 }

Committable suggestion skipped: line range outside the PR's diff.

tests/web3js/debug_traces_test.js (2)

37-45: 🛠️ Refactor suggestion

Replace hardcoded addresses with variables

Using variables like conf.eoa.address and contractAddress instead of hardcoded addresses improves maintainability and flexibility, ensuring the tests remain valid even if the addresses change.

Apply this diff to replace hardcoded addresses:

- assert.equal(txTrace.from, '0xfacf71692421039876a5bb4f10ef7a439d8ef61e')
+ assert.equal(txTrace.from, conf.eoa.address)

- assert.equal(txTrace.to, '0x99a64c993965f8d69f985b5171bc20065cc32fab')
+ assert.equal(txTrace.to, contractAddress)

Also applies to: 94-103, 164-212


41-42: ⚠️ Potential issue

Correct the usage of assert.lengthOf with BigInt values

The assert.lengthOf function expects a number, but 9856n and 9806n are BigInt literals. Passing BigInts may cause errors or unexpected behavior. Convert BigInts to numbers or use numeric literals without the n suffix.

Apply this diff to fix the issue:

- assert.lengthOf(txTrace.input, 9856n)
- assert.lengthOf(txTrace.output, 9806n)
+ assert.lengthOf(txTrace.input, 9856)
+ assert.lengthOf(txTrace.output, 9806)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    assert.lengthOf(txTrace.input, 9856)
    assert.lengthOf(txTrace.output, 9806)
api/api.go (3)

288-288: ⚠️ Potential issue

Potential integer overflow when casting evmHeight to int64

Casting evmHeight (type uint64) to int64 may cause incorrect values or overflow if evmHeight exceeds math.MaxInt64. This could lead to unexpected behavior when evmHeight is larger than the maximum value for int64. Consider updating the GetBalance method in the evm interface to accept a uint64 parameter, or ensure that evmHeight does not exceed math.MaxInt64.


1089-1113: ⚠️ Potential issue

Inconsistent handling of block numbers between int64 and uint64

The getBlockNumber method now returns uint64, but it processes negative block numbers representing special cases (e.g., LatestBlockNumber, PendingBlockNumber). Casting negative int64 values to uint64 can result in unintended large positive numbers. It's important to handle negative block numbers properly to avoid incorrect block heights. Consider maintaining the return type as int64 or adjusting the handling logic to manage special cases without risky type casting.


1119-1144: ⚠️ Potential issue

Potential incorrect type conversion in resolveBlockNumber function

In resolveBlockNumber, height is an int64 which may be negative due to special block numbers. Casting a negative int64 to uint64 can produce incorrect results. Ensure that height is non-negative before casting, or change the function to consistently work with int64 to prevent such issues.
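To illustrate the concern (a hypothetical helper, not the PR's code), special negative block numbers must be mapped to a concrete height before any cast to uint64:

// resolveHeight maps negative sentinel block numbers (e.g. latest/pending)
// to a concrete height before converting to uint64, avoiding the overflow
// that a direct uint64(height) cast on a negative value would cause.
func resolveHeight(height int64, latestIndexed uint64) (uint64, error) {
	if height < 0 {
		// Latest, pending, safe and finalized all resolve to the latest
		// indexed height in this sketch.
		return latestIndexed, nil
	}
	return uint64(height), nil
}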

@franklywatson franklywatson merged commit 97481ae into feature/local-tx-reexecution Nov 6, 2024
2 checks passed
@franklywatson franklywatson deleted the simplify-fixed-hash-field branch November 6, 2024 00:00
Projects
Status: ✅ Done
5 participants