
Releases: digital-asset/canton

canton v2.9.5

23 Oct 08:54
c198fb6

Release of Canton 2.9.5

Canton 2.9.5 has been released on October 22, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release of Canton that fixes bugs, including two critical bugs that can corrupt the state of a participant node when retroactive interfaces or migrated contracts from protocol version 3 are used.

Bugfixes

(24-020, Critical): Participant crashes due to retroactive interface validation

Description

The view reinterpretation of an exercise of a retroactive interface may fail because the engine does not explicitly request the interface package. This can lead to a ledger fork as participants come to different conclusions.

Affected Deployments

Participant

Affected Versions

2.5, 2.6, 2.7, 2.8.0-2.8.9, 2.9.0-2.9.4

Impact

A participant crashes during transaction validation when using retroactive interfaces.

Symptom

"Validating participant emits warning:


LOCAL_VERDICT_FAILED_MODEL_CONFORMANCE_CHECK(5,571d2e8a): Rejected transaction due to a failed model conformance check: DAMLeError(
  Preprocessing(
    Lookup(
      NotFound(
        Package(

And then emits an error:

An internal error has occurred.
java.lang.IllegalStateException: Mediator approved a request that we have locally rejected

Workaround

None

Likeliness

Very likely for all multi-participant setups that use retroactive interface instances.

Recommendation

Upgrade to 2.9.5

(24-024, Critical): Participant incorrectly handles unauthenticated contract IDs in PV5

Issue Description

Contracts created on participants running PV3 have an unauthenticated contract ID. When these participants are upgraded to PV5 without setting the allow-for-unauthenticated-contract-ids flag to true, any submitted transaction that uses such unauthenticated contract IDs will produce warnings during validation, but will also put the participants in an incorrect state. From then on, the participant will no longer output any ledger events and will fail to reconnect to the domain.

Affected Deployments

Participant

Affected Versions

2.9.0-2.9.4

Impact

The participant is left in a failed state.

Symptom

Connecting to the domain fails with an internal error IllegalStateException: Cannot find event for sequenced in-flight submission.

The participant does not emit any ledger events any more.

Workaround

No workaround is possible for clients. Support and engineering can try to fix affected participants by modifying the participant's database tables.

Likeliness

Requires a submission request using a contract with an unauthenticated contract ID. This can only happen for participants that have been migrated from PV3 to PV5 and have not set the flag to allow unauthenticated contract IDs on all involved participants.

Recommendation

Upgrade during the next maintenance window to a version with the fix.
If an upgrade is not possible and old contracts from PV3 are used, enable the allow-for-unauthenticated-contract-ids flag on all the participants.
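
For illustration, a minimal configuration sketch of that flag, assuming it is exposed under each participant's parameters section (verify the exact key against the configuration reference for your Canton version; participant1 is a placeholder node name):

canton.participants.participant1.parameters.allow-for-unauthenticated-contract-ids = true // placeholder participant name; apply on every involved participant

The setting needs to be applied on every participant involved in transactions that use the migrated PV3 contracts.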

(24-026, High): Hard Synchronization Domain Migration fails to check for in-flight transactions

Issue Description

Since 2.9.0, the Hard Synchronization Domain Migration command repair.migrate_domain aborts when it detects in-flight submissions on the participant. However, it should also check for in-flight transactions.

Affected Deployments

Participant

Affected Versions

2.9.0-2.9.4

Impact

Performing a Hard Synchronization Domain Migration while there are still in-flight submissions and transactions may result in a ledger fork.

Symptom

A ledger fork after running the Hard Synchronization Domain Migration command repair.migrate_domain, which may result in ACS commitment mismatches.

Workaround

Follow the documented steps; in particular, ensure that there is no activity on any participant before proceeding with a Hard Synchronization Domain Migration.

Likeliness

The bug only manifests when the operator skips the documented step of ensuring that there is no more activity on any participant before a Hard Synchronization Domain Migration, and in-flight transactions are still present when the migration executes.

Recommendation

Upgrade to 2.9.5 to properly safeguard against running the Hard Synchronization Domain Migration command repair.migrate_domain while there are still in-flight submissions or transactions.

(24-021, Medium): Participant replica fails to become active

Issue Description

A participant replica fails to become active under certain database network conditions. The previously active replica fails to fully transition to passive because its database connection health checks are blocked, which in turn causes the other replica's transition to active to fail. Eventually the database health checks become unblocked and the replica transitions to passive, but the other replica does not recover from the earlier failed transition to active, leaving both replicas passive.

Affected Deployments

Participant

Affected Versions

All 2.3-2.7
2.8.0-2.8.10
2.9.0-2.9.4

Impact

Both participant replicas remain passive and do not serve transactions.

Symptom

The transition to active failed on a participant because the maximum number of retries was exhausted:

2024-09-02T07:08:56,178Z participant2 [c.d.c.r.DbStorageMulti:participant=participant1] [canton-env-ec-36] ERROR dd:[ ] c.d.c.r.DbStorageMulti:participant=participant1 tid:effa59a8f7ddec2e132079f2a4bd9885 - Failed to transition replica state
com.daml.timer.RetryStrategy$TooManyAttemptsException: Gave up trying after Some(3000) attempts and 300.701142545 seconds.

Workaround

Restart both replicas of the participant

Likeliness

Possible under specific database connection issues

Recommendation

Upgrade to the next patch release during regular maintenance window.

(24-022, Medium): Participant replica does not clear package service cache

Issue Description

When a participant replica becomes active, it does not refresh its package service cache. If a vetting attempt is made on the participant that fails because the package has not been uploaded, the "missing package" response is cached. If the package is then uploaded to another replica and the original replica becomes active again, its package service cache will still record the package as nonexistent. When the package is used in a transaction, this results in a local model conformance error, as the transaction validator cannot find the package, whereas other parts of the participant that do not use the package service can locate it successfully.

Affected Deployments

Participant

Affected Versions

2.8.0-2.8.10, 2.9.0-2.9.4

Impact

Replica crashes during transaction validation.

Symptom

Validating participant emits warning:


LOCAL_VERDICT_FAILED_MODEL_CONFORMANCE_CHECK(5,a2b60642): Rejected transaction due to a failed model conformance check: UnvettedPackages

And then emits an error:

An internal error has occurred.
java.lang.IllegalStateException: Mediator approved a request that we have locally rejected

Workaround

Restart recently active replica

Likeliness

Likely to happen in any replicated participant setup with frequent vetting attempts and switches between active and passive replicas between those vetting attempts.

Recommendation

Users are advised to upgrade to the next patch release (2.9.5) during their maintenance window.

(24-023, Low): Participant fails to start if quickly acquiring and then losing DB connection during bootstrap

Issue Description

When a participant starts up and acquires the active lock, the participant replica initializes its storage and begins its bootstrap logic. If the replica loses the DB connection during the bootstrap logic and before it attempts to initialize its identity, bootstrapping is halted until the identity is initialized by another replica or the lock is re-acquired. When the lock is lost, the replica manager attempts to transition the participant state to passive, which assumes the participant has been fully initialized; in this case it has not, so the passive transition waits indefinitely.

Affected Deployments

Participant

Affected Versions

2.8.0-2.8.10, 2.9.0-2.9.4

Impact

Replica gets stuck transitioning to passive state during bootstrap.

Symptom

The participant keeps emitting INFO logs like the following indefinitely:

Replica state update to Passive has not completed after

Workaround

Restart the node

Likeliness

Exceptional; requires acquiring and then losing the DB connection with precise timing during bootstrap of the node.

Recommendation

Users are advised to upgrade to the next patch release (2.9.5) during their maintenance window.

(24-025, Low): Commands for single key rotation for sequencer and mediator node fail

Description

The current commands for single key rotation with sequencer and mediator nodes (rotate_node_key
and rotate_kms_node_key) fail because they do not have the necessary domain manager reference needed to find
the old key and export the new key.

Affected Deployments

Sequencer and mediator nodes

Affected Versions

All 2.3-2.7, 2.8.0-2.8.10, 2.9.0-2.9.4

Impact

Key rotation for individual keys with sequencer or mediator nodes c...


canton v2.8.10

16 Sep 12:35
37f1308

Release of Canton 2.8.10

Canton 2.8.10 has been released on September 16, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release that fixes a critical bug for retroactive interfaces.

Bugfixes

(24-020, Critical): Participant crashes due to retroactive interface validation

Description

The view reinterpretation of an exercise of a retroactive interface may fail because the engine does not explicitly request the interface package. This can lead to a ledger fork as participants come to different conclusions.

Affected Deployments

Participant

Affected Versions

2.5, 2.6, 2.7, 2.8.0-2.8.9

Impact

A participant crashes during transaction validation when using retroactive interfaces.

Symptom

Validating participant emits warning:


LOCAL_VERDICT_FAILED_MODEL_CONFORMANCE_CHECK(5,571d2e8a): Rejected transaction due to a failed model conformance check: DAMLeError(
  Preprocessing(
    Lookup(
      NotFound(
        Package(

And then emits an error:

An internal error has occurred.
java.lang.IllegalStateException: Mediator approved a request that we have locally rejected

Workaround

None

Likeliness

Very likely for all multi-participant setups that use retroactive interface instances.

Recommendation

Upgrade to 2.8.10

Compatibility

The following Canton protocol versions are supported:

Dependency Version
Canton protocol versions 3, 4, 5

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM Zulu11.70+15-CA (build 11.0.22+7-LTS, mixed mode)
Postgres Recommended: PostgreSQL 12.20 (Debian 12.20-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.16 (Debian 13.16-1.pgdg120+1), PostgreSQL 14.13 (Debian 14.13-1.pgdg120+1), PostgreSQL 15.8 (Debian 15.8-1.pgdg120+1)
Oracle 19.20.0

canton v2.9.4

26 Aug 07:48
e0af6b1

Release of Canton 2.9.4

Canton 2.9.4 has been released on August 23, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

  • Protocol version 6 has had its status changed from "Beta" to "Unstable" due to a number of rare, but grave bugs in the new beta smart contract upgrading feature
  • Minor improvements around logging and DAR upload validation

What’s New

Protocol Version 6 Marked as Unstable

Background

In Daml 2.9 we released a smart contract upgrading feature in Beta. Underlying the feature are a new protocol version (6), and a new Daml-LF version (1.16) that were also released in Beta status.

Beta status is intended to designate features that do not yet have full backwards compatibility guarantees, or may still have some limitations, but are ready to be supported for select customers under an "initial availability" arrangement.

A number of rare, but grave, bugs in the new beta smart contract upgrading feature have been discovered during internal testing and will require breaking changes at the protocol level to fix. As a consequence, data continuity will be broken in the sense that smart contracts created on protocol version 6 in 2.9.1-2.9.4 will not be readable in future versions.

The 2.9 release as a whole is robust and functional. Only Beta features are affected.

Specific Changes

To prevent any accidental corruption of production, or even pre-production, systems, protocol version 6 has had its status changed from "Beta" to "Unstable" to clearly designate that it does not have appropriate guarantees.

Impact and Migration

Customers who are not using beta features or protocol version 6 can continue to use the 2.9 release. Customers using beta features are advised to move their testing of these features to the 2.10 release line.

To continue to use the beta features in 2.9.4 it will be necessary to enable support for unstable features.

See the user manual section on how to enable unsupported features to find out how this is done.

Minor Improvements

  • Fixed an issue preventing a participant from connecting to an old domain even if they support a common protocol version.
  • Startup errors due to TLS issues / misconfigurations are now correctly logged via the regular Canton logging tooling instead of appearing only on stdout.
  • Added extra validation to prevent malformed DARs from being uploaded.

Compatibility

The following Canton protocol and Ethereum sequencer contract versions are supported:

Dependency Version
Canton protocol versions 5

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM Zulu11.72+19-CA (build 11.0.23+9-LTS, mixed mode)
Postgres Recommended: PostgreSQL 12.20 (Debian 12.20-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.16 (Debian 13.16-1.pgdg120+1), PostgreSQL 14.13 (Debian 14.13-1.pgdg120+1), PostgreSQL 15.8 (Debian 15.8-1.pgdg120+1)
Oracle 19.20.0

canton v2.9.3

22 Jul 09:57
558d88e

Release of Canton 2.9.3

Canton 2.9.3 has been released on July 22, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release of Canton that fixes one high risk bug, which can
crash a participant node due to out of memory, and two low risk bugs.

Bugfixes

(24-017, High): Participants crash with an OutOfMemoryError

Description

The TaskScheduler keeps a huge number of tasks in a queue. The queue has been newly introduced, so the memory consumption (heap) is much higher than in previous versions. The queue size is proportional to the number of requests processed during the decision time.

Affected Deployments

Participant

Impact

Memory consumption is much higher than in previous Canton versions.

Symptom

The participant crashes with an OutOfMemoryError.

Workaround

Test the participant under load and increase the heap size accordingly. If possible, decrease the confirmation response timeout and the mediator reaction timeout.

Likeliness

High likelihood under high load and with large confirmation response and mediator reaction timeouts.

Recommendation

Upgrade to 2.9.3.

(24-018, Low): Participants log "ERROR: The check for missing ticks has failed unexpectedly"

Description

The TaskScheduler monitoring crashes and logs an Error.

Affected Deployments

Participant

Impact

The monitoring of the task scheduler crashes.

Symptom

You see an error in the logs: ERROR: The check for missing ticks has failed unexpectedly.

Workaround

If you need the monitoring to trouble-shoot missing ticks, restart the participant to restart the monitoring.

Likeliness

This will eventually occur on every system.

Recommendation

Ignore the message until upgrading to 2.9.3.

(24-015, Low): Pointwise flat transaction Ledger API queries can unexpectedly return TRANSACTION_NOT_FOUND

Description

When a party submits a command that has no events for contracts whose stakeholders are amongst the submitters, the resulting transaction cannot be queried by pointwise flat transaction Ledger API queries. This impacts the GetTransactionById, GetTransactionByEventId and SubmitAndWaitForTransaction gRPC endpoints.

Affected Deployments

Participant

Impact

Users might perceive that a command was not successful even if it was.

Symptom

TRANSACTION_NOT_FOUND is returned on a query that is expected to succeed.

Workaround

Instead, query the transaction tree by transaction ID to get the transaction details.

Likeliness

Lower likelihood, as commands usually have events whose contracts' stakeholders are amongst the submitting parties.

Recommendation

Users are advised to upgrade to the next patch release during their maintenance window.

Compatibility

The following Canton protocol and Ethereum sequencer contract versions are supported:

Dependency Version
Canton protocol versions 5, 6*

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM Zulu11.72+19-CA (build 11.0.23+9-LTS, mixed mode)
Postgres Recommended: PostgreSQL 12.19 (Debian 12.19-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.15 (Debian 13.15-1.pgdg120+1), PostgreSQL 14.12 (Debian 14.12-1.pgdg120+1), PostgreSQL 15.7 (Debian 15.7-1.pgdg120+1)
Oracle 19.20.0

canton v2.9.1

16 Jul 10:16
06451f6

Release of Canton 2.9.1

Canton 2.9.1 has been released on July 15, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

We are excited to announce Canton 2.9.1, which offers additional features and
improvements:

  • KMS drivers (Beta)
  • support for smart contract upgrades (Beta)
  • operational improvements around monitoring, liveness, and logging

See below for details.

What’s New

Breaking: Protocol version should be set explicitly

Until now, the configuration of a domain picked the latest protocol version by default.
Since the protocol version is an important parameter of the domain, having this value set behind
the scenes caused unwanted behavior.

You now must specify the protocol version for your domain:

myDomain {
  init.domain-parameters.protocol-version = 5
}

For a domain manager:

domainManager {
  init.domain-parameters.protocol-version = 5
}

You can read more about protocol versions in the docs.
If you are unsure which protocol version to pick:

  • Use the last one supported by your binary (see docs).
  • Ensure all your environments use the same protocol version: you should not use one protocol version in
    your test environment and another one in production.

Breaking: Protocol version 3 and 4 discontinuation

This Canton version requires at least protocol version 5.

If your domain is running protocol version 5, you can replace the binaries and apply the database migrations.

If you have a domain running protocol version 3 or 4, you first need to bootstrap a new domain running at least
protocol version 5 and then perform a hard domain migration.

Upgrading instructions can be found in the documentation (see the upgrading manual).

KMS Drivers

The Canton protocol relies on a number of cryptographic operations such as
asymmetric encryption and digital signatures. To maximize the operational
security of a Canton node the corresponding private keys should not be stored or
processed in cleartext. A Key Management System (KMS) or Hardware Security
Module (HSM) allows us to perform such cryptographic operations where the
private key resides securely inside the KMS/HSM. All nodes in Canton can make
use of a KMS.

AWS KMS and Google Cloud KMS are supported as of Canton v2.7. To broaden the
support for other KMSs and HSMs, Canton v2.9 introduces a plugin approach, called
KMS Drivers, which allows the implementation of custom integrations. You can
find more information on how to develop a KMS driver in the KMS Driver Guide.

Smart Contract Updates

The feature allows Daml models (packages in DAR files) to be updated on Canton
transparently, provided some guidelines in making the changes are followed. For
example, you can fix an application bug by uploading the DAR of the fixed
package. This is a Beta feature that requires LF 1.16 and Canton protocol version
6. Please refer to the Daml enterprise release notes for more information on this feature.
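
As an illustrative sketch only: to experiment with this Beta feature on a test domain, the domain would need to be configured to use protocol version 6, mirroring the protocol-version example above (myTestDomain is a placeholder name; enabling Beta features may require additional support settings described in the manual):

myTestDomain {
  // Beta smart contract upgrading requires protocol version 6 and LF 1.16
  init.domain-parameters.protocol-version = 6
}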

Mediator liveness health service and built-in watchdog

Previously, a mediator node that had irrecoverably lost its connection to a Canton domain
would not exit and would continue to report SERVING on the liveness health endpoint.
This led to mediator nodes not being able to automatically recover from unexpected failures.
Now the sequencer connection status of a mediator node is tied to the liveness health endpoint,
allowing for external monitoring and automated intervention (e.g. by setting up k8s liveness probes).
Additionally, for systems not using k8s, it is possible to enable a built-in node watchdog that monitors
the liveness health endpoint and forcefully makes the node exit if it is no longer alive.
By default, the watchdog is disabled; it can be enabled by setting the following configuration:

canton.mediators.<mediator_node>.parameters.watchdog = {
    enabled = true
    checkInterval = 15s // default value
    killDelay = 30s // default value
}

Configuration parameters are:

  • checkInterval - interval at which the watchdog will check the liveness health endpoint
  • killDelay - delay after the watchdog has detected that the node is no longer alive
    before it forcefully exits the node

Paging in Party Management

Background

Being able to retrieve all parties known to a participant in a paging fashion has been a frequently requested feature. When the number of parties on a participant exceeds tens of thousands, trying to deliver them all in a single message presents many challenges: the corresponding database operation can take a long time, internal memory buffers within the participant or the client application can be exhausted, and finally the maximum size of the gRPC message can be exceeded. In extreme cases this could lead to an OOM crash.

Specific Changes

The ListKnownParties method on the PartyManagementService now takes two additional parameters. The new page_size field determines the maximum number of results to be returned by the server. The new page_token field on the other hand is a continuation token that signals to the server to fetch the next page containing the results. Each ListKnownPartiesResponse response contains a page of parties and a next_page_token field that can be used to populate the page_token field for a subsequent request. When the last page is reached, the next_page_token is empty. The parties on each page are sorted in ascending order according to their ids. The pages themselves are sorted as well.

The GetLedgerApiVersion method of the VersionService contains a new features.party_management field within the returned GetLedgerApiVersionResponse message. It describes the capabilities of party management through a sub-message called PartyManagementFeature. At the moment it contains just one field, max_parties_page_size, which specifies the maximum number of parties that will be sent per page by default.

Configuration

The default maximum size of the page returned by the participant in response to the ListKnownParties call has been set to 10'000. It can be modified through the max-parties-page-size entry:

canton.participants.participant.ledger-api.party-management-service.max-parties-page-size=777

Impact and Migration

The change may have an impact on your workflow if your participant contains more than 10'000 parties and you rely on the results of ListKnownParties containing all parties known to the participant. You will need to do one of two things:

  • Change your workflow to utilize a series of ListKnownParties calls chained by page tokens instead of a single call. This is the recommended approach.
  • Change your configuration to increase the maximum page size returned by the participant.

Node's Exit on Fatal Failures

When a node encounters a fatal failure that Canton cannot handle gracefully yet, the new default behavior is that the node exits/stops the process and relies on an external process or service monitor to restart the node's process.

The following failures are considered fatal and now lead to an exit of the process:

  1. Unhandled exceptions when processing events from a domain, which previously led to a disconnect from that domain.
  2. Failed transition from an active replica to a passive replica, which may result in an invalid state of the node.
  3. Failed transition from a passive replica to an active replica, which may result in an invalid state of the node.

The new behavior can be reverted by setting: canton.parameters.exit-on-fatal-failures = false in the configuration.
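
For example, a minimal configuration file snippet reverting to the previous behavior:

canton {
  // keep the node running instead of exiting the process on fatal failures
  parameters.exit-on-fatal-failures = false
}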

Minor Improvements

Logging of Conflict Reason

When a command is rejected due to conflict (e.g. usage of an inactive contract),
every participant detecting the conflict will now log the resource causing the conflict at INFO level.
This change affects the following error codes:
LOCAL_VERDICT_LOCKED_CONTRACTS, LOCAL_VERDICT_LOCKED_KEYS, LOCAL_VERDICT_INACTIVE_CONTRACTS,
LOCAL_VERDICT_DUPLICATE_KEY, LOCAL_VERDICT_INCONSISTENT_KEY, LOCAL_VERDICT_CREATES_EXISTING_CONTRACTS

Repair service improvements

  • Prevent concurrent execution of a domain (re-)connection while repair operations are in flight.
  • Commands ignore_events and unignore_events are now also available on remote nodes.

Error code changes

  • PACKAGE_NAMES_NOT_FOUND is introduced for reporting package-name identifiers that could not be found.
  • When an access token expires and the Ledger API stream is terminated, an ABORTED(ACCESS_TOKEN_EXPIRED) error is returned.
  • DAR_NOT_VALID_UPGRADE is introduced for reporting that the uploaded DAR is not upgrade-compatible with other existing DARs on the participant.
  • KNOWN_DAR_VERSION is introduced for reporting that the uploaded DAR name and version is already known to the participant.
  • NO_INTERNAL_PARTICIPANT_DATA_BEFORE is introduced and returned when participant.pruning.find_safe_offset is invoked with a timestamp before the earliest
    known internal participant data.

Remote Log Level Changes

The log levels and last errors can now be accessed remotely from the logging
command group on remote consoles.

New Block Sequencer Metrics

As an early access feature, the block sequencer now exposes various labeled ...


canton v2.8.9

10 Jul 15:26
f0ddd4b

Release of Canton 2.8.9

Canton 2.8.9 has been released on July 10, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release that fixes a critical bug on Oracle, a regression of a critical bug, and provides minor logging improvements.

Bugfixes

Fix Regression of 24-002

A regression of the bug fix for 24-002, which was released in 2.7.7, was introduced in 2.8.x, so that bug 24-002 was not properly fixed there.
That regression is resolved as part of 2.8.9, and 24-002 is now properly fixed.

(24-016, Critical): Incorrect key updates for transactions with many key updates on Oracle

Issue Description

If a very large transaction with more than 1000 contract key updates is submitted to the participant,
the Oracle SQL query to update the contract keys table will fail.
The error is not correctly propagated and the participant continues processing.
As a result, the contract keys uniqueness check table will contain invalid entries, leading to key uniqueness not being checked correctly.

Affected Deployments

Participant on Oracle

Affected Versions

2.3.0-2.3.19
2.4-2.6,
2.7.0-2.7.9
2.8.0-2.8.8

Impact

Key uniqueness check will not work for keys that are updated by very large transactions.

Symptom

Participant logs the warning "java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000"

Workaround

Ensure you do not submit large transactions that create and archive more than 999 contracts that use contract keys.

Likeliness

Deterministic with very large transactions that yield more than 1000 key updates.

Recommendation

Upgrade during the next maintenance window. If you are submitting very large transactions with many contract key updates, update immediately.

What’s New

Minor Improvements

Logging of Conflict Reason

When a command is rejected due to conflict (e.g. usage of an inactive contract),
every participant detecting the conflict will now log the resource causing the conflict at INFO level.
This change affects the following error codes:
LOCAL_VERDICT_LOCKED_CONTRACTS, LOCAL_VERDICT_LOCKED_KEYS, LOCAL_VERDICT_INACTIVE_CONTRACTS,
LOCAL_VERDICT_DUPLICATE_KEY, LOCAL_VERDICT_INCONSISTENT_KEY, LOCAL_VERDICT_CREATES_EXISTING_CONTRACTS

Compatibility

The following Canton protocol versions are supported:

Dependency Version
Canton protocol versions 3, 4, 5

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM Zulu11.70+15-CA (build 11.0.22+7-LTS, mixed mode)
Postgres Recommended: PostgreSQL 12.19 (Debian 12.19-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.15 (Debian 13.15-1.pgdg120+1), PostgreSQL 14.12 (Debian 14.12-1.pgdg120+1), PostgreSQL 15.7 (Debian 15.7-1.pgdg120+1)
Oracle 19.20.0

canton v2.3.20

10 Jul 15:20
f0ddd4b

Release of Canton 2.3.20

Canton 2.3.20 has been released on July 10, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release fixing a critical issue which can occur if overly large transactions are submitted to the participant.

Bugfixes

(24-016, Critical): Incorrect key updates for transactions with many key updates on Oracle

Issue Description

If a very large transaction with more than 1000 contract key updates is submitted to the participant,
the Oracle SQL query to update the contract keys table will fail.
The error is not correctly propagated and the participant continues processing.
As a result, the contract keys uniqueness check table will contain invalid entries, leading to key uniqueness not being checked correctly.

Affected Deployments

Participant on Oracle

Affected Versions

2.3.0-2.3.19
2.4-2.6,
2.7.0-2.7.9
2.8.0-2.8.8

Impact

Key uniqueness check will not work for keys that are updated by very large transactions.

Symptom

Participant logs the warning "java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000"

Workaround

Ensure you do not submit large transactions that create and archive more than 999 contracts that use contract keys.

Likeliness

Deterministic with very large transactions that yield more than 1000 key updates.

Recommendation

Upgrade during the next maintenance window. If you are submitting very large transactions with many contract key updates, update immediately.

Compatibility

The following Canton protocol and Ethereum sequencer contract versions are supported:

Dependency Version
Canton protocol versions 2.0.0, 3.0.0
Ethereum contract versions 1.0.0, 1.0.1

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM 18.9 (build 11.0.16+8, mixed mode, sharing)
Postgres postgres (PostgreSQL) 14.12 (Debian 14.12-1.pgdg120+1)
Oracle 19.15.0
Besu besu/v21.10.9/linux-x86_64/openjdk-java-11
Fabric 2.2.2

canton v2.9.0-rc2

02 Jul 09:05
8068166

Release candidates such as 2.9.0-rc2 don't come with release notes

canton v2.8.8

26 Jun 15:23
8068166

Release of Canton 2.8.8

Canton 2.8.8 has been released on June 26, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release including bug fixes: two in the repair service and one memory leak with metrics collection.

Bugfixes

(24-012, Major): repair.purge appears not to clean up key indices in cache

Issue Description

When contracts containing keys are purged using the repair service, the corresponding contract keys remain in the Ledger API cache. Then, when a subsequent client command requests to look up the contract behind the spent key, it is rejected already at interpretation time on the originating participant.

Affected Deployments

Participant

Affected Versions

All 2.3-2.7, 2.8.0-2.8.7

Impact

Participant is unable to reuse the affected keys in subsequent commands and requires a restart

Symptom

Participant log file contains the following error when new transaction tries to use the spent key: NOT_FOUND: CONTRACT_NOT_FOUND(11,a61f3d3c): Contract could not be found with id...

Workaround

Restart the participant as the problem only affects the in-memory data

Likeliness

Repair purge must be used, followed by an attempt to use the key in a subsequent transaction.

Recommendation

Users planning to use the repair service should upgrade to 2.8.8

(24-013, Minor): repair.migrate_domain prevents participant pruning

Issue Description

After a hard domain migration the source domain RequestJournalStore clean head check throws an IllegalArgumentException on behalf of the inactive domain and prevents participant pruning.

Affected Deployments

Participant

Affected Versions

All 2.3-2.7, 2.8.0-2.8.7

Impact

Participant is unable to prune.

Symptom

Participant log file contains the following errors when attempting to prune: IllegalArgumentException: Attempted to prune at timestamp which is not earlier than _ associated with the clean head

Workaround

None except manually updating the RequestJournalStore database to set an artificially large clean head counter

Likeliness

Likely after a hard domain migration

Recommendation

Users are advised to upgrade to 2.8.8

(24-014, Major): Memory leak in the metrics associated with the grpc statistics

Issue Description

Canton components slowly accumulate memory that is not reclaimed by garbage collection, typically at a rate of 250 MB per week. This happens only if the Prometheus exporter is configured in the Canton configuration file and a Prometheus agent performs periodic metric reads.

Affected Deployments

All node types

Affected Versions

All 2.3-2.7, 2.8.0-2.8.7

Impact

Canton node can crash with an out-of-memory error (OOM).

Symptom

Node crashes. Memory dump shows an excessive memory consumption for io.opentelemetry.api.internal.InternalAttributeKeyImpl.

Workaround

Run a health dump on an affected node. This drains the collected metrics and reclaims associated memory.

Likeliness

Likely for anyone using a Prometheus agent.

Recommendation

Users are advised to upgrade to 2.8.8

Compatibility

The following Canton protocol versions are supported:

Dependency Version
Canton protocol versions 3, 4, 5

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM Zulu11.70+15-CA (build 11.0.22+7-LTS, mixed mode)
Postgres Recommended: PostgreSQL 12.19 (Debian 12.19-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.15 (Debian 13.15-1.pgdg120+1), PostgreSQL 14.12 (Debian 14.12-1.pgdg120+1), PostgreSQL 15.7 (Debian 15.7-1.pgdg120+1)
Oracle 19.20.0

canton v2.9.0-rc1

20 Jun 10:14
a828272

Release candidates such as 2.9.0-rc1 don't come with release notes