diff --git a/CHANGELOG-3.1.md b/CHANGELOG-3.1.md
index 1e513160f0e..fd73ad9f33f 100644
--- a/CHANGELOG-3.1.md
+++ b/CHANGELOG-3.1.md
@@ -10,7 +10,7 @@ See [code changes](https://github.com/coreos/etcd/compare/v3.1.15...v3.1.16) and
 ### etcd server
 
 - Fix [`mvcc` server panic from restore operation](https://github.com/coreos/etcd/pull/9775).
-  - Let's assume that a watcher is requested with a future revision X and sent to node A, which shortly becomes isolated from a network partition. Meanwhile, cluster makes progress and when the partition gets removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision is still lower than the watch revision X, etcd server panicked during snapshot restore operation.
+  - Let's assume that a watcher had been requested with a future revision X and sent to node A, which then became network-partitioned. Meanwhile, the cluster makes progress. Then, when the partition is removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision was still lower than the watch revision X, **etcd server panicked** during the snapshot restore operation.
   - Now, this server-side panic has been fixed.
 
 ### Go
diff --git a/CHANGELOG-3.2.md b/CHANGELOG-3.2.md
index 777ace87a21..42e8de932be 100644
--- a/CHANGELOG-3.2.md
+++ b/CHANGELOG-3.2.md
@@ -11,7 +11,7 @@ See [code changes](https://github.com/coreos/etcd/compare/v3.2.20...v3.2.21) and
 
 - Fix [auth storage panic when simple token provider is disabled](https://github.com/coreos/etcd/pull/8695).
 - Fix [`mvcc` server panic from restore operation](https://github.com/coreos/etcd/pull/9775).
-  - Let's assume that a watcher is requested with a future revision X and sent to node A, which shortly becomes isolated from a network partition. Meanwhile, cluster makes progress and when the partition gets removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision is still lower than the watch revision X, etcd server panicked during snapshot restore operation.
+  - Let's assume that a watcher had been requested with a future revision X and sent to node A, which then became network-partitioned. Meanwhile, the cluster makes progress. Then, when the partition is removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision was still lower than the watch revision X, **etcd server panicked** during the snapshot restore operation.
   - Now, this server-side panic has been fixed.
 
 ### Go
diff --git a/CHANGELOG-3.3.md b/CHANGELOG-3.3.md
index 1eb22c519c8..8665f793b83 100644
--- a/CHANGELOG-3.3.md
+++ b/CHANGELOG-3.3.md
@@ -13,7 +13,7 @@ See [code changes](https://github.com/coreos/etcd/compare/v3.3.5...v3.3.6) and [
   - Previously, when auth token is an empty string, it returns [`failed to initialize the etcd server: auth: invalid auth options` error](https://github.com/coreos/etcd/issues/9349).
 - Fix [auth storage panic on server lease revoke routine with JWT token](https://github.com/coreos/etcd/issues/9695).
 - Fix [`mvcc` server panic from restore operation](https://github.com/coreos/etcd/pull/9775).
-  - Let's assume that a watcher is requested with a future revision X and sent to node A, which shortly becomes isolated from a network partition. Meanwhile, cluster makes progress and when the partition gets removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision is still lower than the watch revision X, etcd server panicked during snapshot restore operation.
+  - Let's assume that a watcher had been requested with a future revision X and sent to node A, which then became network-partitioned. Meanwhile, the cluster makes progress. Then, when the partition is removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision was still lower than the watch revision X, **etcd server panicked** during the snapshot restore operation.
   - Now, this server-side panic has been fixed.
 
 ### Go
diff --git a/CHANGELOG-3.4.md b/CHANGELOG-3.4.md
index c83065ec2c8..c964791edac 100644
--- a/CHANGELOG-3.4.md
+++ b/CHANGELOG-3.4.md
@@ -41,12 +41,15 @@ See [code changes](https://github.com/coreos/etcd/compare/v3.3.0...v3.4.0) and [
 
 ### Breaking Changes
 
+- Make [`ETCDCTL_API=3 etcdctl` default](https://github.com/coreos/etcd/issues/9600).
+  - Now, `etcdctl set foo bar` must be `ETCDCTL_API=2 etcdctl set foo bar`.
+  - Now, `ETCDCTL_API=3 etcdctl put foo bar` could be just `etcdctl put foo bar`.
 - **Remove `etcd --ca-file` flag**, instead [use `--trusted-ca-file`](https://github.com/coreos/etcd/pull/9470) (`--ca-file` has been deprecated since v2.1).
 - **Remove `etcd --peer-ca-file` flag**, instead [use `--peer-trusted-ca-file`](https://github.com/coreos/etcd/pull/9470) (`--peer-ca-file` has been deprecated since v2.1).
 - **Remove `pkg/transport.TLSInfo.CAFile` field**, instead [use `pkg/transport.TLSInfo.TrustedCAFile`](https://github.com/coreos/etcd/pull/9470) (`CAFile` has been deprecated since v2.1).
-- Deprecated `latest` [release container](https://console.cloud.google.com/gcr/images/etcd-development/GLOBAL/etcd) tag.
+- Deprecate `latest` [release container](https://console.cloud.google.com/gcr/images/etcd-development/GLOBAL/etcd) tag.
   - **`docker pull gcr.io/etcd-development/etcd:latest` would not be up-to-date**.
-- Deprecated [minor](https://semver.org/) version [release container](https://console.cloud.google.com/gcr/images/etcd-development/GLOBAL/etcd) tags.
+- Deprecate [minor](https://semver.org/) version [release container](https://console.cloud.google.com/gcr/images/etcd-development/GLOBAL/etcd) tags.
   - `docker pull gcr.io/etcd-development/etcd:v3.3` would still work.
   - **`docker pull gcr.io/etcd-development/etcd:v3.4` would not work**.
   - Use **`docker pull gcr.io/etcd-development/etcd:v3.4.x`** instead, with the exact patch version.
@@ -229,7 +232,7 @@ See [security doc](https://github.com/coreos/etcd/blob/master/Documentation/op-g
   - Which possibly causes [missing events from "unsynced" watchers](https://github.com/coreos/etcd/issues/9086).
   - A node gets network partitioned with a watcher on a future revision, and falls behind receiving a leader snapshot after partition gets removed. When applying this snapshot, etcd watch storage moves current synced watchers to unsynced since sync watchers might have become stale during network partition. And reset synced watcher group to restart watcher routines. Previously, there was a bug when moving from synced watcher group to unsynced, thus client would miss events when the watcher was requested to the network-partitioned node.
 - Fix [`mvcc` server panic from restore operation](https://github.com/coreos/etcd/pull/9775).
-  - Let's assume that a watcher is requested with a future revision X and sent to node A, which shortly becomes isolated from a network partition. Meanwhile, cluster makes progress and when the partition gets removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision is still lower than the watch revision X, etcd server panicked during snapshot restore operation.
+  - Let's assume that a watcher had been requested with a future revision X and sent to node A, which then became network-partitioned. Meanwhile, the cluster makes progress. Then, when the partition is removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision was still lower than the watch revision X, **etcd server panicked** during the snapshot restore operation.
   - Now, this server-side panic has been fixed.
 - Fix [server panic on invalid Election Proclaim/Resign HTTP(S) requests](https://github.com/coreos/etcd/pull/9379).
   - Previously, wrong-formatted HTTP requests to Election API could trigger panic in etcd server.
@@ -311,7 +314,7 @@ Note: **v3.5 will deprecate `etcd --log-package-levels` flag for `capnslog`**; `
 
 ### gRPC proxy
 
 - Fix [etcd server panic from restore operation](https://github.com/coreos/etcd/pull/9775).
-  - Let's assume that a watcher is requested with a future revision X and sent to node A, which shortly becomes isolated from a network partition. Meanwhile, cluster makes progress and when the partition gets removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision is still lower than the watch revision X, etcd server panicked during snapshot restore operation.
+  - Let's assume that a watcher had been requested with a future revision X and sent to node A, which then became network-partitioned. Meanwhile, the cluster makes progress. Then, when the partition is removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision was still lower than the watch revision X, **etcd server panicked** during the snapshot restore operation.
   - Especially, gRPC proxy was affected, since it detects a leader loss with a key `"proxy-namespace__lostleader"` and a watch revision `"int64(math.MaxInt64 - 2)"`.
   - Now, this server-side panic has been fixed.