Hey, just wanted to drop this in as a potential issue. I've been using the cloudnative-pg/pgvectors image to run Immich in my homelab. I recently had to shrink my cluster to replace a node, and once the new hardware was up, CNPG was unable to join a new Postgres instance to the cluster, logging this error:
{"level":"info","ts":"2024-08-22T14:36:38Z","logger":"pg_basebackup","msg":"WARNING: aborting backup due to backend exiting before pg_backup_stop was called","pipe":"stderr","logging_pod":"postgres16-8-join"}
{"level":"info","ts":"2024-08-22T14:36:38Z","logger":"pg_basebackup","msg":"pg_basebackup: error: backup failed: ERROR: file name too long for tar format: \"pg_vectors/indexes/0000000000000000000000000000000065de7f3829e7a01800096f010011f2ad/segments/4378fbe3-644b-4937-8671-86878244ed2c\"","pipe":"stderr","logging_pod":"postgres16-8-join"}
{"level":"info","ts":"2024-08-22T14:36:38Z","logger":"pg_basebackup","msg":"pg_basebackup: removing data directory \"/var/lib/postgresql/data/pgdata\"","pipe":"stderr","logging_pod":"postgres16-8-join"}
{"level":"error","ts":"2024-08-22T14:36:38Z","msg":"Error joining node","logging_pod":"postgres16-8-join","error":"error in pg_basebackup, exit status 1","stacktrace":"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:125\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:163\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/instance/join.joinSubCommand\n\tinternal/cmd/manager/instance/join/cmd.go:139\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/instance/join.NewCmd.func2\n\tinternal/cmd/manager/instance/join/cmd.go:72\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/[email protected]/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/[email protected]/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/[email protected]/command.go:1041\nmain.main\n\tcmd/manager/main.go:66\nruntime.main\n\t/opt/hostedtoolcache/go/1.22.5/x64/src/runtime/proc.go:271"}
Error: error in pg_basebackup, exit status 1
With the help of one of the Immich devs, I was eventually able to resolve this by dropping the index referenced in the error message, joining the instance to the cluster, and then recreating the index.
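For anyone hitting the same thing, the workaround looked roughly like this. This is only a sketch: `clip_index`, `smart_search`, `embedding`, and the `vector_cos_ops` operator class are placeholders for my setup, so substitute the table, column, index, and operator class from your own schema (and your index options, if you set any).

```sql
-- Run against the current primary BEFORE joining the new instance.
-- Placeholder names: adjust table/column/index and operator class to your schema.
DROP INDEX IF EXISTS clip_index;

-- Join the new instance to the cluster while the index is gone, then recreate it.
-- pgvecto.rs indexes use the "vectors" access method.
CREATE INDEX clip_index ON smart_search
    USING vectors (embedding vector_cos_ops);
```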
This appears to be tied to the v0.3.0 branch, as I never had a problem joining new Postgres cluster members while running 0.2.x.
I hit this same issue, and it's blocking my Postgres cluster from coming up in a similar scenario. Would it be possible for the devs to merge the linked PR that fixes this? It looks like it has been approved with passing checks for a few weeks. Barring that, could we at least get a documentation blurb on how to remove and recreate the index?
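In case it helps others in the meantime, you can at least list which indexes would need the drop/recreate treatment before the join. A minimal sketch using the standard `pg_indexes` view, filtering on the pgvecto.rs `vectors` access method:

```sql
-- List all indexes built with the pgvecto.rs "vectors" access method,
-- i.e. the candidates to drop before joining a new instance and recreate afterwards.
SELECT schemaname, tablename, indexname, indexdef
FROM pg_indexes
WHERE indexdef LIKE '%USING vectors%';
```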