
One-time snapshots do not save locally and are empty zip files in s3 #1564

Open
krumware opened this issue Aug 16, 2019 · 5 comments


@krumware

RKE version:

v0.2.7

Docker version: (docker version, docker info preferred)

18.09.7 (local machine)

Operating system and kernel: (cat /etc/os-release, uname -r preferred)
WSL 2 Ubuntu

NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
4.19.57-microsoft-standard

Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)

AWS

cluster.yml file:

nodes:
  - address: xxx.xxx.xxx.xxx
    internal_address: xxx.xxx.xxx.xxx
    user: rancher
    role: [controlplane,worker,etcd]
    ssh_key_path: ./rancher-2-ha-server.pem
  - address: xxx.xxx.xxx.xxx
    internal_address: xxx.xxx.xxx.xxx
    user: rancher
    role: [controlplane,worker,etcd]
    ssh_key_path: ./rancher-2-ha-server.pem
  - address: xxx.xxx.xxx.xxx
    internal_address: xxx.xxx.xxx.xxx
    user: rancher
    role: [controlplane,worker,etcd]
    ssh_key_path: ./rancher-2-ha-server.pem

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

Steps to Reproduce:

  1. run rke etcd snapshot-save --config rancher-cluster.yml
  2. wait for INFO[0037] Finished saving snapshot [rancher-2-ha-backup] on all etcd hosts
  3. check /opt/rke/etcd-snapshots

-or-

  1. run rke etcd snapshot-save --config rancher-cluster.yml --name rancher-2-ha-backup --s3 --access-key xxx --secret-key xxx --bucket-name xxx --s3-endpoint s3.amazonaws.com
  2. wait for INFO[0037] Finished saving snapshot [rancher-2-ha-backup] on all etcd hosts
  3. check s3 bucket
  4. download rancher-2-ha-backup.zip file from s3 bucket

Results:

directory is empty

-or-

rancher-2-ha-backup.zip is empty
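To tell whether a downloaded archive is genuinely empty, rather than merely unreadable by one particular extractor, Python's stdlib `zipfile` module gives a scriptable second opinion. The archive below is an in-memory stand-in built for illustration; in practice you would open the downloaded rancher-2-ha-backup.zip instead:

```python
import io
import zipfile

# Stand-in archive built in memory; replace with
# zipfile.ZipFile("rancher-2-ha-backup.zip") to check the real download.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("backup/snapshot.db", b"etcd snapshot payload")

with zipfile.ZipFile(buf) as zf:
    bad = zf.testzip()  # first member with a bad CRC, or None if all intact
    entries = [(i.filename, i.file_size) for i in zf.infolist()]

print("corrupt member:", bad)   # None means every CRC checks out
print("entries:", entries)      # a truly empty zip would print []
```

If this reports entries with nonzero sizes and no corrupt members, the snapshot data is present and the problem lies with the extractor, not the upload.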

**Additional info**

I also observed this behavior with v0.2.4. The Rancher version is rancher-latest 2.2.7.
There is no additional --debug flag available for this command, and all rke output indicates that everything completed successfully.
I followed the steps in the documentation here: https://rancher.com/docs/rancher/v2.x/en/backups/backups/ha-backups/#option-b-one-time-snapshots

@deniseschannon deniseschannon added this to the v0.3.0 milestone Aug 16, 2019
@sowmyav27

sowmyav27 commented Aug 16, 2019

Validated with v0.2.7.

The nodes used Ubuntu 16.04 LTS images.
rancher-cluster.yml

nodes:
  - address: x.x.x.x
    internal_address: x.x.x.x
    user: ubuntu
    role: [controlplane,etcd,worker]
    ssh_key_path: <path-to-key>
  - address: x.x.x.x
    internal_address: x.x.x.x
    user: ubuntu
    role: [controlplane,etcd,worker]
    ssh_key_path: <path-to-key>
  - address: x.x.x.x
    internal_address: x.x.x.x
    user: ubuntu
    role: [controlplane,etcd,worker]
    ssh_key_path: <path-to-key>

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
  • Run rke up; the cluster is built successfully.
  • ./rke etcd snapshot-save --config rancher-cluster.yml: local snapshots are taken successfully and can be seen on the etcd nodes in the /opt/rke/etcd-snapshots directory.
  • ./rke etcd snapshot-save --config rancher-cluster.yml --name sowmya-snapshots2 --s3 --access-key xxxxx --secret-key xxxxx --bucket-name xxxx --s3-endpoint xxxx: snapshots are saved to the S3 bucket.

@krumware
Author

krumware commented Aug 16, 2019

It may be worth clarifying that the nodes are RancherOS. Also, the .zip is created in S3, but it has no file contents.

Is there a way to opt in to a more verbose debug/logging level?

@sowmyav27

sowmyav27 commented Aug 16, 2019

Verified on a RancherOS instance.
rke v0.2.7
image - rancheros-v1.5.0-hvm-1

  • Able to take local snapshots and S3 snapshots.
  • .zip files are created; unzipping one yields a folder containing the corresponding snapshot file.
  • Able to restore from the snapshot:
    ./rke etcd snapshot-restore --config rancher-cluster.yml --name snapshot-name3 --s3 --access-key xxx --secret-key xxx --bucket-name xxx --s3-endpoint s3.amazonaws.com

@deniseschannon deniseschannon removed this from the v0.3.0 milestone Aug 17, 2019
@krumware
Author

I'd like to help track this down, or at least narrow it to something environmental. Is there any additional information I can provide, or a way to change the logging level?

@krumware
Author

@sowmyav27 updated information:
The file size of the zip file in AWS S3 is approximately 20.5 MB.
When I download it on a Windows machine and attempt to extract it, it appears to be empty: the default Windows extraction tool reports the zip file as corrupt, and browsing the archive shows no contents. However, a third-party extraction tool such as 7-Zip extracts it properly, with the backup folder and the file included inside.
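One possible explanation for the Explorer/7-Zip disagreement (an assumption, not confirmed in this thread) is that the uploader writes the archive as a stream, so entry sizes end up in trailing data descriptors (general-purpose flag bit 3); some built-in extractors handle such "streamed" zips poorly even when the central directory is intact. The sketch below shows how to detect that flag with the stdlib; the write-only wrapper only exists to force Python's `zipfile` to produce a streamed archive for demonstration:

```python
import io
import zipfile

class WriteOnlyStream:
    """Exposes only write()/flush(), so zipfile treats it as unseekable
    and writes streamed entries with trailing data descriptors."""
    def __init__(self):
        self.buf = io.BytesIO()
    def write(self, b):
        return self.buf.write(b)
    def flush(self):
        pass

# Produce a demonstration archive containing streamed entries.
stream = WriteOnlyStream()
with zipfile.ZipFile(stream, "w") as zf:
    zf.writestr("backup/snapshot.db", b"etcd snapshot payload")

# Inspect an archive (here the demo; in practice the downloaded .zip)
# for flag bit 3 on each entry.
with zipfile.ZipFile(io.BytesIO(stream.buf.getvalue())) as zf:
    streamed = {i.filename: bool(i.flag_bits & 0x08) for i in zf.infolist()}

print(streamed)  # True means sizes live in a data descriptor after the data
```

Running the same inspection against the real rancher-2-ha-backup.zip would show whether its entries are streamed, which could then be reported upstream as a concrete reproduction detail.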
