
csi-sanity panics instead of reporting proper failure if driver fails CreateVolume from Snapshot #569

ebblake opened this issue Jan 20, 2025 · 0 comments · May be fixed by #570

ebblake commented Jan 20, 2025

I'm incrementally developing a CSI driver. So far, I have the snapshot creation/deletion side of CREATE_DELETE_SNAPSHOT working, but not yet CreateVolume from an existing snapshot source, so my code intentionally fails with the gRPC error UNIMPLEMENTED if it detects an attempt to create a volume from a snapshot source (a sketch of that guard follows the trace below). I was hoping to use csi-sanity to see how close my partial implementation was to conforming, and was surprised to see the test panic rather than report a proper failure:

Controller Service [Controller Server] CreateVolume should create volume from an existing source snapshot
/csi-sanity/csi-test-5.3.0/pkg/sanity/controller.go:553
  STEP: reusing connection to CSI driver at /var/lib/kubelet/plugins/kubesan-node/socket @ 01/20/25 21:49:35.745
  STEP: reusing connection to CSI driver controller at /var/lib/kubelet/plugins/kubesan-controller/socket @ 01/20/25 21:49:35.745
  STEP: creating mount and staging directories @ 01/20/25 21:49:35.745
  STEP: creating a volume @ 01/20/25 21:49:35.746
  STEP: creating a snapshot from the volume @ 01/20/25 21:49:38.452
  STEP: creating a volume from source snapshot @ 01/20/25 21:49:41.121
  [PANICKED] in [It] - /usr/local/go/src/runtime/panic.go:261 @ 01/20/25 21:49:41.124
I0120 21:49:41.124952   13835 resources.go:301] deleting snapshot ID kubesan-snapshot-f8fa283110a8f6e8edcc00e66bc271be
• [PANICKED] [17.018 seconds]
Controller Service [Controller Server] CreateVolume [It] should create volume from an existing source snapshot
/csi-sanity/csi-test-5.3.0/pkg/sanity/controller.go:553

  [PANICKED] Test Panicked
  In [It] at: /usr/local/go/src/runtime/panic.go:261 @ 01/20/25 21:49:41.124

  runtime error: invalid memory address or nil pointer dereference

  Full Stack Trace
    github.com/kubernetes-csi/csi-test/v5/pkg/sanity.init.func1.6.9()
        /csi-sanity/csi-test-5.3.0/pkg/sanity/controller.go:584 +0x3b4
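
For reference, the intentional failure mentioned above is a guard of roughly the following shape. This is only a sketch: controllerServer and createBlankVolume are illustrative placeholders, not the actual kubesan code.

    import (
    	"context"

    	"github.com/container-storage-interface/spec/lib/go/csi"
    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // CreateVolume rejects snapshot-sourced requests with UNIMPLEMENTED
    // until that part of the driver has been written.
    func (s *controllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
    	// The protobuf getters are nil-safe, so this chain works even
    	// when no content source was specified at all.
    	if req.GetVolumeContentSource().GetSnapshot() != nil {
    		return nil, status.Error(codes.Unimplemented, "CreateVolume from a snapshot source is not yet implemented")
    	}
    	return s.createBlankVolume(ctx, req) // hypothetical helper for the already-working path
    }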

And reading the actual test in question, the cause of the panic is rather obvious:

			vol, err := r.CreateVolume(context.Background(), vol2Req)
			Expect(vol.GetVolume().ContentSource).NotTo(BeNil())
			Expect(err).NotTo(HaveOccurred())

https://github.com/kubernetes-csi/csi-test/blob/master/pkg/sanity/controller.go#L583

Tests should always check Expect(err) PRIOR to any Expect(vol.GetVolume().XXX) call: if the driver returns an (unexpected) error, vol is nil, so vol.GetVolume() returns nil and accessing ContentSource on the nil Volume dereferences a nil pointer.
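
One straightforward fix, presumably what the linked PR does, is simply to swap the two assertions so the error is examined before the response is dereferenced:

			vol, err := r.CreateVolume(context.Background(), vol2Req)
			Expect(err).NotTo(HaveOccurred()) // diagnose an unexpected error first
			Expect(vol.GetVolume().ContentSource).NotTo(BeNil())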

ebblake added a commit to ebblake/csi-test that referenced this issue Jan 20, 2025
If a driver advertises CREATE_DELETE_SNAPSHOT but fails to obey the
part of the spec about allowing CreateVolume from a snapshot (such as
during an incremental development of a driver), the testsuite was
panicking on a nil dereference instead of diagnosing that an
unexpected error was returned.

Fixes: kubernetes-csi#569
Signed-off-by: Eric Blake <[email protected]>
ebblake linked a pull request Jan 20, 2025 that will close this issue