Description
What steps did you take and what happened:
When creating a PVC as a clone from a snapshot of another PVC, the clone command never gets sent.
ZFS command log:
2024-05-19.17:46:29 zfs create -o quota=274877906944 -o recordsize=64k -o dedup=off -o compression=zstd-6 -o encryption=off -o mountpoint=legacy NVME/app-pvc/pvc-b44a3c8a-4cae-4d63-aed1-fafe853f8e8b
2024-05-19.17:46:29 zfs create -o quota=1073741824 -o recordsize=64k -o dedup=off -o compression=zstd-6 -o encryption=off -o mountpoint=legacy NVME/app-pvc/pvc-176eff42-3b84-4ce6-8fe9-a06192320869
2024-05-19.17:46:52 zfs snapshot NVME/app-pvc/pvc-b44a3c8a-4cae-4d63-aed1-fafe853f8e8b@snapshot-08240c13-7fcb-4425-838d-79eacfc27407
2024-05-19.18:57:04 zfs clone NVME/app-pvc/pvc-b44a3c8a-4cae-4d63-aed1-fafe853f8e8b@snapshot-08240c13-7fcb-4425-838d-79eacfc27407 NVME/app-pvc/testclone
Notice "testclone", which was run manually; the log never showed any attempt at running the clone command, successful or otherwise.
What did you expect to happen:
It should at least send the clone command, whether it fails or succeeds.
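For reference, this is the kind of manifest that triggers the problem: a PVC restored from a VolumeSnapshot via the standard CSI `dataSource` mechanism. The names here (`clone-pvc`, `app-snapshot`, `openebs-zfspv`) are placeholders for illustration, not the actual objects from the cluster above:

```shell
# Sketch of a reproducing manifest; all names below are hypothetical.
# Printed rather than applied, so it can be reviewed before use.
MANIFEST=$(cat <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-pvc
spec:
  storageClassName: openebs-zfspv
  dataSource:
    name: app-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Gi
EOF
)
echo "$MANIFEST"
# To apply: echo "$MANIFEST" | kubectl apply -f -
```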
The output of the following commands will help us better understand what's going on:
- All OpenZFS pods and containers are running, healthy, on the latest version, and able to create normal PVC-defined datasets.
- zfsvolume CR for the origin PVC is present
- zfssnapshot CR for the origin PVC is present
- ZFS snapshot itself for the origin PVC is present
- ZFS snapshot itself for the origin PVC can be cloned manually
- zfsvolume CR for the target PVC is present
- dataset for the target PVC is missing
- clone command never sent
zfsvolume CR for the target PVC:
Name: pvc-095889bb-2da5-4b40-838e-d54c499b2ab0
Annotations: <none>
API Version: zfs.openebs.io/v1
Kind: ZFSVolume
Spec:
Capacity: 274877906944
Compression: zstd-6
Dedup: off
Encryption: off
Fs Type: zfs
Owner Node ID: ix-truenas
Pool Name: NVME/app-pvc
Recordsize: 64k
Shared: yes
Snapname: pvc-b44a3c8a-4cae-4d63-aed1-fafe853f8e8b@snapshot-08240c13-7fcb-4425-838d-79eacfc27407
Thin Provision: yes
Volume Type: DATASET
Status:
State: Failed
Events: <none>
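To make the missing step concrete, this is the clone command the node plugin would be expected to issue for this CR, assembled from the Pool Name and Snapname fields above. This is a sketch of the equivalent manual command (the same shape as the manual `testclone` run in the log), not the plugin's actual code path:

```shell
# Assemble the expected clone command from the CR fields (values copied
# from the zfsvolume CR in this report).
POOL="NVME/app-pvc"
SNAP="pvc-b44a3c8a-4cae-4d63-aed1-fafe853f8e8b@snapshot-08240c13-7fcb-4425-838d-79eacfc27407"
TARGET="pvc-095889bb-2da5-4b40-838e-d54c499b2ab0"
CMD="zfs clone ${POOL}/${SNAP} ${POOL}/${TARGET}"
echo "$CMD"
```

Running this manually succeeds, which is what makes the absence of any attempt in the ZFS command log notable.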
The zfsvolume CR for the origin PVC is all correct.
target and origin share the same (known-good) storageClass
kubectl logs -f openebs-zfs-controller-f78f7467c-blr7q -n openebs -c openebs-zfs-plugin
Gist logs are linked below; highlights:
The error is just the dataset-existence check failing after creation of the ZFSVolume CR. No descriptive error is output in any of the logs, only the fact that volume creation failed, which is already visible on the zfsvolume CR.
kubectl logs -f openebs-zfs-node-[xxxx] -n openebs -c openebs-zfs-plugin
https://gist.github.com/Ornias1993/9cb23dc0df026233e8d64c74d70bd39a
https://gist.github.com/Ornias1993/991e63fbc41fd68e71a35c2f8f2e3a62
Anything else you would like to add:
Everything else works perfectly fine: other PVC creation works, and all OpenEBS pods and components are healthy.
volumesnapshot objects are present and fine, as well as the ZFS snapshots themselves.
But regardless of all this, at least the clone command should have been sent.
Environment:
- Verified Latest
- Verified Latest
- Platform independent (tested multiple)
- Debian/TrueNAS/and-others