This repository has been archived by the owner on Mar 3, 2022. It is now read-only.

pool fails to create #572

Open
joshuacox opened this issue Mar 30, 2019 · 17 comments

@joshuacox
Contributor

Description

The pool fails to create upon applying the YAML file.

Expected Behavior

A pool should be created and be readily available for PVCs.
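
For reference, a cStor pool is normally consumed through a StorageClass that references the SPC, and PVCs are then created against that class. A sketch only; the class name and replica count below are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-test          # illustrative name
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-test-pool"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi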

Current Behavior

k get spc
NAME                AGE
cstor-sparse-pool   17h
cstor-test-pool     15m
k get csp
NAME                     ALLOCATED   FREE    CAPACITY   STATUS    TYPE      AGE
cstor-sparse-pool-4iji   7.96M       9.93G   9.94G      Healthy   striped   17h
cstor-sparse-pool-f8ur   8.06M       9.93G   9.94G      Healthy   striped   17h
cstor-sparse-pool-wru2   9.22M       9.93G   9.94G      Healthy   striped   17h
k describe spc cstor-test-pool
Name:         cstor-test-pool
Namespace:    
Labels:       <none>
Annotations:  cas.openebs.io/config:
                - name: PoolResourceRequests
                  value: |-
                      memory: 5Gi
                - name: PoolResourceLimits
                  value: |-
                      memory: 5Gi
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"openebs.io/v1alpha1","kind":"StoragePoolClaim","metadata":{"annotations":{"cas.openebs.io/config":"- name: PoolResourceRequ...
              openebs.io/spc-lease: {"holder":"","leaderTransition":6}
API Version:  openebs.io/v1alpha1
Kind:         StoragePoolClaim
Metadata:
  Creation Timestamp:  2019-03-30T13:04:11Z
  Generation:          2
  Resource Version:    207629
  Self Link:           /apis/openebs.io/v1alpha1/storagepoolclaims/cstor-test-pool
  UID:                 4f71e9c3-52ec-11e9-a32e-525400e23c11
Spec:
  Capacity:  
  Disks:
    Disk List:    <nil>
  Format:         
  Max Pools:      3
  Min Pools:      0
  Mountpoint:     
  Name:           cstor-test-pool
  Node Selector:  <nil>
  Path:           
  Pool Spec:
    Cache File:         
    Over Provisioning:  false
    Pool Type:          striped
  Type:                 disk
Status:
  Phase:  
Events:   <none>

Steps to Reproduce

kubectl config set-context admin-ctx --user=cluster-admin

kubectl config use-context admin-ctx

helm install --namespace openebs --name openebs stable/openebs

kubectl apply -f test-pool.yaml

cat test-pool.yaml 
---
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-test-pool
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
            memory: 5Gi
      - name: PoolResourceLimits
        value: |-
            memory: 5Gi
spec:
  name: cstor-test-pool
  type: disk
  maxPools: 3
  poolSpec:
    poolType: striped
---

Your Environment

  • kubectl get nodes:
NAME                 STATUS   ROLES    AGE   VERSION
test-ingress1   Ready    node     18h   v1.13.5
test-master1    Ready    master   18h   v1.13.5
test-master2    Ready    master   18h   v1.13.5
test-master3    Ready    master   18h   v1.13.5
test-node1      Ready    node     18h   v1.13.5
test-node2      Ready    node     18h   v1.13.5
test-node3      Ready    node     18h   v1.13.5
  • kubectl get pods --all-namespaces:
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-656gq                    1/1     Running   0          18h
kube-system   coredns-86c58d9df4-k7p9d                    1/1     Running   0          18h
kube-system   kube-apiserver-thalhalla-master1            1/1     Running   0          18h
kube-system   kube-apiserver-thalhalla-master2            1/1     Running   0          18h
kube-system   kube-apiserver-thalhalla-master3            1/1     Running   0          18h
kube-system   kube-controller-manager-thalhalla-master1   1/1     Running   3          18h
kube-system   kube-controller-manager-thalhalla-master2   1/1     Running   1          18h
kube-system   kube-controller-manager-thalhalla-master3   1/1     Running   1          18h
kube-system   kube-flannel-ds-amd64-hvw2h                 1/1     Running   0          18h
kube-system   kube-flannel-ds-amd64-jw77q                 1/1     Running   0          18h
kube-system   kube-flannel-ds-amd64-km4z8                 1/1     Running   0          18h
kube-system   kube-flannel-ds-amd64-lpjhv                 1/1     Running   0          18h
kube-system   kube-flannel-ds-amd64-mfmv7                 1/1     Running   1          18h
kube-system   kube-flannel-ds-amd64-wgw7k                 1/1     Running   1          18h
kube-system   kube-flannel-ds-amd64-xmf46                 1/1     Running   0          18h
kube-system   kube-proxy-4hvmg                            1/1     Running   0          18h
kube-system   kube-proxy-bz4cw                            1/1     Running   0          18h
kube-system   kube-proxy-ffkg8                            1/1     Running   0          18h
kube-system   kube-proxy-h9c28                            1/1     Running   0          18h
kube-system   kube-proxy-hmnmx                            1/1     Running   0          18h
kube-system   kube-proxy-q2cv5                            1/1     Running   0          18h
kube-system   kube-proxy-sv2sd                            1/1     Running   0          18h
kube-system   kube-scheduler-thalhalla-master1            1/1     Running   1          18h
kube-system   kube-scheduler-thalhalla-master2            1/1     Running   0          18h
kube-system   kube-scheduler-thalhalla-master3            1/1     Running   2          18h
kube-system   tiller-deploy-6cf89f5895-sxqg7              1/1     Running   0          17h
openebs       cstor-sparse-pool-4iji-688867f488-9z27l     3/3     Running   0          17h
openebs       cstor-sparse-pool-f8ur-589cd4d867-5qdvl     3/3     Running   0          17h
openebs       cstor-sparse-pool-wru2-6877574766-prq6m     3/3     Running   0          17h
openebs       maya-apiserver-7b567ddf8-ng26f              1/1     Running   0          17h
openebs       openebs-admission-server-589d7b7dc-l5zbg    1/1     Running   0          17h
openebs       openebs-ndm-c4x5j                           1/1     Running   0          17h
openebs       openebs-ndm-lrvnx                           1/1     Running   0          17h
openebs       openebs-ndm-lzl7l                           1/1     Running   0          17h
openebs       openebs-ndm-rng2p                           1/1     Running   0          17h
openebs       openebs-provisioner-c44b4fbfc-lz4xh         1/1     Running   3          17h
openebs       openebs-snapshot-operator-f79dbfd48-wctsl   2/2     Running   2          17h
  • kubectl get services:
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   18h
  • kubectl get sc:
NAME                        PROVISIONER                                                AGE
openebs-cstor-sparse        openebs.io/provisioner-iscsi                               17h
openebs-jiva-default        openebs.io/provisioner-iscsi                               17h
openebs-mongodb             openebs.io/provisioner-iscsi                               17h
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   17h
openebs-standalone          openebs.io/provisioner-iscsi                               17h
openebs-standard            openebs.io/provisioner-iscsi                               17h
  • kubectl get pv:
No resources found.
  • kubectl get pvc:
No resources found.
  • OS (from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • Kernel (from uname -a):
Linux testor 4.4.0-142-generic #168-Ubuntu SMP Wed Jan 16 21:00:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

EDIT: added the k get spc and k get csp output to Current Behavior

EDIT2: added installation notes to Steps to Reproduce

@kmova
Contributor

kmova commented Mar 30, 2019

@joshuacox -- Can you also provide the k get sp output from the above setup?

@joshuacox
Contributor Author

joshuacox commented Mar 30, 2019

@kmova, you bet:

k get sp
NAME                     AGE
cstor-sparse-pool-4acr   6m
cstor-sparse-pool-8vq5   6m
cstor-sparse-pool-954w   5m
default                  6m

P.S. it's only 6m old now because I just rebuilt the cluster and tried fresh (same results).

@kmova
Contributor

kmova commented Mar 30, 2019

kubectl get disks

@joshuacox
Contributor Author

kubectl get disks
NAME                                      SIZE          STATUS   AGE
sparse-171b12ac9ec31a7b49bac882fa815f02   10737418240   Active   47m
sparse-2d70b30a024ff4baba3f5ad65e285679   10737418240   Active   47m
sparse-41f212db2485a3b34f535f80d34850da   10737418240   Active   47m
sparse-c9adfb6002a50dbf84c066bb3b44656d   10737418240   Active   47m

@joshuacox
Contributor Author

@kmova I kind of feel like I missed a step by following:

https://docs.openebs.io/docs/next/configurepools.html#auto-mode

after the install.

Or perhaps the disks are full or something, but checking all the nodes, they all have over 50 GB free.

@kmova
Contributor

kmova commented Mar 31, 2019

@joshuacox -- the above kubectl get disks output indicates that the nodes don't have any additional block devices attached, other than the OS disk. The 50G free: is that on the OS mount?

(or) do you have additional block devices attached to the Kubernetes nodes that are not getting detected by NDM? Check the lsblk output on the node.

When a cStor SPC is configured with type: disk, it waits for additional disks to be available before provisioning the cStor pools.

I am inclined to think the docs are misleading and need to clearly specify that cStor pools can be used only when the Kubernetes nodes have additional block devices available.
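
For reference, once additional block devices show up in kubectl get disks, a manual-mode SPC that pins specific disks would look roughly like this (sketch only; the disk name below is a placeholder to be replaced with the disk CR names reported by kubectl get disks):

apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool             # illustrative name
spec:
  name: cstor-disk-pool
  type: disk
  maxPools: 1
  poolSpec:
    poolType: striped
  disks:
    diskList:
      # placeholder: use the disk CR names from `kubectl get disks`
      - disk-0123456789abcdef0123456789abcdef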

@joshuacox
Contributor Author

@kmova indeed, I was assuming that under the #auto-mode instructions more sparse volumes would be created.

But some documentation on different production setups would be very interesting, ranging from something simple like another Linux host as an iSCSI target, to something more complicated like FreeNAS, or even a hardware SAN like a Dell Compellent.

@kmova
Contributor

kmova commented Mar 31, 2019

I will re-open and move this issue to the docs, so we can fix this with examples. As always, thank you @joshuacox for helping make the product better!

@kmova kmova reopened this Mar 31, 2019
@kmova kmova transferred this issue from openebs/openebs Mar 31, 2019
@ranjithwingrider
Contributor

@joshuacox I will try to simplify the description of cStor pool creation in the documentation and will send you the updated page once it is live. Thank you for your feedback. I hope you are now able to provision a PVC using the cStor sparse pool. Please feel free to ask questions or make suggestions through GitHub or in the OpenEBS community (https://openebs.io/join-our-community).

@ranjithwingrider ranjithwingrider self-assigned this Apr 1, 2019
@joshuacox
Contributor Author

joshuacox commented Apr 1, 2019

@ranjithwingrider as a bare minimum, on one of the nodes in the cluster, something like:

apt -y install tgt               # install the iSCSI target daemon (tgtd)
mkfs.ext4 -L iscsi01 /dev/vdb1   # on bare metal it might look like /dev/sdb
mkdir -p /srv/iscsi01
mount /dev/vdb1 /srv/iscsi01
dd if=/dev/zero of=/srv/iscsi01/disk.img count=0 bs=1 seek=10G   # create a 10G sparse backing file
echo '<target iqn.2019-05.iscsi01.srv:dlp.iscsi01>
  backing-store /srv/iscsi01/disk.img
  initiator-name iqn.2019-05.iscsi01.srv:www.initiator01
  incominguser username password123
</target>' > /etc/tgt/conf.d/iscsi01.conf
systemctl restart tgt

After that, I now get a new entry in:

k get disk
NAME                                      SIZE          STATUS   AGE
disk-4160bd592d17aaa58d4f369e7beb13c0     59055800320   Active   1h
sparse-171b12ac9ec31a7b49bac882fa815f02   10737418240   Active   2d
sparse-2d70b30a024ff4baba3f5ad65e285679   10737418240   Active   2d
sparse-41f212db2485a3b34f535f80d34850da   10737418240   Active   2d
sparse-c9adfb6002a50dbf84c066bb3b44656d   10737418240   Active   2d

@joshuacox
Contributor Author

@kmova @ranjithwingrider is it possible to have OpenEBS use an external SAN over Fibre Channel directly as an iSCSI target?

@ranjithwingrider
Contributor

@joshuacox Yes, I believe it is possible. It is mentioned in #487; I will confirm with the respective team.
The FAQ mentions that "Any block disks available on the node (that can be listed with say lsblk) will be discovered by OpenEBS".
Adding some more details about NDM for reference:
https://docs.openebs.io/docs/next/faq.html#how-openebs-detects-disks-for-creating-cstor-pool
https://docs.openebs.io/docs/next/faq.html#what-must-be-the-disk-mount-status-on-node-for-provisioning-openebs-volume

@ranjithwingrider
Contributor

@joshuacox Yes, it is possible.

@joshuacox
Contributor Author

joshuacox commented Apr 8, 2019

Let me rephrase the question a bit: is it possible for me to feed OpenEBS the address of an iSCSI target, and perhaps some CHAP creds, and have OpenEBS use that iSCSI target directly?

In addition to the above method that adds a virtual disk directly to a node, I also have a FreeNAS VM running with a similar disk attached to it. If someone got here via Google and is attempting to follow along, that side would look something like this:

  1. Add virtual disk to freeNAS
  2. Add new disk to pool in --> http://myfreeNAS.host/ui/storage/pools
  3. Add pool to extent in --> /ui/sharing/iscsi/extent
  4. Add portal --> /ui/sharing/iscsi/portals/add
  5. Define initiators (ALL for now) --> /ui/sharing/iscsi/initiator
  6. Add target specifying portal from above --> /ui/sharing/iscsi/target
  7. Finally add an associated target --> /ui/sharing/iscsi/associatedtarget

Now here is where I'm a bit confused: I want to give OpenEBS the target address/hostname/IP and some CHAP info. Following the documentation links that @ranjithwingrider gave above leads me to believe that I am supposed to log into a node in the cluster and run:

iscsiadm -m discovery -t sendtargets -p myfreeNAS.host 
iscsiadm -m node --login

However, this results in:

  1. iSCSI connection from the container to the node I just performed the iscsiadm commands on
  2. A second iSCSI connection from the node to the freeNAS target to mount the resulting /dev/sdX block device directly onto the node

Or am I misunderstanding what is going on with OpenEBS? What I want is a single iSCSI connection, not two daisy-chained, to minimize latency, etc. Am I overthinking this?

@kmova
Contributor

kmova commented Apr 9, 2019

@joshuacox -- maybe the following will help clarify this. I see two options:

(a) Application Pod -> PV (external iSCSI target). In this case, OpenEBS just helps to create a PV by filling in the details of the iSCSI target that were provided to it via some configuration. If the Application Pod moves from one node to another, the associated external target is also disconnected from the old node and reconnected to the new node. (A rough sketch of what such a PV would look like follows below.)

(b) Application Pod -> PV (OpenEBS iSCSI target) -> OpenEBS Jiva/cStor Target -> OpenEBS Jiva/cStor Replica -> (external iSCSI target device). In this case, each node mounts multiple external disk targets. OpenEBS treats these external targets as local disks attached to the nodes. From within the cluster, one or more OpenEBS volumes can be created that effectively save the data onto the external targets. Here, the external targets are pinned to the nodes where they are connected.

Neither of the above options is currently supported by OpenEBS, but there is an enhancement request for supporting (b): openebs-archive/node-disk-manager#17
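
To make (a) concrete: today that shape can only be built by hand as a statically provisioned Kubernetes iSCSI PV (plain Kubernetes support, not an OpenEBS feature). A sketch, with a hypothetical portal, IQN, and CHAP secret:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: freenas-iscsi-pv                      # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: myfreeNAS.host:3260         # portal from the FreeNAS setup above
    iqn: iqn.2019-05.host.myfreenas:target0   # hypothetical IQN
    lun: 0
    fsType: ext4
    readOnly: false
    chapAuthSession: true
    secretRef:
      name: freenas-chap-secret               # Secret holding node.session.auth.username/password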

@joshuacox
Contributor Author

@kmova I think option B is very interesting from a performance standpoint.

However, I think option A is what I'm looking for, as I really want disk activity off the nodes and to let the SAN carry that burden (including replication). Should I open a second enhancement request?

@kmova
Contributor

kmova commented Apr 16, 2019

@joshuacox - A new request for supporting (a) would be great.

Also, what is the SAN that you have in mind? I want to explore whether there are specific solutions around that SAN itself. For example: https://github.com/embercsi/ember-csi
