pool fails to create #572
Comments
@joshuacox -- Can you also provide the
@kmova, you bet:
PS - it's only 6m old now, because I just rebuilt the cluster and tried fresh (same results).
@kmova I kind of feel like I missed a step by following https://docs.openebs.io/docs/next/configurepools.html#auto-mode after the install. Or perhaps the disks are full or something, but checking on all the nodes, they all have over 50 gigs free.
@joshuacox -- is it the above, or do you have additional block devices attached to the Kubernetes nodes that are not getting detected by NDM? Check out the ... When a cStor SPC is configured with ... I am inclining towards the docs being misleading; they need to clearly specify that cStor pools can be used only when the Kubernetes nodes have additional block devices available.
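For readers following along, checking what NDM has actually detected looks something like this; the exact resource name depends on the OpenEBS release:

```sh
# Devices discovered by the Node Disk Manager (NDM)
kubectl get disks                      # OpenEBS 0.8.x-era releases
kubectl get blockdevices -n openebs    # later NDM releases

# The NDM daemonset pods themselves
kubectl get pods -n openebs | grep -i ndm
```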
@kmova indeed, I was assuming that under the #auto-mode instructions more sparse volumes were created. But some documentation on different production setups would be very interesting, ranging from something simple like another Linux host as an iSCSI target, to something more complicated like FreeNAS, or even a hardware SAN like a Dell Compellent.
I will re-open and move this issue to the docs, so we can fix this with examples. As always, thank you @joshuacox for helping make the product better!
@joshuacox I will try to simplify the description of cStor pool creation in the documentation and will send you the updated page once it is reflected there. Thank you for your feedback. Hope you are now able to provision a PVC using the cStor sparse pool. Please feel free to share questions/suggestions through GitHub or in the OpenEBS community (https://openebs.io/join-our-community).
@ranjithwingrider as a bare minimum, on one of the nodes in the cluster I did something like the setup sketched below.
After that I now get a new entry in the list of disks that NDM detects.
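A minimal sketch of that kind of node-side setup, assuming the worker is a libvirt/KVM guest; the domain name, image path, and device target below are placeholders rather than the values actually used:

```sh
# On the hypervisor: create a raw virtual disk and attach it to the worker VM
qemu-img create -f raw /var/lib/libvirt/images/worker-1-extra.img 100G
virsh attach-disk worker-1 /var/lib/libvirt/images/worker-1-extra.img vdb --persistent

# NDM on that node should then discover the new device and create a
# corresponding resource, visible via the kubectl checks noted above.
```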
@kmova @ranjithwingrider is it possible to have OpenEBS use an external SAN over Fibre Channel directly as an iSCSI target?
@joshuacox Yes, it is possible, I guess. It is mentioned in #487. I will confirm with the respective team.
@joshuacox Yes, it is possible.
Let me rephrase the question a bit: is it possible for me to feed OpenEBS the address of an iSCSI target, and perhaps some CHAP creds, and have OpenEBS use that iSCSI target directly? In addition to the above method that adds a virtual disk directly to a node, I also have a FreeNAS VM running with a similar disk attached to it, which, if someone got here via Google and is attempting to follow along, would look something like this.
Now here is where I'm a bit confused: I want to give OpenEBS the target address/hostname/IP and some CHAP info. Following the documentation links that @ranjithwingrider has given above leads me to believe that I am supposed to log into a node on the cluster and set up the iSCSI connection from there.
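For anyone following along, the node-side open-iscsi steps generally look like this; the portal address and IQN are placeholders, not values from this setup:

```sh
# Discover targets exposed by the external portal (placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

# Log in to a discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:tank -p 192.168.1.50:3260 --login

# The target should then appear as a local block device on the node
lsblk
```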
However, this results in:
Or am I misunderstanding what is going on with OpenEBS? What I want is a single iSCSI connection, not two daisy-chained, to minimize latency, etc. Am I overthinking this?
@joshuacox -- maybe the following will help clarify this. I see two options:

(a) Application Pod -> PV (external iSCSI target). In this case, OpenEBS just helps to create a PV by filling in the details of the iSCSI target that were provided to it via some configuration. If the Application Pod moves from one node to another, the associated external target is disconnected from the old node and reconnected to the new node.

(b) Application Pod -> PV (OpenEBS iSCSI target) -> OpenEBS Jiva/cStor Target -> OpenEBS Jiva/cStor Replica -> (external iSCSI target device). In this case, each node is mounted with multiple external disk targets. OpenEBS treats these external targets as local disks attached to the nodes. From within the cluster, one or more OpenEBS volumes can be created that effectively save the data onto the external targets. Here the external targets are pinned to the nodes where they are connected.

Both of the above options are currently not supported by OpenEBS, but there is an enhancement request for supporting (b): openebs-archive/node-disk-manager#17
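To make option (a) concrete: outside of OpenEBS, a statically provisioned Kubernetes PV against an external iSCSI target looks roughly like this today; the portal, IQN, LUN, and secret name are placeholders:

```yaml
# Plain Kubernetes (not an OpenEBS feature) static PV backed by an external iSCSI target
apiVersion: v1
kind: PersistentVolume
metadata:
  name: external-iscsi-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 192.168.1.50:3260        # placeholder portal
    iqn: iqn.2005-10.org.freenas.ctl:tank  # placeholder IQN
    lun: 0
    fsType: ext4
    readOnly: false
    chapAuthSession: true
    secretRef:
      name: iscsi-chap-secret              # placeholder CHAP secret
```

What option (a) asks for is essentially having OpenEBS fill in these details automatically from its own configuration.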
@kmova I think option B is very interesting from a performance standpoint. However, I think option A is what I'm looking for, as I really want disk activity off of the nodes and to let the SAN carry that burden (including replication). Should I open up a second enhancement request?
@joshuacox - A new request would be great for supporting (a). Also, what is the SAN that you have in mind? I also want to explore whether there are some specific solutions around that SAN itself. For example: https://github.com/embercsi/ember-csi
Description
The pool fails to create upon applying the YAML file.
Expected Behavior
A pool should be created and be readily available for PVC provisioning.
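For reference, pool creation is typically verified via the SPC/CSP resources (the `k get spc` / `k get csp` data mentioned in the edit note below); a minimal check might be:

```sh
# StoragePoolClaim and the cStor pools it should have created
kubectl get spc
kubectl get csp
kubectl get pods -n openebs | grep -i pool
```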
Current Behavior
Steps to Reproduce
kubectl config set-context admin-ctx --user=cluster-admin
kubectl config use-context admin-ctx
helm install --namespace openebs --name openebs stable/openebs
kubectl apply -f test-pool.yaml
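For context, an auto-mode StoragePoolClaim of the kind described on the linked docs page looks roughly like the following; the name, maxPools count, and pool type are illustrative and not necessarily what test-pool.yaml contained:

```yaml
# Illustrative auto-mode SPC (values are examples, not the exact test-pool.yaml)
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk
spec:
  name: cstor-disk
  type: disk
  maxPools: 3
  poolSpec:
    poolType: striped
```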
Your Environment
Output of:
- kubectl get nodes
- kubectl get pods --all-namespaces
- kubectl get services
- kubectl get sc
- kubectl get pv
- kubectl get pvc
- /etc/os-release
- uname -a
EDIT: added the `k get spc` and `k get csp` data to Current Behavior.
EDIT2: added the installation notes to Steps to Reproduce.