Problem opening volumes on my external Gluster #34

Open
rafalkasa opened this issue Sep 19, 2021 · 10 comments

@rafalkasa

rafalkasa commented Sep 19, 2021

When I run gdash against my server, I get an exception when I try to open the Volumes view in the UI.
In my opinion the problem is that the XML generated on my machine does not include the inodesTotal element:
'inodes_total': int(node_el.find('inodesTotal').text),
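
For context, xml.etree.ElementTree's find() returns None when a child element is absent, and calling .text on None raises exactly the AttributeError in the traceback below. A minimal illustration (the sample XML here is made up for demonstration):

import xml.etree.ElementTree as ET

node_el = ET.fromstring("<node><fsName>btrfs</fsName></node>")
missing = node_el.find('inodesTotal')
print(missing)     # None, because the element is absent from this XML
int(missing.text)  # AttributeError: 'NoneType' object has no attribute 'text'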

[19/Sep/2021:20:28:17] HTTP
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/glustercli/cli/parsers.py", line 268, in _parse_volume_status
    nodes.append(_parse_a_node(node_el))
  File "/usr/local/lib/python3.8/dist-packages/glustercli/cli/parsers.py", line 237, in _parse_a_node
    'inodes_total': int(node_el.find('inodesTotal').text),
AttributeError: 'NoneType' object has no attribute 'text'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/cherrypy/_cprequest.py", line 638, in respond
    self._do_respond(path_info)
  File "/usr/local/lib/python3.8/dist-packages/cherrypy/_cprequest.py", line 697, in _do_respond
    response.body = self.handler()
  File "/usr/local/lib/python3.8/dist-packages/cherrypy/lib/encoding.py", line 223, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/cherrypy/lib/jsontools.py", line 59, in json_handler
    value = cherrypy.serving.request._json_inner_handler(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/cherrypy/_cpdispatch.py", line 54, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/dist-packages/gdash/__main__.py", line 99, in volumes
    return volume.status_detail(group_subvols=True)
  File "/usr/local/lib/python3.8/dist-packages/glustercli/cli/volume.py", line 187, in status_detail
    return parse_volume_status(volume_execute_xml(cmd),
  File "/usr/local/lib/python3.8/dist-packages/glustercli/cli/parsers.py", line 276, in parse_volume_status
    nodes_data = _parse_volume_status(status_data)
  File "/usr/local/lib/python3.8/dist-packages/glustercli/cli/parsers.py", line 270, in _parse_volume_status
    raise GlusterCmdOutputParseError(err)
glustercli.cli.parsers.GlusterCmdOutputParseError: 'NoneType' object has no attribute 'text'
[19/Sep/2021:20:28:17] HTTP
Request Headers:
  Remote-Addr: 192.168.10.19
  HOST: glusterfs1:8080
  CONNECTION: keep-alive
  ACCEPT: application/json, text/plain, */*
  USER-AGENT: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36
  REFERER: http://glusterfs1:8080/volumes
  ACCEPT-ENCODING: gzip, deflate
  ACCEPT-LANGUAGE: en,pl-PL;q=0.9,pl;q=0.8,en-GB;q=0.7,en-US;q=0.6
  COOKIE: session_id=0b9e47c8ce86c8fdbd11aa1af14a9db7f9f9e03e
192.168.10.19 - - [19/Sep/2021:20:28:17] "GET /api/volumes HTTP/1.1" 500 2754 "http://glusterfs1:8080/volumes" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36"

gluster vol status test-volume detail --xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>test-volume</volName>
        <nodeCount>2</nodeCount>
        <node>
          <hostname>glusterfs1</hostname>
          <path>/gluster/test</path>
          <peerid>8e119499-ab8d-4715-bace-2f16bfe23293</peerid>
          <status>1</status>
          <port>49159</port>
          <ports>
            <tcp>49159</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>6014</pid>
          <sizeTotal>869729808384</sizeTotal>
          <sizeFree>703918551040</sizeFree>
          <device>/dev/sdb1</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,relatime,compress-force=lzo,space_cache,subvolid=5,subvol=/</mntOptions>
          <fsName>btrfs</fsName>
          <inodeSize>btrfs</inodeSize>
        </node>
        <node>
          <hostname>glusterfs2</hostname>
          <path>/gluster/test</path>
          <peerid>8f5ef325-5a77-473b-8d5f-b2258440ac58</peerid>
          <status>1</status>
          <port>49159</port>
          <ports>
            <tcp>49159</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>108320</pid>
          <sizeTotal>869729808384</sizeTotal>
          <sizeFree>714670706688</sizeFree>
          <device>/dev/sdb1</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,relatime,compress-force=lzo,space_cache,subvolid=5,subvol=/</mntOptions>
          <fsName>btrfs</fsName>
          <inodeSize>btrfs</inodeSize>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Below are the versions of the installed glusterfs components (apt list --installed | grep gluster):

glusterfs-client/focal,now 9.3-ubuntu1~focal1 amd64 [installed]
glusterfs-common/focal,now 9.3-ubuntu1~focal1 amd64 [installed]
glusterfs-server/focal,now 9.3-ubuntu1~focal1 amd64 [installed]
libglusterd0/focal,now 9.3-ubuntu1~focal1 amd64 [installed,automatic]
libglusterfs0/focal,now 9.3-ubuntu1~focal1 amd64 [installed,automatic]
@alioualarbi

alioualarbi commented Oct 5, 2021

Facing a 500 internal error for gluster volume status

Hello,
I'm having the same problem: a 500 internal error. I'm using glustercli==0.8.0.

Volume status

gluster volume status

Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.1.4:/gluster-storage          49152     0          Y       2089
Brick 192.168.1.5:/gluster-storage          49152     0          Y       20242
Self-heal Daemon on localhost               N/A       N/A        Y       2109
Self-heal Daemon on 192.168.1.5             N/A       N/A        Y       20265

XML output

gluster vol status test-volume detail --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>eq-volume</volName>
        <nodeCount>2</nodeCount>
        <node>
          <hostname>192.168.1.4</hostname>
          <path>/gluster-storage</path>
          <peerid>d6d8ad93-53fd-4cd0-8f14-441049bf658d</peerid>
          <status>1</status>
          <port>49152</port>
          <ports>
            <tcp>49152</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2089</pid>
          <sizeTotal>52775301120</sizeTotal>
          <sizeFree>49865756672</sizeFree>
          <device>/dev/sda1</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,relatime,data=ordered</mntOptions>
          <fsName>ext4</fsName>
        </node>
        <node>
          <hostname>192.168.1.5</hostname>
          <path>/gluster-storage</path>
          <peerid>efb207cc-4b61-4d0b-a011-f96039d3de86</peerid>
          <status>1</status>
          <port>49152</port>
          <ports>
            <tcp>49152</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>20242</pid>
          <sizeTotal>52775301120</sizeTotal>
          <sizeFree>51083567104</sizeFree>
          <device>/dev/sda1</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,relatime,data=ordered</mntOptions>
          <fsName>ext4</fsName>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

GlusterFS version on Debian 9 (stretch):

glusterfs-client/oldoldstable,now 3.8.8-1 amd64 [installed]
glusterfs-common/oldoldstable,now 3.8.8-1 amd64 [installed,automatic]
glusterfs-server/oldoldstable,now 3.8.8-1 amd64 [installed]

Just calling the info and status_detail functions from glustercli gives the error below:
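
For reference, a minimal script along these lines triggers the same parse error (a hypothetical sketch using glustercli's public API rather than my actual gluster-volume.py; the volume name is an assumption):

from glustercli.cli import volume

# Both helpers shell out to the gluster CLI and parse its XML output;
# status_detail() is the call that reaches the inodesTotal parsing.
print(volume.info("eq-volume"))
print(volume.status_detail("eq-volume"))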

root@gluster-master:~# python3 gluster-volume.py 
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/glustercli/cli/parsers.py", line 268, in _parse_volume_status
    nodes.append(_parse_a_node(node_el))
  File "/usr/local/lib/python3.5/dist-packages/glustercli/cli/parsers.py", line 237, in _parse_a_node
    'inodes_total': int(node_el.find('inodesTotal').text),
AttributeError: 'NoneType' object has no attribute 'text'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "gluster-volume.py", line 61, in <module>
    status_detail()
  File "gluster-volume.py", line 58, in status_detail
    return parse_volume_status(volume_execute_xml(cmd),info(volname),group_subvols=group_subvols)
  File "/usr/local/lib/python3.5/dist-packages/glustercli/cli/parsers.py", line 276, in parse_volume_status
    nodes_data = _parse_volume_status(status_data)
  File "/usr/local/lib/python3.5/dist-packages/glustercli/cli/parsers.py", line 270, in _parse_volume_status
    raise GlusterCmdOutputParseError(err)
glustercli.cli.parsers.GlusterCmdOutputParseError: 'NoneType' object has no attribute 'text'

I'm using gdash to get the Gluster information. Any advice?
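
In the meantime, a stopgap that works for a standalone script is to catch the parse error at the call site (a sketch, not gdash's actual behavior):

from glustercli.cli import volume
from glustercli.cli.parsers import GlusterCmdOutputParseError

try:
    detail = volume.status_detail(group_subvols=True)
except GlusterCmdOutputParseError as err:
    detail = []  # fall back to an empty status rather than crashing with a 500
    print("volume status parse failed:", err)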

@aravindavk
Member

Which version of GlusterFS are you using? It looks like the latest version includes this key in the XML.

@alioualarbi

Here is the version:

sudo glusterfs --version
glusterfs 3.8.8 built on Jan 11 2017 14:07:11
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

@rafalkasa
Author

In my case it is:

sudo glusterfs --version
glusterfs 9.3
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

Below are the versions of the installed glusterfs components (apt list --installed | grep gluster):

glusterfs-client/focal,now 9.3-ubuntu1~focal1 amd64 [installed]
glusterfs-common/focal,now 9.3-ubuntu1~focal1 amd64 [installed]
glusterfs-server/focal,now 9.3-ubuntu1~focal1 amd64 [installed]
libglusterd0/focal,now 9.3-ubuntu1~focal1 amd64 [installed,automatic]
libglusterfs0/focal,now 9.3-ubuntu1~focal1 amd64 [installed,automatic]

@alioualarbi

@aravindavk, any progress?

@projx

projx commented Nov 7, 2021

In some situations it looks like inodes_free and inodes_total are not returned by Gluster, but only on certain nodes. For example, I run a cluster consisting of 3 VMs and 1 Docker container. The issue only arises with data generated from the container (running on my Synology); the other nodes return it perfectly fine. That is where I see the exact issue the OP raised, and I suspect they are doing the same as me:

          <fsName>btrfs</fsName>
          <inodeSize>btrfs</inodeSize>

The fix for this is simply to add checks to glustercli/cli/parsers.py. I've taken a stab at fixing it; if you want to give it a try, you should be able to just drop this in: https://github.com/projx/glustercli-python

See parsers.py line 232; I've altered it to this:

def _check_node_value(node_el, key, type, default_value):
    # Return the child element's text converted with `type`;
    # if the element is missing, fall back to the converted default.
    value = node_el.find(key)
    if value is not None:
        return type(value.text)
    return type(default_value)

def _parse_a_node(node_el):
    name = (node_el.find('hostname').text + ":" + node_el.find('path').text)
    online = node_el.find('status').text == "1" or False
    if not online:
        # if the node where the brick exists isn't
        # online then no reason to continue as the
        # caller of this method will populate "default"
        # information
        return {'name': name, 'online': online}

    value = {
        'name': name,
        'uuid': node_el.find('peerid').text,
        'online': online,
        'pid': node_el.find('pid').text,
        'size_total': int(node_el.find('sizeTotal').text),
        'size_free': int(node_el.find('sizeFree').text),
        'inodes_total': _check_node_value(node_el, 'inodesTotal', int, 0),   
        'inodes_free': _check_node_value(node_el, 'inodesFree', int, 0),
        'device': node_el.find('device').text,
        'block_size': node_el.find('blockSize').text,
        'mnt_options': node_el.find('mntOptions').text,
        'fs_name': node_el.find('fsName').text,
    }

    return value
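
A quick sanity check of the helper against a node element that lacks the inode fields (the sample XML is illustrative):

import xml.etree.ElementTree as ET

node_el = ET.fromstring("<node><fsName>btrfs</fsName></node>")
assert _check_node_value(node_el, 'inodesTotal', int, 0) == 0           # missing -> default
assert _check_node_value(node_el, 'fsName', str, 'unknown') == 'btrfs'  # present -> parsed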

@aravindavk
Member

@projx Changes look good until the issue is fixed on the Gluster side. Are you planning to send it as a PR?

Thanks

@projx

projx commented Nov 7, 2021

Assuming you run that repo: if you want it, sure, otherwise just feel free to copy and paste it.

Otherwise, yes, I can send a PR and wait to see if it gets picked up.

@aravindavk
Member

If you want it, sure, otherwise just feel free to copy and paste it.

Please send PR. I will merge it and make a new release.

@aravindavk
Member

Sent a PR to fix the inodeSize issue: gluster/glusterfs#2937. Still not sure about the missing inodesTotal and other fields.

Is this issue specific to the btrfs backend, or does it happen with other filesystems as well?
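
For what it's worth, btrfs allocates inodes dynamically and reports zero inode counts through statfs (df -i shows 0 on btrfs), which would explain why the fields are missing only on btrfs bricks. A quick way to check on a brick host (the brick path is taken from the OP's volume; adjust as needed):

import os

st = os.statvfs("/gluster/test")
print(st.f_files, st.f_ffree)  # typically both 0 on btrfs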
