
md_iostat_ does not distinguish disks when run with multiple raids #978

Open
oneiros-de opened this issue Mar 31, 2019 · 1 comment

@oneiros-de

I have two md RAID5 arrays:

md0 : active raid5 sdc1[7] sdb1[4] sdd1[6] sda1[5]
      5860505856 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 3/15 pages [12KB], 65536KB chunk

md1 : active raid5 sdc2[2] sdb2[1] sda2[0] sdd2[4]
      2928158976 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

But md_iostat_ does not report the correct member disks for these arrays:

> sudo munin-run md_iostat_md0 config
graph_title IOstat for md0
graph_args --base 1024 -l 0
graph_vlabel blocks / ${graph_period} read (-) / written (+)
graph_category disk
graph_info This graph shows the I/O to and from block devices comprising the raid5 device md0.
graph_order dev8_0_read dev8_0_write  dev8_4_read dev8_4_write  dev8_1_read dev8_1_write  dev8_3_read dev8_3_write  dev9_1_read dev9_1_write 
> sudo munin-run md_iostat_md1 config
graph_title IOstat for md1
graph_args --base 1024 -l 0
graph_vlabel blocks / ${graph_period} read (-) / written (+)
graph_category disk
graph_info This graph shows the I/O to and from block devices comprising the raid5 device md1.
graph_order dev8_0_read dev8_0_write  dev8_4_read dev8_4_write  dev8_1_read dev8_1_write  dev8_3_read dev8_3_write  dev9_0_read dev9_0_write 

The same devices are reported for both arrays. Each graph_order even includes the *other* array's md device: dev9_1 (md1) in the md0 graph and dev9_0 (md0) in the md1 graph.
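For reference, the per-array membership the plugin should arrive at can be derived directly from the /proc/mdstat output above. This is a minimal illustrative sketch (not the plugin's actual Perl code) that parses the mdstat text shown in this report and maps each array to its own member partitions:

```python
import re

# Sample /proc/mdstat content, copied from this issue (assumption: standard mdstat format).
MDSTAT = """\
md0 : active raid5 sdc1[7] sdb1[4] sdd1[6] sda1[5]
      5860505856 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 3/15 pages [12KB], 65536KB chunk

md1 : active raid5 sdc2[2] sdb2[1] sda2[0] sdd2[4]
      2928158976 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk
"""

def members(mdstat: str) -> dict:
    """Map each md array to its own member partitions (not a global list)."""
    result = {}
    for line in mdstat.splitlines():
        m = re.match(r"^(md\d+) : active \S+ (.+)$", line)
        if m:
            # Each member looks like "sdc1[7]"; strip the role index (and any flags).
            result[m.group(1)] = sorted(
                re.sub(r"\[\d+\]\S*", "", dev) for dev in m.group(2).split()
            )
    return result

print(members(MDSTAT))
# md0 -> sda1, sdb1, sdc1, sdd1; md1 -> sda2, sdb2, sdc2, sdd2
```

Alternatively, on a live system the kernel already exposes exactly this mapping in /sys/block/mdX/slaves, one entry per member device, so no mdstat parsing would be needed at all.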

@github-actions

Stale issue message
