Memory Leaks Detected #4391
Looking at fuse-bridge.c, it seems to indicate that the iobuf allocated there (glusterfs/xlators/mount/fuse/src/fuse-bridge.c, line 6156 at commit 2060330) never gets GF_FREE()'d afterwards. iobuf_unref() gets called, but there is no indicator that it frees the memory (glusterfs/libglusterfs/src/iobuf.c, lines 627 to 640 at commit 2060330).
I am a user of GlusterFS and not too familiar with the internal implementation, so can any core maintainer check whether this is right?
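For reference, a minimal sketch of the pattern being described, reconstructed from the ASan stack trace further down rather than copied from fuse-bridge.c (the ctx->iobuf_pool expression and the surrounding fragment are assumptions):

```c
/* Sketch only, not the verbatim fuse-bridge.c source: fuse_thread_proc()
 * takes a buffer from the iobuf pool and later drops its reference.
 * GF_FREE() is never called on it, so whether the memory is released
 * depends entirely on what iobuf_unref() does when the refcount
 * reaches zero. */
iobuf = iobuf_get(ctx->iobuf_pool); /* allocate the request buffer */
/* ... read the FUSE request into the buffer and dispatch it ... */
iobuf_unref(iobuf);                 /* drop this thread's reference */
```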
In case a block is allocated via iobuf_get_from_small(), the iobuf_arena is NULL, and iobuf_put() already calls iobuf_free() when iobuf_arena is NULL, which frees the iobuf, so I don't think it is a leak.
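A self-contained sketch of that release path, paraphrased from the explanation above (the stand-in types and field layout are assumptions, not the verbatim iobuf.c source):

```c
#include <stdlib.h>

/* Minimal stand-ins for the real glusterfs types; illustration only. */
struct iobuf_arena; /* pool-backed arena (opaque here) */
struct iobuf {
    struct iobuf_arena *iobuf_arena; /* NULL for standalone allocations */
    void *ptr;                       /* the data block */
};

static void
iobuf_free(struct iobuf *iobuf)
{
    free(iobuf->ptr); /* release the data block */
    free(iobuf);      /* release the iobuf header itself */
}

/* When the last reference is dropped, iobuf_unref() ends up calling
 * iobuf_put(). A buffer that came from iobuf_get_from_small() has no
 * arena, so it is freed outright instead of being returned to a pool. */
static void
iobuf_put(struct iobuf *iobuf)
{
    if (!iobuf->iobuf_arena) {
        iobuf_free(iobuf); /* standalone allocation: freed here */
        return;
    }
    /* arena-backed buffer: returned to its arena's free list
     * (elided; not relevant to the reported leak) */
}
```

If that is accurate, the buffer from iobuf_get_from_small() is released once the last iobuf_unref() runs, so the report below would have to come from a reference that is never dropped rather than from a missing GF_FREE().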
Description of problem:
There is a memory leak reported by ASan after booting the file system and then shutting it down, regardless of whether any user operations were performed.
The exact command to reproduce the issue:
Boot the file system, wait a few seconds, and then shut down.
The full output of the command that failed:
==405==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 131188 byte(s) in 1 object(s) allocated from:
#0 0x4a046d in malloc (/usr/local/sbin/glusterfs+0x4a046d)
#1 0x7f2fdae47e1e in __gf_malloc /root/glusterfs/libglusterfs/src/mem-pool.c:231:11
#2 0x7f2fdae5362e in iobuf_get_from_small /root/glusterfs/libglusterfs/src/iobuf.c:451:13
#3 0x7f2fdae5362e in iobuf_get2 /root/glusterfs/libglusterfs/src/iobuf.c:482:17
#4 0x7f2fdae543b6 in iobuf_get /root/glusterfs/libglusterfs/src/iobuf.c:556:13
#5 0x7f2fd823d144 in fuse_thread_proc /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:6150:17
#6 0x7f2fdaaceea6 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x7ea6)
SUMMARY: AddressSanitizer: 131188 byte(s) leaked in 1 allocation(s).
Expected results:
Should not detect memory leaks.
Mandatory info:
- The output of the gluster volume info command:
Volume Name: gv0
Type: Distribute
Volume ID: 5eadb70e-c4c6-4af1-a132-79e0257be33f
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 127.0.1.1:/data/brick0/gv0
Brick2: 127.0.1.1:/data/brick1/gv0
Brick3: 127.0.1.1:/data/brick2/gv0
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
- The output of the gluster volume status command:
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 127.0.1.1:/data/brick0/gv0            58236     0          Y       513
Brick 127.0.1.1:/data/brick1/gv0            49982     0          Y       528
Brick 127.0.1.1:/data/brick2/gv0            50168     0          Y       543
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
- The output of the gluster volume heal command:
Launching heal operation to perform index self heal on volume gv0 has been unsuccessful:
Self-heal-daemon is disabled. Heal will not be triggered on volume gv0
- Provide logs present on the following locations of client and server nodes:
/var/log/glusterfs/
- Is there any crash? Provide the backtrace and coredump:
No.
Additional info:
- The operating system / glusterfs version:
Linux kernel version: 6.2.0
OS version: Debian 11.8
GlusterFS version: 11.1