
[BUG]bio lazy free object crash in unstable branch. #1399

Open
wuranxx opened this issue Dec 6, 2024 · 1 comment
wuranxx commented Dec 6, 2024

Describe the bug

The server crashes; the first stack trace in the crash report is the bio lazy-free thread.

To reproduce

I am developing #1133 (PR: #1384).

Before I rebased onto the latest unstable (old commit 32f7541), the Tcl tests and manual command tests passed locally (branch: https://github.com/wuranxx/valkey/tree/cluster-flush-slot-old).

After I rebased onto the latest unstable (6df376d), the command fails locally (branch: https://github.com/wuranxx/valkey/tree/cluster-flush-slot).

Build valkey-server and create a cluster, then run the cluster-flush-slot Tcl test: it succeeds on the old branch and fails on the new one.

make -j4 all-with-unit-tests SERVER_CFLAGS='-Werror' BUILD_TLS=yes
./runtest --single unit/cluster/cluster-flush-slot

I am analyzing the issue, but I am not very familiar with the bio component. Therefore, I am reporting this bug in the hope that a developer with more experience in this area can help confirm the issue.

Expected behavior

The CLUSTER FLUSHSLOT command should succeed.

Additional information

The test run output:

Cleanup: may take some time... OK
Starting test server at port 21079
[ready]: 3396191
Testing unit/cluster/cluster-flush-slot
[ready]: 3396195
[ready]: 3396197
[ready]: 3396192
[ready]: 3396193
[ready]: 3396194
[ready]: 3396196
[ready]: 3396198
[ready]: 3396199
[ready]: 3396200
[ready]: 3396201
[ready]: 3396202
[ready]: 3396205
[ready]: 3396206
[ready]: 3396203
[ready]: 3396204

Logged crash report (pid 3396273):
=== VALKEY BUG REPORT START: Cut & paste starting from here ===
3396273:M 06 Dec 2024 17:56:38.178 # === ASSERTION FAILED ===
3396273:M 06 Dec 2024 17:56:38.178 # ==> cluster_legacy.c:6189 'server.execution_nesting == 0' is not true

------ STACK TRACE ------

3396277 bio_lazy_free
/usr/lib64/libpthread.so.0(pthread_cond_wait+0x1fc)[0x7fadc4306a3c]
src/valkey-server 127.0.0.1:21114 [cluster](bioProcessBackgroundJobs+0x146)[0x4c5e46]
/usr/lib64/libpthread.so.0(+0x8f3b)[0x7fadc4300f3b]
/usr/lib64/libc.so.6(clone+0x40)[0x7fadc4236980]

3396275 bio_close_file
/usr/lib64/libpthread.so.0(pthread_cond_wait+0x1fc)[0x7fadc4306a3c]
src/valkey-server 127.0.0.1:21114 [cluster](bioProcessBackgroundJobs+0x146)[0x4c5e46]
/usr/lib64/libpthread.so.0(+0x8f3b)[0x7fadc4300f3b]
/usr/lib64/libc.so.6(clone+0x40)[0x7fadc4236980]

3396276 bio_aof
/usr/lib64/libpthread.so.0(pthread_cond_wait+0x1fc)[0x7fadc4306a3c]
src/valkey-server 127.0.0.1:21114 [cluster](bioProcessBackgroundJobs+0x146)[0x4c5e46]
/usr/lib64/libpthread.so.0(+0x8f3b)[0x7fadc4300f3b]
/usr/lib64/libc.so.6(clone+0x40)[0x7fadc4236980]

3396273 valkey-server *
src/valkey-server 127.0.0.1:21114 [cluster](delKeysInSlot+0x16b)[0x53d0ab]
src/valkey-server 127.0.0.1:21114 [cluster](clusterCommandFlushslot+0x91)[0x53d1a1]
src/valkey-server 127.0.0.1:21114 [cluster](clusterCommandSpecial+0x53f)[0x5468ef]
src/valkey-server 127.0.0.1:21114 [cluster](clusterCommand+0x189)[0x46d369]
src/valkey-server 127.0.0.1:21114 [cluster](call+0x64c)[0x45f87c]
src/valkey-server 127.0.0.1:21114 [cluster](processCommand+0x7b9)[0x460049]
src/valkey-server 127.0.0.1:21114 [cluster](processCommandAndResetClient+0x1d)[0x508a5d]
src/valkey-server 127.0.0.1:21114 [cluster](processInputBuffer+0x13d)[0x50e34d]
src/valkey-server 127.0.0.1:21114 [cluster](readQueryFromClient+0x47)[0x511937]
src/valkey-server 127.0.0.1:21114 [cluster][0x53245b]
src/valkey-server 127.0.0.1:21114 [cluster](aeProcessEvents+0xe5)[0x5269b5]
src/valkey-server 127.0.0.1:21114 [cluster](aeMain+0x2d)[0x526cdd]
src/valkey-server 127.0.0.1:21114 [cluster](main+0x386)[0x454836]
/usr/lib64/libc.so.6(__libc_start_main+0xe7)[0x7fadc4163c67]
src/valkey-server 127.0.0.1:21114 [cluster](_start+0x2a)[0x45536a]

4/4 expected stacktraces.

------ STACK TRACE DONE ------

------ INFO OUTPUT ------
# Server
redis_version:7.2.4
server_name:valkey
valkey_version:255.255.255
redis_git_sha1:ab74efba
redis_git_dirty:0
redis_build_id:b8ae56f908cd6bb0
server_mode:cluster
os:Linux 4.18.0-147.5.1.6.h1152.eulerosv2r9.x86_64 x86_64
arch_bits:64
monotonic_clock:POSIX clock_gettime
multiplexing_api:epoll
gcc_version:7.3.0
process_id:3396273
process_supervised:no
run_id:c41b278e65cf3ab9051e5b01228723a2b1ce2dfa
tcp_port:21114
server_time_usec:1733478998178078
uptime_in_seconds:11
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:5425750
executable:/opt/wuran/valkey-github/src/valkey-server
config_file:/opt/wuran/valkey-github/./tests/tmp/valkey.conf.3396191.8
io_threads_active:0
availability_zone:
listener0:name=tcp,bind=127.0.0.1,port=21114
listener1:name=unix,bind=/opt/wuran/valkey-github/tests/tmp/server.3396191.7/socket

# Clients
connected_clients:1
cluster_connections:6
maxclients:10000
client_recent_max_input_buffer:20480
client_recent_max_output_buffer:21344
blocked_clients:0
tracking_clients:0
pubsub_clients:0
watching_clients:0
clients_in_timeout_table:0
total_watched_keys:0
total_blocking_keys:0
total_blocking_keys_on_nokey:0

# Memory
used_memory:3189064
used_memory_human:3.04M
used_memory_rss:16052224
used_memory_rss_human:15.31M
used_memory_peak:3189064
used_memory_peak_human:3.04M
used_memory_peak_perc:100.07%
used_memory_overhead:2758268
used_memory_startup:2708720
used_memory_dataset:430796
used_memory_dataset_perc:89.68%
allocator_allocated:3172432
allocator_active:3461120
allocator_resident:9367552
allocator_muzzy:0
total_system_memory:16260825088
total_system_memory_human:15.14G
used_memory_lua:31744
used_memory_vm_eval:31744
used_memory_lua_human:31.00K
used_memory_scripts_eval:0
number_of_cached_scripts:0
number_of_functions:0
number_of_libraries:0
used_memory_vm_functions:33792
used_memory_vm_total:65536
used_memory_vm_total_human:64.00K
used_memory_functions:184
used_memory_scripts:184
used_memory_scripts_human:184B
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.09
allocator_frag_bytes:288688
allocator_rss_ratio:2.71
allocator_rss_bytes:5906432
rss_overhead_ratio:1.71
rss_overhead_bytes:6684672
mem_fragmentation_ratio:5.36
mem_fragmentation_bytes:13057296
mem_not_counted_for_evict:0
mem_replication_backlog:41012
mem_total_replication_buffers:41008
mem_clients_slaves:0
mem_clients_normal:1920
mem_cluster_links:6432
mem_aof_buffer:0
mem_allocator:jemalloc-5.3.0
mem_overhead_db_hashtable_rehashing:0
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:0

# Persistence
loading:0
async_loading:0
current_cow_peak:0
current_cow_size:0
current_cow_size_age:0
current_fork_perc:0.00
current_save_keys_processed:0
current_save_keys_total:0
rdb_changes_since_last_save:2000
rdb_bgsave_in_progress:0
rdb_last_save_time:1733478987
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_saves:0
rdb_last_cow_size:4489216
rdb_last_load_keys_expired:0
rdb_last_load_keys_loaded:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_rewrites:0
aof_rewrites_consecutive_failures:0
aof_last_write_status:ok
aof_last_cow_size:0
module_fork_in_progress:0
module_fork_last_cow_size:0

# Stats
total_connections_received:3
total_commands_processed:1236
instantaneous_ops_per_sec:20
total_net_input_bytes:45047
total_net_output_bytes:190947
total_net_repl_input_bytes:0
total_net_repl_output_bytes:265
instantaneous_input_kbps:0.62
instantaneous_output_kbps:17.78
instantaneous_input_repl_kbps:0.00
instantaneous_output_repl_kbps:0.00
rejected_connections:0
sync_full:1
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:0
evicted_keys:0
evicted_clients:0
evicted_scripts:0
total_eviction_exceeded_time:0
current_eviction_exceeded_time:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
pubsubshard_channels:0
latest_fork_usec:948
total_forks:1
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
total_active_defrag_time:0
current_active_defrag_time:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_error_replies:0
dump_payload_sanitizations:0
total_reads_processed:1237
total_writes_processed:2224
io_threaded_reads_processed:0
io_threaded_writes_processed:0
io_threaded_freed_objects:0
io_threaded_poll_processed:0
io_threaded_total_prefetch_batches:0
io_threaded_total_prefetch_entries:0
client_query_buffer_limit_disconnections:0
client_output_buffer_limit_disconnections:0
reply_buffer_shrinks:2
reply_buffer_expands:0
eventloop_cycles:1962
eventloop_duration_sum:73420
eventloop_duration_cmd_sum:12416
instantaneous_eventloop_cycles_per_sec:91
instantaneous_eventloop_duration_usec:50
acl_access_denied_auth:0
acl_access_denied_cmd:0
acl_access_denied_key:0
acl_access_denied_channel:0

# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=21112,state=online,offset=0,lag=1,type=replica
replicas_waiting_psync:0
master_failover_state:no-failover
master_replid:080a61a22538c602eb5d5122515242cfc4c7624d
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:37927
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:10485760
repl_backlog_first_byte_offset:1
repl_backlog_histlen:37927

# CPU
used_cpu_sys:0.042463
used_cpu_user:0.037358
used_cpu_sys_children:0.002493
used_cpu_user_children:0.000000
used_cpu_sys_main_thread:0.042254
used_cpu_user_main_thread:0.037174

# Modules

# Commandstats
cmdstat_set:calls=1000,usec=1993,usec_per_call=1.99,rejected_calls=0,failed_calls=0
cmdstat_debug:calls=1,usec=9,usec_per_call=9.00,rejected_calls=0,failed_calls=0
cmdstat_cluster|addslotsrange:calls=1,usec=124,usec_per_call=124.00,rejected_calls=0,failed_calls=0
cmdstat_cluster|countkeysinslot:calls=2,usec=4,usec_per_call=2.00,rejected_calls=0,failed_calls=0
cmdstat_cluster|meet:calls=3,usec=97,usec_per_call=32.33,rejected_calls=0,failed_calls=0
cmdstat_cluster|shards:calls=200,usec=7979,usec_per_call=39.90,rejected_calls=0,failed_calls=0
cmdstat_cluster|slots:calls=9,usec=296,usec_per_call=32.89,rejected_calls=0,failed_calls=0
cmdstat_cluster|keyslot:calls=1,usec=1,usec_per_call=1.00,rejected_calls=0,failed_calls=0
cmdstat_cluster|myid:calls=1,usec=2,usec_per_call=2.00,rejected_calls=0,failed_calls=0
cmdstat_cluster|info:calls=1,usec=42,usec_per_call=42.00,rejected_calls=0,failed_calls=0
cmdstat_replconf:calls=14,usec=65,usec_per_call=4.64,rejected_calls=0,failed_calls=0
cmdstat_ping:calls=2,usec=3,usec_per_call=1.50,rejected_calls=0,failed_calls=0
cmdstat_psync:calls=1,usec=1801,usec_per_call=1801.00,rejected_calls=0,failed_calls=0

# Errorstats

# Latencystats
latency_percentiles_usec_set:p50=2.007,p99=6.015,p99.9=10.047
latency_percentiles_usec_debug:p50=9.023,p99=9.023,p99.9=9.023
latency_percentiles_usec_cluster|addslotsrange:p50=124.415,p99=124.415,p99.9=124.415
latency_percentiles_usec_cluster|countkeysinslot:p50=2.007,p99=2.007,p99.9=2.007
latency_percentiles_usec_cluster|meet:p50=34.047,p99=41.215,p99.9=41.215
latency_percentiles_usec_cluster|shards:p50=39.167,p99=58.111,p99.9=61.183
latency_percentiles_usec_cluster|slots:p50=33.023,p99=40.191,p99.9=40.191
latency_percentiles_usec_cluster|keyslot:p50=1.003,p99=1.003,p99.9=1.003
latency_percentiles_usec_cluster|myid:p50=2.007,p99=2.007,p99.9=2.007
latency_percentiles_usec_cluster|info:p50=42.239,p99=42.239,p99.9=42.239
latency_percentiles_usec_replconf:p50=1.003,p99=47.103,p99.9=47.103
latency_percentiles_usec_ping:p50=1.003,p99=2.007,p99.9=2.007
latency_percentiles_usec_psync:p50=1802.239,p99=1802.239,p99.9=1802.239

# Cluster
cluster_enabled:1

# Keyspace

# Cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:4
cluster_size:2
cluster_current_epoch:3
cluster_my_epoch:1
cluster_stats_messages_ping_sent:188
cluster_stats_messages_pong_sent:218
cluster_stats_messages_meet_sent:3
cluster_stats_messages_sent:409
cluster_stats_messages_ping_received:218
cluster_stats_messages_pong_received:193
cluster_stats_messages_received:411
total_cluster_links_buffer_limit_exceeded:0

------ CLUSTER NODES OUTPUT ------
42158f2ff99409791e34cc9251fafadcf200031c 127.0.0.1:21112@31112,,tls-port=0,shard-id=d48153a14f774ccb404d839865563ecb879b9990 slave ccd26e234906487040775fbce52c24c33d5d5258 0 1733478998081 1 connected
dab09a5dca2665fa8faf9cb1faf2d8bc1feb6dd3 127.0.0.1:21113@31113,,tls-port=0,shard-id=4313fb70d9c1e76aaa485f865e2eb55397da4cab master - 0 1733478998081 0 connected 8192-16383
ccd26e234906487040775fbce52c24c33d5d5258 127.0.0.1:21114@31114,,tls-port=0,shard-id=d48153a14f774ccb404d839865563ecb879b9990 myself,master - 0 0 1 connected 0-8191
17890bef2c393d8c152ed0695830292002869fb9 127.0.0.1:21111@31111,,tls-port=0,shard-id=4313fb70d9c1e76aaa485f865e2eb55397da4cab slave dab09a5dca2665fa8faf9cb1faf2d8bc1feb6dd3 0 1733478998081 0 connected

------ CLIENT LIST OUTPUT ------
id=4 addr=127.0.0.1:40287 laddr=127.0.0.1:21114 fd=12 name= age=10 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=0 argv-mem=24 multi-mem=0 rbs=1024 rbp=520 obl=0 oll=0 omem=0 tot-mem=1976 events=r cmd=cluster|flushslot user=default redir=-1 resp=2 lib-name= lib-ver= tot-net-in=44464 tot-net-out=190653 tot-cmds=1219
id=11 addr=127.0.0.1:44852 laddr=127.0.0.1:21114 fd=24 name= age=10 idle=0 flags=S db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=20474 argv-mem=0 multi-mem=0 rbs=1024 rbp=0 obl=0 oll=1 omem=20504 tot-mem=42904 events=r cmd=replconf user=default redir=-1 resp=2 lib-name= lib-ver= tot-net-in=576 tot-net-out=37949 tot-cmds=16

------ CURRENT CLIENT INFO ------
id=4 addr=127.0.0.1:40287 laddr=127.0.0.1:21114 fd=12 name= age=10 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=0 argv-mem=24 multi-mem=0 rbs=1024 rbp=520 obl=0 oll=0 omem=0 tot-mem=1976 events=r cmd=cluster|flushslot user=default redir=-1 resp=2 lib-name= lib-ver= tot-net-in=44464 tot-net-out=190653 tot-cmds=1219
argc: '4'
argv[0]: '"CLUSTER"'
argv[1]: '"FLUSHSLOT"'
argv[2]: '"8141"'
argv[3]: '"SYNC"'

------ EXECUTING CLIENT INFO ------
id=4 addr=127.0.0.1:40287 laddr=127.0.0.1:21114 fd=12 name= age=10 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=0 argv-mem=24 multi-mem=0 rbs=1024 rbp=520 obl=0 oll=0 omem=0 tot-mem=1976 events=r cmd=cluster|flushslot user=default redir=-1 resp=2 lib-name= lib-ver= tot-net-in=44464 tot-net-out=190653 tot-cmds=1219
argc: '4'
argv[0]: '"CLUSTER"'
argv[1]: '"FLUSHSLOT"'
argv[2]: '"8141"'
argv[3]: '"SYNC"'

------ MODULES INFO OUTPUT ------

------ CONFIG DEBUG OUTPUT ------
lazyfree-lazy-user-del yes
repl-diskless-sync yes
dual-channel-replication-enabled no
repl-diskless-load disabled
activedefrag no
lazyfree-lazy-eviction yes
client-query-buffer-limit 1gb
lazyfree-lazy-user-flush yes
lazyfree-lazy-server-del yes
sanitize-dump-payload no
list-compress-depth 0
slave-read-only yes
debug-context ""
lazyfree-lazy-expire yes
io-threads 1
replica-read-only yes
proto-max-bulk-len 512mb

------ FAST MEMORY TEST ------
3396273:M 06 Dec 2024 17:56:38.181 # Bio worker thread #0 terminated
3396273:M 06 Dec 2024 17:56:38.182 # Bio worker thread #1 terminated
3396273:M 06 Dec 2024 17:56:38.182 # Bio worker thread #2 terminated
*** Preparing to test memory region 6ec000 (2355200 bytes)
*** Preparing to test memory region 2304000 (266240 bytes)
*** Preparing to test memory region 7fadb41fd000 (8388608 bytes)
*** Preparing to test memory region 7fadb49fe000 (8388608 bytes)
*** Preparing to test memory region 7fadb51ff000 (8388608 bytes)
*** Preparing to test memory region 7fadb5a00000 (8388608 bytes)
*** Preparing to test memory region 7fadb6200000 (6291456 bytes)
*** Preparing to test memory region 7fadb684f000 (2621440 bytes)
*** Preparing to test memory region 7fadc3800000 (8388608 bytes)
*** Preparing to test memory region 7fadc4120000 (16384 bytes)
*** Preparing to test memory region 7fadc42f2000 (24576 bytes)
*** Preparing to test memory region 7fadc4315000 (16384 bytes)
*** Preparing to test memory region 7fadc45f8000 (16384 bytes)
*** Preparing to test memory region 7fadc468e000 (4096 bytes)
*** Preparing to test memory region 7fadc4822000 (8192 bytes)
*** Preparing to test memory region 7fadc485b000 (4096 bytes)
.O.O.O.O.O.O.O.O.O.O.O.O.O.O.O.O
Fast memory test PASSED, however your memory can still be broken. Please run a memory test for several hours if possible.

=== VALKEY BUG REPORT END. Make sure to include from START to END. ===

       Please report the crash by opening an issue on github:

           https://github.com/valkey-io/valkey/issues

  If a module was involved, please open in the module's repo instead.

  Suspect RAM error? Use valkey-server --test-memory to verify it.

  Some other issues could be detected by valkey-server --check-system

[exception]: Executing test client: I/O error reading reply.
I/O error reading reply
    while executing
"[Rn $n] {*}$args"
    (procedure "R" line 2)
    invoked from within
"R 0 CLUSTER FLUSHSLOT $key_slot SYNC"
    ("uplevel" body line 13)
    invoked from within
"uplevel 1 $code"
    (procedure "test" line 58)
    invoked from within
"test "SYNC Flush slot command" {
        set key_slot [R 0 CLUSTER KEYSLOT FC]
        set slot_keys_num [R 0 CLUSTER COUNTKEYSINSLOT $key_slot]

    ..."
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 $code"
    (procedure "cluster_setup" line 35)
    invoked from within
"cluster_setup 2 2 4 continuous_slot_allocation default_replica_allocation {
    test "SYNC Flush slot command" {
        set key_slot [R 0 CLUSTER KEY..."
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 $code "
    (procedure "start_server" line 2)
    invoked from within
"start_server {overrides {cluster-enabled yes cluster-ping-interval 100 cluster-node-timeout 3000} tags {external:skip cluster}} {cluster_setup 2 2 4 c..."
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 $code "
    (procedure "start_server" line 2)
    invoked from within
"start_server {overrides {cluster-enabled yes cluster-ping-interval 100 cluster-node-timeout 3000} tags {external:skip cluster}} {start_server {overrid..."
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 $code "
    (procedure "start_server" line 2)
    invoked from within
"start_server {overrides {cluster-enabled yes cluster-ping-interval 100 cluster-node-timeout 3000} tags {external:skip cluster}} {start_server {overrid..."
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 $code "
    (procedure "start_server" line 2)
    invoked from within
"start_server {overrides {cluster-enabled yes cluster-ping-interval 100 cluster-node-timeout 3000} tags {external:skip cluster}} {start_server {overrid..."
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 $code"
    (procedure "start_multiple_servers" line 5)
    invoked from within
"start_multiple_servers $node_count $options $code"
    (procedure "start_cluster" line 16)
    invoked from within
"start_cluster 2 2 {tags {external:skip cluster}} {
    test "SYNC Flush slot command" {
        set key_slot [R 0 CLUSTER KEYSLOT FC]
        set slot..."
    (file "tests/unit/cluster/cluster-flush-slot.tcl" line 1)
    invoked from within
"source $path"
    (procedure "execute_test_file" line 4)
    invoked from within
"execute_test_file $data"
    (procedure "test_client_main" line 10)
    invoked from within
"test_client_main $::test_server_port "
zuiderkwast (Contributor) commented:

The crash is not in bio lazy free object. That's just the stack trace of another thread.

The crash is the assertion cluster_legacy.c:6189 'server.execution_nesting == 0' is not true, in delKeysInSlot.
