
[BUG] KeyDB never releases space from the Flash Storage #870

Open
epaolillo opened this issue Oct 2, 2024 · 7 comments

@epaolillo

Describe the bug

KeyDB never releases space from Flash storage. Despite manually deleting keys, the amount of storage used does not decrease significantly.

To reproduce

  1. Run KeyDB with Flash storage enabled.
  2. Insert a large number of keys into the database.
  3. Manually delete a portion of the keys.
  4. Check the storage usage; the space occupied on disk does not decrease as expected (a minimal sketch of these steps is shown below).
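
A minimal sketch of these steps with keydb-cli (the key names, counts, and flash directory path are illustrative; adjust them to your setup):

# Insert a large number of keys.
for i in $(seq 1 1000000); do keydb-cli set "key:$i" "some-value" > /dev/null; done
du -sh /var/lib/keydb/flash

# Delete a portion of the keys.
for i in $(seq 1 500000); do keydb-cli del "key:$i" > /dev/null; done

# The flash directory stays roughly the same size, even long after the deletes.
du -sh /var/lib/keydb/flash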

Expected behavior

After deleting keys, the amount of storage space used by KeyDB should decrease significantly, reflecting the removal of the data.

Additional information

The storage engine is configured to use Flash.
I’ve ensured that the keys were deleted correctly and are no longer accessible.
However, the storage space used does not change, even after multiple manual deletions.
Version of KeyDB being used: 6.3.4
OS: Ubuntu 20.04

@keithchew

Hi @epaolillo

Under the hood, KeyDB uses RocksDB, and I believe space will be freed when RocksDB does a compaction. As for when compaction is triggered, you might want to refer to their documentation. On a side note, I have been stress-testing KeyDB with extensive reads and writes (running 24/7 for months), and I can confirm with "du -sh" that the space used is close to the size of dump.rdb (the output of bgsave).

4.4G    /var/data-kdb-dev/dump.rdb
5.9G    /var/data-kdb-dev/flash

However, this does not prove that there isn't a problem, as the space used by flash is still larger than the rdb.
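
To see whether compactions are happening at all, one thing you could check (a sketch, using my directory from above; RocksDB records each compaction in its info log):

grep -i compaction /var/data-kdb-dev/flash/LOG | tail -n 20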

@epaolillo
Author

Thank you for the response. I have had KeyDB running for a month, and the keys stored in the SST files are never compacted.
I guessed there might be some command like COMPACTDB to force the compaction manually, but I found nothing.
Additionally, I can't find anywhere in the code where the compaction is called. Over this month the storage obviously never went down, and I can still reach every key that shouldn't be found any more, whether removed manually or by TTL.
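
The closest workaround I can think of would be forcing a full compaction offline with RocksDB's ldb tool (a sketch only, assuming rocksdb-tools is installed and using an illustrative path; KeyDB must be stopped first, and ldb may need extra options to open the flash DB):

sudo systemctl stop keydb-server
ldb --db=/var/lib/keydb/flash compact
sudo systemctl start keydb-server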

If there is any additional data I can provide, just ask me please; it would be great if I can contribute in some way.

@keithchew

keithchew commented Oct 3, 2024

Oh do you mean you did this:

set test 1
get test
del test
get test

And the 2nd get above returns a result?

If not, can you describe how you can still reach the KEY and how you removed it manually?

@epaolillo
Author

epaolillo commented Oct 3, 2024

No, it's not there. BUT if I scan the SST files manually, I can find the key there. It's very strange. I will attach the test here.

(screenshot: Selección_083)

This is the output of: cat 072636.log | grep -a TEST

(screenshot: Selección_084)

  1. The first time I set "TEST"
  2. Once deleted, it is still there and is never removed/compacted (I have 1 month of keys taking 192 GB of storage lol)
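
For completeness, the SST files can also be inspected directly with RocksDB's sst_dump tool (a sketch; the path and file name are illustrative, and the key may be stored in an encoded form):

ls /var/lib/keydb/flash/*.sst
sst_dump --file=/var/lib/keydb/flash/000123.sst --command=scan | grep TEST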

Thanks!

@keithchew

keithchew commented Oct 3, 2024

Which version of KeyDB are you using? I am running KeyDB creating and deleting 500-1000 keys per second, 24/7, and have not seen this behaviour.

The only bug I found related to rocksdb is here:
#754

But I doubt that would be the cause of your issue. In my flash folder, I only have non-empty log files for the current day; all older log files match the pattern below and have zero size:

LOG.old.<timestamp>

It could be something strange going on in your environment.
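
A quick way to compare on your side (a sketch, using my directory from above as the path):

ls -lh /var/data-kdb-dev/flash/ | grep -i log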

@epaolillo
Author

keydb-server --version
KeyDB server v=6.3.4 sha=00000000:0 malloc=jemalloc-5.2.1 bits=64 build=b532cd0401cb0da4

I'm not sure whether I have that cpp file, given that I installed KeyDB using apt. How can I check it?
Let me give you the INFO output:

# Server
redis_version:6.3.4
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:b532cd0401cb0da4
redis_mode:standalone
os:Linux 5.15.0-122-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:11.2.0
process_id:2107
process_supervised:no
run_id:de692956681b9b4a65a82bb235e309b66883da33
tcp_port:6379
server_time_usec:1727965219628114
uptime_in_seconds:47462
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:16689187
executable:/usr/bin/keydb-server
config_file:/etc/keydb/keydb.conf
availability_zone:
features:cluster_mget

# Clients
connected_clients:2
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:32
client_recent_max_output_buffer:0
blocked_clients:0
tracking_clients:0
clients_in_timeout_table:0
current_client_thread:0
thread_0_clients:2
thread_1_clients:0

# Memory
used_memory:175833384
used_memory_human:167.69M
used_memory_rss:787574784
used_memory_rss_human:751.09M
used_memory_peak:263655376
used_memory_peak_human:251.44M
used_memory_peak_perc:66.69%
used_memory_overhead:260442792
used_memory_startup:125684344
used_memory_dataset:18446744073624942208
used_memory_dataset_perc:36783844753408.00%
allocator_allocated:178827584
allocator_active:183967744
allocator_resident:196800512
total_system_memory:16765034496
total_system_memory_human:15.61G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:12000000000
maxmemory_human:11.18G
maxmemory_policy:allkeys-lfu
allocator_frag_ratio:1.03
allocator_frag_bytes:5140160
allocator_rss_ratio:1.07
allocator_rss_bytes:12832768
rss_overhead_ratio:4.00
rss_overhead_bytes:590774272
mem_fragmentation_ratio:4.48
mem_fragmentation_bytes:611784160
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:41008
mem_aof_buffer:0
mem_allocator:jemalloc-5.2.1
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:0
storage_provider:flash
available_system_memory:unavailable

# Persistence
loading:0
current_cow_size:29929472
current_cow_size_age:47282
current_fork_perc:92.42
current_save_keys_processed:2574337
current_save_keys_total:2785369
rdb_changes_since_last_save:18166248
rdb_bgsave_in_progress:1
rdb_last_save_time:1727917757
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:47290
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
module_fork_in_progress:0
module_fork_last_cow_size:0
storage_flash_used_bytes:323087728640
storage_flash_total_bytes:532429406208
storage_flash_rocksdb_bytes:208856020917

# Stats
total_connections_received:14
total_commands_processed:32762401
instantaneous_ops_per_sec:0
total_net_input_bytes:4254178835
total_net_output_bytes:155382130
instantaneous_input_kbps:0.05
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:113
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:104120
evicted_keys:0
keyspace_hits:2894859
keyspace_misses:3561638
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:5429
total_forks:1
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_error_replies:19
dump_payload_sanitizations:0
total_reads_processed:429787
total_writes_processed:429772
instantaneous_lock_contention:1
avg_lock_contention:0.218750
storage_provider_read_hits:353
storage_provider_read_misses:27138162

# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:66572dd294e66c8568b96f807f4797402da1dc29
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:146.177007
used_cpu_user:601.317027
used_cpu_sys_children:0.000000
used_cpu_user_children:0.000000
server_threads:2
long_lock_waits:3
used_cpu_sys_main_thread:76.484849
used_cpu_user_main_thread:446.663180

# Modules
# Errorstats
errorstat_ERR:count=14
errorstat_NOAUTH:count=4
errorstat_WRONGTYPE:count=1

# Cluster
cluster_enabled:0

# Keyspace
db0:keys=2801152,expires=16106,avg_ttl=2591778755,cached_keys=51464

# KeyDB
mvcc_depth:0

@keithchew

Interesting, you have 2.8M keys and the flash storage is 208 GB, which works out to roughly 75 KB per key on average (208856020917 bytes / 2801152 keys). Maybe the logs might give some clues...?
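
One more thing worth watching over time is the storage_flash_* fields from the output above, to see whether deletes ever reclaim space (a sketch):

keydb-cli info persistence | grep storage_flash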
