Redis Update to 0.24.0 #8

Merged: 3 commits into AscendingCreations:main on Dec 21, 2023
Conversation

@cking (Contributor) commented Dec 10, 2023

No description provided.

@genusistimelord (Member) left a review comment

Looks good!

@genusistimelord (Member)

Looks like there are some issues with the upgrade within the tests.

@cking (Contributor, Author) commented Dec 11, 2023

I'll check it.

@cking (Contributor, Author) commented Dec 11, 2023

Couldn't run the tests locally; it complained about Docker. I fixed the compile error though, hoping that was everything.

@cking (Contributor, Author) commented Dec 11, 2023

Never mind, found the build script; going to test in a bit.

@cking (Contributor, Author) commented Dec 11, 2023

I had the test running for over 2 hours; I don't know why it never finishes.

@cking (Contributor, Author) commented Dec 11, 2023

I can't get the cluster test running, but the pub/sub test (the one that fails) is "working" (failing with the same error message).

I tried to diagnose it, but I couldn't get it running.
I changed the code like so:

    let monitor = rx.on_message::<String>().next().await.unwrap();
+    println!("{:?}", monitor);

The failing test gives me this output:
"1702316008.952678 [0 172.17.0.1:44944] \"CLIENT\" \"SETINFO\" \"LIB-NAME\" \"redis-rs\""

No matter what I do, I can't get the PING response; I'm not clever enough to continue, sadly.
I hope you can finish the missing bits. Sorry to waste your time...

@genusistimelord (Member) commented Dec 11, 2023

So strange. I ran a minimal viable product on Windows to see what would occur, and I got

"1702321120.121350 [0 127.0.0.1:60182] "PING" "test""

using:

use futures::StreamExt;
use redis_pool::{RedisPool, SingleRedisPool};

#[tokio::main]
async fn main() {
    // please consider using dotenvy to get this
    // please check the docker-compose file included for the redis image used here
    let redis_url = "redis://127.0.0.1:6379";

    let client =
        redis::Client::open(redis_url).expect("Error while trying to open the redis connection");

    let pool = RedisPool::from(client);

    let mut rx = pool
        .factory()
        .get_async_connection()
        .await
        .unwrap()
        .into_monitor();
    let _ = rx.monitor().await;

    let mut tx = pool.aquire().await.unwrap();
    let _: () = redis::cmd("PING")
        .arg("test")
        .query_async(&mut tx)
        .await
        .unwrap();

    while let Some(string) = rx.on_message::<String>().next().await {
        println!("{:?}", string)
    }
}

Could you try running this code and see if it works? If so, the next question is: does the container break this?

@cking (Contributor, Author) commented Dec 11, 2023

 Finished dev [unoptimized + debuginfo] target(s) in 1.06s
 Running `target/debug/redis_pool`
 "1702326653.247902 [0 127.0.0.1:60306] \"CLIENT\" \"SETINFO\" \"LIB-NAME\" \"redis-rs\""
 ^C

This is my system info; I wonder if it will help.

~/Pr/RedisPool (main|✚ 1) <12s> [130]
criss@astolfo ❯ uname -a
Linux astolfo 6.6.5-zen1-1-zen #1 ZEN SMP PREEMPT_DYNAMIC Fri, 08 Dec 2023 17:06:12 +0000 x86_64 GNU/Linux
~/Pr/RedisPool (main|✚ 1)
criss@astolfo ❯ lsb_release -a
LSB Version:    n/a
Distributor ID: Arch
Description:    Arch Linux
Release:        rolling
Codename:       n/a
~/Pr/RedisPool (main|✚ 1)
criss@astolfo ❯ paru -Si redis
Repository      : extra
Name            : redis
Version         : 7.2.3-1
Description     : An in-memory database that persists on disk
Architecture    : x86_64
URL             : https://redis.io/
Licenses        : BSD
Groups          : None
Provides        : None
Depends On      : jemalloc  grep  shadow  systemd-libs
Optional Deps   : None
Conflicts With  : None
Replaces        : None
Download Size   : 1239.60 KiB
Installed Size  : 4124.27 KiB
Packager        : Frederik Schwan <[email protected]>
Build Date      : Wed 01 Nov 2023 02:01:05 PM CET
Validated By    : MD5 Sum  SHA-256 Sum  Signature

(This was run without any Docker container at all, directly against the locally installed Redis instance, unlike the tests.)

@genusistimelord (Member)

I could not find any reason this is not working, other than maybe something wrong with the Redis install for Linux. This should not be the lib's fault; it's returning something that makes no sense at all.

@KrisCarr commented Dec 18, 2023

@genusistimelord from what I can tell, on Linux it returns CLIENT the first time, then works as expected.
Here's a slight modification of your previous MVP, tested against Redis 7.2.3 in a container:

use futures::StreamExt;
use redis_pool::{RedisPool, SingleRedisPool};

#[tokio::main]
async fn main() {
    // please consider using dotenvy to get this
    // please check the docker-compose file included for the redis image used here
    let redis_url = "redis://127.0.0.1:6379";

    let client =
        redis::Client::open(redis_url).expect("Error while trying to open the redis connection");

    let pool = RedisPool::from(client);

    let mut rx = pool
        .factory()
        .get_async_connection()
        .await
        .unwrap()
        .into_monitor();
    let _ = rx.monitor().await;

    let mut tx = pool.aquire().await.unwrap();
    let _: () = redis::cmd("PING")
        .arg("test")
        .query_async(&mut tx)
        .await
        .unwrap();

    let mut i = 0;
    while let Some(string) = rx.on_message::<String>().next().await {
        println!("{:?}", string);
        let _: () = redis::cmd("PING")
            .arg("test")
            .query_async(&mut tx)
            .await
            .unwrap();
        i += 1;

        if i > 3 {
            break;
        }
    }

    println!("end");
}

Outputs:

"1702933429.132341 [0 10.88.0.2:48260] \"CLIENT\" \"SETINFO\" \"LIB-NAME\" \"redis-rs\""
"1702933429.132584 [0 10.88.0.2:48260] \"PING\" \"test\""
"1702933429.132674 [0 10.88.0.2:48260] \"PING\" \"test\""
"1702933429.132762 [0 10.88.0.2:48260] \"PING\" \"test\""
end

Cargo.toml dependencies:

[dependencies]
futures = "0.3.29"
redis = "0.24.0"

"redis_pool" = { git = "https://github.com/cking/RedisPool", branch = "main" }
tokio = { version = "1.35.0", features = ["full"] }
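
If the handshake line is the only difference, here is a hedged sketch of how the failing pub/sub test could tolerate it. It reuses the pool and monitor setup from the MVP above; the only new idea is skipping monitor messages until one arrives that is not the CLIENT SETINFO handshake, and the assertion at the end is an assumption about what the test wants to check, not code from this repository.

use futures::StreamExt;
use redis_pool::RedisPool;

#[tokio::main]
async fn main() {
    let redis_url = "redis://127.0.0.1:6379";
    let client =
        redis::Client::open(redis_url).expect("Error while trying to open the redis connection");
    let pool = RedisPool::from(client);

    // Same monitor setup as in the MVP above.
    let mut rx = pool
        .factory()
        .get_async_connection()
        .await
        .unwrap()
        .into_monitor();
    let _ = rx.monitor().await;

    let mut tx = pool.aquire().await.unwrap();
    let _: () = redis::cmd("PING")
        .arg("test")
        .query_async(&mut tx)
        .await
        .unwrap();

    // Skip the CLIENT SETINFO line(s) that redis-rs sends on connect to
    // Redis 7, then look at the first real command in the monitor stream.
    let mut stream = rx.on_message::<String>();
    let first_real = loop {
        let msg = stream.next().await.unwrap();
        if !msg.contains("\"CLIENT\"") {
            break msg;
        }
    };
    assert!(first_real.contains("\"PING\""), "expected PING, got {first_real:?}");
}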

@KrisCarr

Looks like it is a Redis 7 thing. Running it against a redis:6 container instead works fine.

@KrisCarr

Not sure if it's this, but for Redis 7.2 they say to pin the protocol version to RESP2 instead of RESP3 in clients before updating, to prevent things from breaking. Their example is in Go:

client := redis.NewClient(&redis.Options{
    Addr:     "<database_endpoint>",
    Protocol: 2, // Pin the protocol version
})

https://docs.redis.com/latest/rs/release-notes/rs-7-2-4-releases/rs-7-2-4-52/#redis-72-breaking-changes
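
For redis-rs itself, a hedged note rather than anything from this PR: 0.24 only speaks RESP2, so there is nothing to pin there. The sketch below is an assumption about later redis-rs releases that added RESP3 support, where the protocol is chosen through the connection URL and RESP2 stays the default when no parameter is given.

fn main() -> redis::RedisResult<()> {
    // Client::open only parses the URL, so this runs without a server.
    // Default: RESP2, matching the pinning advice above.
    let _resp2 = redis::Client::open("redis://127.0.0.1:6379")?;
    // Explicit opt-in to RESP3 (assumption: the `protocol=resp3` URL
    // parameter from redis-rs releases after 0.24).
    let _resp3 = redis::Client::open("redis://127.0.0.1:6379/?protocol=resp3")?;
    Ok(())
}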

@genusistimelord (Member) commented Dec 18, 2023

That could be possible, but the issue also showed up on a brand-new Redis setup; it was breaking within the container. But yeah, RESP3 does seem to be the main issue causing the problem.

@genusistimelord merged commit bb232e7 into AscendingCreations:main on Dec 21, 2023
1 check failed