We got the results below for the reward after training the RL model and the RL+LLM model (we used Llama instead of GPT-4, as you suggested).
When training with the LLM, it executed until step 41 and finally got the reward, but it always accepted every decision of the RL model as reasonable.
The reward went from -187165.88 (RL) to -2.29 (RL+LLM).
Is the reward we are getting correct?
Also, how do we get the mean travel time, mean waiting time, and mean speed values?
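One common way to collect those metrics, assuming the environment is a SUMO simulation driven through traci (the thread does not state this explicitly), is to sample them step by step during the rollout. The sketch below is illustrative only and is not taken from this repository:

```python
# Illustrative sketch (not repo code): sample mean travel time, waiting time,
# and speed from a running SUMO simulation via traci. Assumes traci is already
# connected with traci.start([...]) to a suitable .sumocfg.
import traci

depart_time = {}                     # vehicle id -> departure time
travel_times = []                    # durations of completed trips
waiting_samples, speed_samples = [], []

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    now = traci.simulation.getTime()

    # Track per-vehicle travel time from departure to arrival.
    for veh in traci.simulation.getDepartedIDList():
        depart_time[veh] = now
    for veh in traci.simulation.getArrivedIDList():
        if veh in depart_time:
            travel_times.append(now - depart_time.pop(veh))

    # Sample waiting time and speed of all vehicles currently in the network.
    for veh in traci.vehicle.getIDList():
        waiting_samples.append(traci.vehicle.getWaitingTime(veh))
        speed_samples.append(traci.vehicle.getSpeed(veh))

mean_travel_time = sum(travel_times) / max(len(travel_times), 1)
mean_waiting_time = sum(waiting_samples) / max(len(waiting_samples), 1)
mean_speed = sum(speed_samples) / max(len(speed_samples), 1)
print(mean_travel_time, mean_waiting_time, mean_speed)
```

Alternatively, launching SUMO with `--tripinfo-output` writes per-vehicle `duration` and `waitingTime` to an XML file that can be averaged after the run.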
The final output of the program is the cumulative reward, so I believe the value of -2.29 is problematic. Regarding the training, the code I provided does not incorporate the large model into the training process. The idea behind this code is to first train the RL model, and then, during usage, attach the LLM. In other words, you need to train the RL model weights first and then combine the trained RL model with the LLM.
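A minimal sketch of that two-phase workflow, assuming a stable-baselines3 PPO agent wrapped in VecNormalize (suggested by the vec_normalize `.pkl` files mentioned later in the thread); the environment id and the `llm_confirms_action()` helper are hypothetical placeholders, not the repository's actual code:

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

def llm_confirms_action(obs, action):
    """Hypothetical placeholder for the LLM check; replace with a Llama/GPT-4 call."""
    return True

# Phase 1: train the RL weights alone (no LLM involved).
train_env = VecNormalize(DummyVecEnv([lambda: gym.make("YourTrafficEnv-v0")]))
model = PPO("MlpPolicy", train_env, verbose=1)
model.learn(total_timesteps=300_000)
model.save("ppo_traffic")
train_env.save("vec_normalize.pkl")        # keep the normalization statistics

# Phase 2: load the trained weights and attach the LLM only at inference time.
eval_env = DummyVecEnv([lambda: gym.make("YourTrafficEnv-v0")])
eval_env = VecNormalize.load("vec_normalize.pkl", eval_env)
eval_env.training = False                  # freeze the running statistics
eval_env.norm_reward = False               # report raw (unnormalized) rewards
model = PPO.load("ppo_traffic", env=eval_env)

obs = eval_env.reset()
cumulative_reward, done = 0.0, False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    if not llm_confirms_action(obs, action):
        pass                               # here the LLM would override the RL action
    obs, rewards, dones, infos = eval_env.step(action)
    cumulative_reward += rewards[0]
    done = dones[0]
print("cumulative reward:", cumulative_reward)
```

Setting `norm_reward = False` at evaluation time matters here: a reward reported through an active VecNormalize wrapper is scaled, so it is not directly comparable to the raw cumulative reward printed during training.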
Thank you very much for replying.
Since we got 187555, we changed the number of steps from 3e5 to 10e6 and updated last_vec_normalize.pkl with the vec_normalize_10e6model file.
With 10e6 steps we got a reward of -1014 (with just RL), and using the RL model we trained combined with Llama we got a reward of -21.5.
Are these values appropriate now (better than before)?
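Continuing the sketch from the previous reply, the only changes for the longer run would be the timestep budget and the file names, and evaluation has to load exactly the statistics file that was saved alongside those weights (the file names below are hypothetical):

```python
# Continuation of the earlier sketch: longer training run with matching files.
model.learn(total_timesteps=int(10e6))
model.save("ppo_traffic_10e6")
train_env.save("vec_normalize_10e6.pkl")   # pair this file with ppo_traffic_10e6

# At evaluation time, load the matching statistics before PPO.load():
# eval_env = VecNormalize.load("vec_normalize_10e6.pkl", eval_env)
```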