I am confused about the 'value function' in the InstructGPT paper. The paper says: "As previously mentioned, for all PPO models we use a 6B RM and a 6B value function, and the latter is initialized from the former." The reward model (RM) and the value function therefore appear to be two separate models. However, I can find no indication of how the value function is involved in PPO RL training, either in the objective function or anywhere else in the paper.
Thanks
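For context on where a value function usually appears: in standard PPO (though the paper does not spell this out for InstructGPT specifically), the value network V(s) does not appear in the clipped policy objective directly; it is used to compute advantage estimates, typically via Generalized Advantage Estimation (GAE), and is trained with its own value loss. A minimal sketch, assuming the standard GAE recurrence (the function name `gae_advantages` is illustrative, not from the paper):

```python
def gae_advantages(rewards, values, gamma=1.0, lam=0.95):
    """Generalized Advantage Estimation.

    delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    A_t     = delta_t + gamma * lam * A_{t+1}

    `values` holds V(s_t) for each step; the value after the final
    step is taken to be 0 (episode ends there).
    """
    advantages = [0.0] * len(rewards)
    last = 0.0
    # Work backwards so each A_t can reuse A_{t+1}.
    for t in reversed(range(len(rewards))):
        next_value = values[t + 1] if t + 1 < len(values) else 0.0
        delta = rewards[t] + gamma * next_value - values[t]
        last = delta + gamma * lam * last
        advantages[t] = last
    return advantages
```

These advantages are what plug into the PPO ratio term, which is why the value network can be essential to training without appearing explicitly in the written objective.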