
Question on step reward #104

Open
condorman opened this issue Jul 18, 2024 · 0 comments

condorman commented Jul 18, 2024

Hi, I have a question about the reward function.
If, in a given state, the agent takes the action that closes a LONG position, why is the reward calculated from the close of the *next* candle rather than from the candle in which the action was taken?

In trading_env.py:

def step(self, action):
    self._truncated = False
    self._current_tick += 1

    if self._current_tick == self._end_tick:
        self._truncated = True

    step_reward = self._calculate_reward(action)
    self._total_reward += step_reward
Why is self._current_tick incremented before the reward is calculated?
Shouldn't the reward be based on the closing value of the candle in which the action was decided?
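To make the timing difference concrete, here is a minimal sketch of the two conventions. The prices, function names, and long-only bookkeeping are all hypothetical illustrations, not the library's actual _calculate_reward:

```python
# Hypothetical close prices for four candles.
prices = [100.0, 102.0, 105.0, 103.0]

def reward_after_advance(decision_tick, open_tick):
    """What the code above does: the tick is advanced first,
    so closing a long is rewarded with the NEXT candle's close."""
    next_tick = decision_tick + 1
    return prices[next_tick] - prices[open_tick]

def reward_at_decision(decision_tick, open_tick):
    """The alternative the question suggests: reward uses the close
    of the candle in which the close action was decided."""
    return prices[decision_tick] - prices[open_tick]

# Long opened at tick 0 (close 100.0); close action decided at tick 1.
print(reward_after_advance(1, 0))  # uses prices[2] = 105.0 -> 5.0
print(reward_at_decision(1, 0))    # uses prices[1] = 102.0 -> 2.0
```

The first convention effectively fills the order at the next candle, while the second assumes the position can be closed at the very price that triggered the decision.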

Thank you!
