Thoughts and ideas on implementing variable timesteps (with some flow models running hourly, while others related to plant growth run daily), following a discussion on the subject:
Users should handle any necessary interpolation of their values.
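As an illustration of the kind of interpolation users would be responsible for, here is a minimal sketch (plain Python, not the PSE API) that linearly interpolates a daily value so an hourly model can consume smoother inputs:

```python
def interpolate_daily_to_hourly(day_start: float, day_end: float, hour: int) -> float:
    """Linear interpolation between two consecutive daily values for a given hour (0-23)."""
    fraction = hour / 24.0
    return day_start + (day_end - day_start) * fraction

# An hourly model reading a daily temperature would see values ramping
# from the current day's value toward the next day's value.
hourly = [interpolate_daily_to_hourly(10.0, 14.0, h) for h in range(24)]
```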
Floating-point approximation issues would need to be taken into account for any non-integer ratio between timesteps.
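A small sketch of the floating-point pitfall in question: accumulating a non-integer step in binary floats drifts away from exact multiples, whereas counting integer ticks against an exact rational step does not (using Python's standard `fractions` module for illustration):

```python
from fractions import Fraction

# Accumulating a 0.1-hour step in floats drifts due to binary rounding:
t = 0.0
for _ in range(10):
    t += 0.1
drifted = (t != 1.0)  # True: t is 0.9999999999999999, not 1.0

# Counting integer ticks against an exact rational step avoids the drift:
step = Fraction(1, 10)  # 0.1 h represented exactly
exact = (10 * step == 1)  # True: arithmetic stays exact
```

This suggests that internally aligning models on integer tick counts (or rational timestep ratios) is safer than comparing accumulated floating-point clocks.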
Variable timesteps affect the structure of the outputs, which will need to be taken into account. They can also make parallelisation impossible in some cases.
Perhaps the most lightweight interface for the user would be to declare a default timestep and only specify the models that differ from that default. Models could indicate lower and upper thresholds for their accepted timestep range. This avoids implicit assumptions by PSE while keeping the user informed of what is possible/recommended, without too much extra typing. It might be more difficult for the modeler to determine the thresholds, though.
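A hypothetical sketch of what that interface could look like (all names here are assumptions for illustration, not the actual PSE API): a simulation-wide default timestep, per-model overrides, and per-model accepted ranges declared by the modeler:

```python
DEFAULT_TIMESTEP = 24  # hours; applies unless a model overrides it

MODELS = {
    # name: (override timestep or None for default, (min_accepted, max_accepted) in hours)
    "energy_balance": (1, (1, 6)),       # hypothetical flow model, runs hourly
    "leaf_growth":    (None, (12, 48)),  # hypothetical growth model, uses the daily default
}

def resolve_timestep(name: str) -> int:
    """Pick the model's timestep and check it against the modeler-declared range."""
    override, (lo, hi) = MODELS[name]
    step = override if override is not None else DEFAULT_TIMESTEP
    if not lo <= step <= hi:
        raise ValueError(f"{name}: timestep {step} h outside accepted range [{lo}, {hi}]")
    return step
```

The range check is where the framework can warn the user early, instead of silently running a model outside the regime it was calibrated for.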
New situations arise with variable timesteps when a model has a soft dependency on one with a different timestep (e.g. H -> D, or D -> H). In the D -> H case, there should be no issues, as variables from D remain constant until it is run again at its slower pace. In the H -> D case, however, some extra work is required to handle averages/sums/thresholds over a period of time.
The simplest approach that seems like it could deal with this problem would be to insert an intermediate model that, for every variable circulating between the two, stores the values provided by the faster model and computes the 'final' value for the slower one. Since users might need to provide several such models, it could be worth adding a few functions that handle the most common computations. Generating such models seems like it could be automated, but would likely be non-trivial if more than one variable or more than one scale needs to be taken into account.
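The intermediate-model idea can be sketched as follows (illustrative Python, not the PSE API; the names are assumptions). The accumulator buffers values from the fast model and emits one aggregated value per slow-model step, with the aggregation function pluggable so the common cases (sum, mean, threshold counts) can be offered as helpers:

```python
class Accumulator:
    """Intermediate model between a fast and a slow model for one variable."""

    def __init__(self, aggregate, period: int):
        self.aggregate = aggregate  # e.g. sum, mean, or a threshold counter
        self.period = period        # fast steps per slow step (24 for H -> D)
        self.buffer = []

    def push(self, value):
        """Store one value from the fast model; return the aggregated value
        once a full slow-model period has elapsed, else None."""
        self.buffer.append(value)
        if len(self.buffer) == self.period:
            result = self.aggregate(self.buffer)
            self.buffer.clear()
            return result
        return None

# Helpers for the most common aggregations:
def mean(values):
    return sum(values) / len(values)

def hours_above(threshold):
    """Count fast-model steps whose value exceeds a threshold."""
    return lambda values: sum(1 for v in values if v > threshold)
```

A daily model would then read the accumulator's output instead of the hourly variable directly, e.g. `Accumulator(mean, 24)` for a daily mean temperature, or `Accumulator(hours_above(30.0), 24)` for heat-stress hours.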
What remains to be seen is whether this intermediate-model approach fully handles all multiscale configurations. It seems like it should... One problem that does come to mind is that interactions with hard dependencies might be quite tricky.
Samuel-amap changed the title from "Variable timestep considerations :" to "Variable timestep considerations" on Jan 8, 2025.
This relates to #88.