Optimizing a control signal #2905
@baggepinnen has some things to point to. You wouldn't want to just fit the interpolation, since you have a lot of sparsity in time to exploit.
We don't have much that is open source, unfortunately.

If you stick an interpolation object in there and optimize over the arrays in the interpolation using single shooting, you will not have any sparsity, but you will typically have a difficult optimization problem that may take a long time to optimize and/or converge to a poor local minimum. If the problem is easy enough for this to work, this is the easiest method to implement yourself.

If your system is simple enough, it can be automatically converted to JuMP equations relatively easily, after which you can use a package like InfiniteOpt.jl to perform a direct-collocation transcription of the problem. This typically gives you very good performance, but requires a number of manual steps that haven't been documented well yet.

You can of course implement a multiple-shooting transcription yourself as well, or modify https://docs.sciml.ai/DiffEqFlux/stable/examples/multiple_shooting/ appropriately. For optimal control, using a penalty in the loss as is done in the tutorial is inadequate; you have to formulate the transcription constraints as hard constraints.
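A minimal single-shooting sketch of the idea above, assuming a toy scalar ODE and a quadratic tracking loss (the model, knot grid, target value, and solver choices are all illustrative assumptions, not code from this thread):

```julia
using OrdinaryDiffEq, DataInterpolations, Optimization, OptimizationOptimJL

# Decision variables: control values at fixed knot times.
tknots = range(0.0, 1.0; length = 11)

function simulate(uvals)
    # Rebuild the interpolant so it closes over the current control values.
    ctrl = LinearInterpolation(uvals, tknots)
    f(x, p, t) = -x + ctrl(t)          # toy scalar dynamics driven by the control
    x0 = eltype(uvals)(1.0)            # promote u0 so ForwardDiff duals flow through
    prob = ODEProblem(f, x0, (0.0, 1.0))
    solve(prob, Tsit5(); saveat = 0.1)
end

function loss(uvals, _p)
    sol = simulate(uvals)
    sum(abs2, sol.u .- 0.5)            # drive the state toward 0.5
end

optf = OptimizationFunction(loss, Optimization.AutoForwardDiff())
optprob = OptimizationProblem(optf, zeros(length(tknots)))
res = solve(optprob, LBFGS())
```

As the comment warns, this treats the whole knot vector as one dense decision variable, so it exploits no sparsity in time; it is only the easiest transcription to write down.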
If you're referring to JuliaSim: I think it could eventually be considered on our side. For now, I was given the OK to try to build a simulator POC in Julia, so it's early days.
Our current optimization problem is almost certainly single-optimum/convex, and I believe it's fairly modest in its current state. I'm going to give it a try. #2646 (comment) looks promising for the ForwardDiff part. Thank you for the detailed information.
(from https://discourse.julialang.org/t/sciml-optimizing-a-control-signal/117504 and https://discourse.julialang.org/t/solving-ode-parameters-using-experimental-data-with-control-inputs/66614/20; apologies if this isn't using quite the right MTK jargon, I'm new at this)
Suppose I've got a differential equation that takes some forcing / control signal as input, defined as a `DataInterpolations.LinearInterpolation(u, t)` (similar to the tutorial example). I'd like to find the `u` vector which minimizes some loss function. How should I do that with ModelingToolkit?

The only approach I can see is to define `u1`, `u2`, `u3` as parameters, then call `LinearInterpolation([u1, u2, u3], t)`. That doesn't look very practical, but maybe I can create many parameters in a comprehension?

On top of that, we would like to be able to specify a control signal that depends on the value of the dependent variables, i.e. as the temperature rises above 80 °C, reduce the heat source. I believe this kind of thing can be done with optimal control / model-predictive control, but for our problem we are interested in a reinforcement learning solution.
In theory we could run it as:

- `t=0` to `t=1` with parameter `HEAT=H0`, yielding final temperature `T1`
- `t=1` to `t=2` with parameter `HEAT=H1`, yielding final temperature `T2`

But I was wondering if there was a nicer interface for this kind of setup. I'd really just like to be able to pass an arbitrary function / functor `f` in the `ODEProblem` constructor.

Related: SciML/ModelingToolkitStandardLibrary.jl#123
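The interval-by-interval scheme can be sketched with `remake`, carrying each interval's final temperature forward as the next initial condition (a toy cooling model; the dynamics and `HEAT` values are made up for illustration):

```julia
using OrdinaryDiffEq

# Toy model: temperature relaxes toward a 20 °C ambient, plus heat input p.
f(T, p, t) = -0.5 * (T - 20.0) + p

prob = ODEProblem(f, 20.0, (0.0, 1.0), 0.0)

function run_intervals(heats)
    T = 20.0                           # initial temperature
    for (i, H) in enumerate(heats)
        sol = solve(remake(prob; u0 = T, tspan = (i - 1.0, Float64(i)), p = H),
                    Tsit5())
        T = sol.u[end]                 # endpoint becomes the next interval's start
    end
    return T
end

T2 = run_intervals([8.0, 4.0])         # HEAT = H0 on [0,1], HEAT = H1 on [1,2]
```

A callback-based controller, or the blocks in the ModelingToolkitStandardLibrary.jl issue linked above, would be candidates for the "nicer interface" the question asks about.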