@@ -1807,22 +1807,17 @@ OptimizationFunction{iip}(f, adtype::AbstractADType = NoAD();
## Positional Arguments
- - `f(u,p,args...)`: the function to optimize. `u` are the optimization variables and `p` are parameters used in definition of
-   the objective, even if no such parameters are used in the objective it should be an argument in the function. This can also take
-   any additional arguments that are relevant to the objective function, for example minibatches used in machine learning,
-   take a look at the minibatching tutorial [here](https://docs.sciml.ai/Optimization/stable/tutorials/minibatch/). This should return
-   a scalar, the loss value, as the first return output and if any additional outputs are returned, they will be passed to the `callback`
-   function described in [Callback Functions](https://docs.sciml.ai/Optimization/stable/API/solve/#Common-Solver-Options-(Solve-Keyword-Arguments)).
+ - `f(u,p)`: the function to optimize. `u` are the optimization variables and `p` are fixed parameters or data used in the objective.
+   Even if no such parameters are used in the objective, `p` should still be an argument of the function. For minibatching, `p` can be
+   used to pass in a minibatch; see the tutorial [here](https://docs.sciml.ai/Optimization/stable/tutorials/minibatch/) for how to do it.
+   This should return a scalar, the loss value, as its return output. A sketch of this signature in use follows the list below.
- `adtype`: see the Defining Optimization Functions via AD section below.
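
For concreteness, here is a minimal sketch of the two-argument signature in use. The Rosenbrock objective, the `AutoForwardDiff()` backend, and the `BFGS()` solver from `OptimizationOptimJL` are illustrative assumptions, not part of this docstring:

```julia
using Optimization, OptimizationOptimJL, ForwardDiff

# Rosenbrock objective in the two-argument form: `u` are the optimization
# variables, `p` the fixed parameters/data. Returns a scalar loss.
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

u0 = zeros(2)        # initial guess for the optimization variables
p = [1.0, 100.0]     # fixed parameters passed through as `p`

optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, u0, p)
sol = solve(prob, BFGS())  # BFGS is provided by OptimizationOptimJL
```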
## Keyword Arguments
- - `grad(G,u,p)` or `G=grad(u,p)`: the gradient of `f` with respect to `u`. If `f` takes additional arguments
-   then `grad(G,u,p,args...)` or `G=grad(u,p,args...)` should be used.
- - `hess(H,u,p)` or `H=hess(u,p)`: the Hessian of `f` with respect to `u`. If `f` takes additional arguments
-   then `hess(H,u,p,args...)` or `H=hess(u,p,args...)` should be used.
- - `hv(Hv,u,v,p)` or `Hv=hv(u,v,p)`: the Hessian-vector product ``(d^2 f / du^2) v``. If `f` takes additional arguments
-   then `hv(Hv,u,v,p,args...)` or `Hv=hv(u,v,p,args...)` should be used.
+ - `grad(G,u,p)` or `G=grad(u,p)`: the gradient of `f` with respect to `u`.
+ - `hess(H,u,p)` or `H=hess(u,p)`: the Hessian of `f` with respect to `u`.
+ - `hv(Hv,u,v,p)` or `Hv=hv(u,v,p)`: the Hessian-vector product ``(d^2 f / du^2) v``. A sketch of the in-place forms appears below.
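
A sketch of the in-place derivative keywords, reusing the illustrative Rosenbrock objective from the sketch above; the hand-derived gradient and Hessian and the `Newton()` solver choice are assumptions for demonstration:

```julia
# Hand-written in-place derivatives of the illustrative Rosenbrock objective.
function rosenbrock_grad!(G, u, p)
    G[1] = -2 * (p[1] - u[1]) - 4 * p[2] * u[1] * (u[2] - u[1]^2)
    G[2] = 2 * p[2] * (u[2] - u[1]^2)
    return nothing
end

function rosenbrock_hess!(H, u, p)
    H[1, 1] = 2 - 4 * p[2] * (u[2] - 3 * u[1]^2)
    H[1, 2] = -4 * p[2] * u[1]
    H[2, 1] = H[1, 2]
    H[2, 2] = 2 * p[2]
    return nothing
end

# With derivatives supplied, no AD backend is needed (`adtype` defaults to `NoAD()`).
optf = OptimizationFunction(rosenbrock; grad = rosenbrock_grad!, hess = rosenbrock_hess!)
sol = solve(OptimizationProblem(optf, u0, p), Newton())  # Newton() uses the Hessian
```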
- `cons(res,u,p)` or `res=cons(u,p)`: the constraints function, which should mutate the passed `res` array
  with the value of the `i`th constraint, evaluated at the current values of the variables
  inside the optimization routine. This takes just the function evaluations
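
And a sketch of the in-place constraint form, again reusing the illustrative objective from above; the two constraint expressions, the bound values, and the `IPNewton()` solver are assumed for demonstration:

```julia
# Two illustrative constraints written into `res` in place.
function example_cons(res, u, p)
    res[1] = u[1]^2 + u[2]^2  # constraint 1: squared norm of the variables
    res[2] = u[1] * u[2]      # constraint 2: product of the variables
    return nothing
end

optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = example_cons)
# Constraint bounds are attached to the problem: here res[1] <= 1 and res[2] <= 0.5.
prob = OptimizationProblem(optf, u0, p; lcons = [-Inf, -Inf], ucons = [1.0, 0.5])
sol = solve(prob, IPNewton())  # IPNewton (from Optim.jl) handles general constraints
```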