Docs for MOO #823

Open · wants to merge 25 commits into master
Conversation

ParasPuneetSingh (Collaborator)

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes was updated
  • The new code follows the
    contributor guidelines, in particular the SciML Style Guide and
    COLPRAC.
  • Any new documentation only uses public API

Additional context

Add any other context about the problem here.

Added documentation for MOO in BBO
MOO docs update.
MOO docs update.
@Vaibhavdixit02 (Member)

The documentation failure at https://github.com/SciML/Optimization.jl/actions/runs/10977080544/job/30478629355?pr=823 seems real, can you take a look?

updated project.toml for the docs.
Added compat for BBO.
end
mof = MultiObjectiveOptimizationFunction(multi_obj_func_2)
prob = Optimization.OptimizationProblem(mof, u0; lb = [0.0, 0.0], ub = [2.0, 2.0])
sol = solve(prob_2, opt, NumDimensions=2, FitnessScheme=ParetoFitnessScheme{2}(is_minimizing=true))
Member

Suggested change:
- sol = solve(prob_2, opt, NumDimensions=2, FitnessScheme=ParetoFitnessScheme{2}(is_minimizing=true))
+ sol = solve(prob, opt, NumDimensions=2, FitnessScheme=ParetoFitnessScheme{2}(is_minimizing=true))

return [f1, f2]
end
initial_guess = [1.0, 1.0]
function gradient_multi_objective(x, p=nothing)
Member

Since this isn't used please remove it

Collaborator Author

It is used in the MultiObjectiveOptimizationFunction call below and is passed as the jac arg.

Member

But Evolutionary doesn't need derivatives right?

Collaborator Author

Yeah, just checked the tests once again and it is not needed, so I will remove it.

npartitions = 100

# reference points (Das and Dennis's method)
weights = gen_ref_dirs(nobjectives, npartitions)
Member

This function is missing here
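
For reference, a minimal sketch of what such a helper could look like, assuming the standard Das and Dennis simplex-lattice construction; the name `gen_ref_dirs` and its signature are taken from the snippet above, not from a released API:

```julia
# Minimal sketch of a Das-Dennis reference-direction generator (an assumption to make
# the example self-contained; gen_ref_dirs is not an exported API).
function gen_ref_dirs(nobjectives::Int, npartitions::Int)
    dirs = Vector{Vector{Float64}}()
    # Enumerate all nonnegative integer compositions of npartitions into nobjectives
    # parts and scale them onto the unit simplex.
    function recurse(prefix, remaining, slots)
        if slots == 1
            push!(dirs, vcat(prefix, remaining) ./ npartitions)
        else
            for k in 0:remaining
                recurse(vcat(prefix, k), remaining - k, slots - 1)
            end
        end
    end
    recurse(Float64[], npartitions, nobjectives)
    return dirs
end

# e.g. gen_ref_dirs(2, 4) gives [0.0, 1.0], [0.25, 0.75], [0.5, 0.5], [0.75, 0.25], [1.0, 0.0]
```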

@Vaibhavdixit02 (Member)

@ParasPuneetSingh please take a look at the review above

Added required packages for MOO docs.
added required packages for MOO
Corrected function names for MOO docs.
Removed unnecessary ForwardDiff function.
@Vaibhavdixit02 (Member)

Can you rebase on master? There was an incorrect compat there that made the docs environment fail here; I have updated it.

Added the package for the algorithms.
@@ -67,3 +67,21 @@ prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0,
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(), maxiters = 100000,
maxtime = 1000.0)
```

## Multi-objective optimization
The optimizer for Multi-Objective Optimization is `BBO_borg_moea()`. Your objective function should return a tuple of the objective values and you should indicate the fitness scheme to be (typically) Pareto fitness and specify the number of objectives. Otherwise, the use is similar, here is an example:
Member

Why a tuple? That is not going to scale well.

Collaborator Author

Yeah, corrected that mistake to a vector; the struct uses a vector of objective functions.

end
mof = MultiObjectiveOptimizationFunction(multi_obj_func)
prob = Optimization.OptimizationProblem(mof, u0; lb = [0.0, 0.0], ub = [2.0, 2.0])
sol = solve(prob, opt, NumDimensions=2, FitnessScheme=ParetoFitnessScheme{2}(is_minimizing=true))
Member

NumDimensions and FitnessScheme don't match Julia style.

Member

Those should be in the opt.

Member

NumDimensions should be computed from the MultiObjectiveOptimizationFunction; FitnessScheme should be in the opt.
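
For context, the later commits in this PR ("Adding argument mapping for num_dimensions and fitness_scheme", "syntax change for num_dimensions and fitness_scheme passing in solve()") move toward snake_case keywords. A rough sketch of how the call could then look, with the keyword names taken from those commit messages rather than from a finalized API:

```julia
# Hypothetical sketch only: snake_case keywords as suggested in this review; the keyword
# names follow the later commit messages in this PR and are not a finalized API.
# ParetoFitnessScheme comes from BlackBoxOptim.
using Optimization, OptimizationBBO

function multi_obj_func(x, p)
    f1 = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2                        # Rosenbrock
    f2 = -20.0 * exp(-0.2 * sqrt(0.5 * (x[1]^2 + x[2]^2))) -
         exp(0.5 * (cos(2π * x[1]) + cos(2π * x[2]))) + exp(1) + 20.0      # Ackley
    return [f1, f2]
end

u0   = [0.25, 0.25]  # any feasible starting point
mof  = MultiObjectiveOptimizationFunction(multi_obj_func)
prob = Optimization.OptimizationProblem(mof, u0; lb = [0.0, 0.0], ub = [2.0, 2.0])

# In the final design, num_dimensions would ideally be derived from mof itself.
sol = solve(prob, BBO_borg_moea();
            num_dimensions = 2,
            fitness_scheme = ParetoFitnessScheme{2}(is_minimizing = true))
```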

function func(x, p=nothing)::Vector{Float64}
f1 = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2 # Rosenbrock function
f2 = -20.0 * exp(-0.2 * sqrt(0.5 * (x[1]^2 + x[2]^2))) - exp(0.5 * (cos(2π * x[1]) + cos(2π * x[2]))) + exp(1) + 20.0 # Ackley function
return [f1, f2]
Member

it's an array now? Is there an in-place form?

# In this example, we have no constraints
gx = [0.0] # Inequality constraints (not used)
hx = [0.0] # Equality constraints (not used)
return [f1, f2], gx, hx
Member

This is now a third API?

Added evolutionary to the package.
updated algorithm call.
Correction of changing tuple to vector.
function multi_obj_func(x, p)
f1 = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2 # Rosenbrock function
f2 = -20.0 * exp(-0.2 * sqrt(0.5 * (x[1]^2 + x[2]^2))) - exp(0.5 * (cos(2π * x[1]) + cos(2π * x[2]))) + exp(1) + 20.0 # Ackley function
return (f1, f2)
Member

you didn't actually change it though?

corrected algorithm calls.
@Vaibhavdixit02 (Member)

@ParasPuneetSingh please finish this up.

Adding argument mapping for num_dimensions and fitness_scheme.
syntax change for num_dimensions and fitness_scheme passing in solve().
@ParasPuneetSingh (Collaborator Author)

@ChrisRackauckas please review this at the earliest.

f2 = -20.0 * exp(-0.2 * sqrt(0.5 * (x[1]^2 + x[2]^2))) - exp(0.5 * (cos(2π * x[1]) + cos(2π * x[2]))) + exp(1) + 20.0 # Ackley function
return (f1, f2)
end
mof = MultiObjectiveOptimizationFunction(multi_obj_func)
Member

Suggested change:
- mof = MultiObjectiveOptimizationFunction(multi_obj_func)
+ mof = MultiObjectiveOptimizationFunction(multi_obj_func, obj_prototype = zeros(2))

Comment on lines +79 to +82
function multi_obj_func(x, p)
f1 = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2 # Rosenbrock function
f2 = -20.0 * exp(-0.2 * sqrt(0.5 * (x[1]^2 + x[2]^2))) - exp(0.5 * (cos(2π * x[1]) + cos(2π * x[2]))) + exp(1) + 20.0 # Ackley function
return (f1, f2)
Member

Show in-place form too?
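
For illustration, an in-place variant could look like the sketch below; the three-argument signature and the prototype keyword mirror the interface proposed later in this thread (which floats both `obj_prototype` and `cost_prototype` as names), so treat them as assumptions rather than a released API:

```julia
# Hypothetical in-place form: the objective writes into a preallocated cost vector.
# The 3-argument signature and the cost_prototype keyword follow the interface proposed
# below in this thread; they are assumptions, not the current SciMLBase API.
function multi_obj_func!(cost, x, p)
    cost[1] = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2                    # Rosenbrock
    cost[2] = -20.0 * exp(-0.2 * sqrt(0.5 * (x[1]^2 + x[2]^2))) -
              exp(0.5 * (cos(2π * x[1]) + cos(2π * x[2]))) + exp(1) + 20.0  # Ackley
    return nothing
end

mof = MultiObjectiveOptimizationFunction(multi_obj_func!; cost_prototype = zeros(2))
```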

@ChrisRackauckas (Member)

This is not ready. There are a few steps here. First of all, we document the OptimizationFunction interface at https://github.com/SciML/SciMLBase.jl/blob/master/src/scimlfunctions.jl#L1913-L2030. Right now, MultiObjectiveOptimizationFunction's interface is not documented. Let's start there.

https://github.com/SciML/SciMLBase.jl/blob/master/src/scimlfunctions.jl#L2063-L2066

Write a similar docstring that describes the interface on MultiObjectiveOptimizationFunction. The closest interface is NonlinearLeastSquares, since a nonlinear least-squares problem is a multi-objective optimization that just has a canonical coalesce via coalesce(cost, u, p) = norm(cost, 2).

So okay, what should this thing be? It should be as follows. You have an out-of-place form cost = f(u, p) and an in-place form f!(cost, u, p). You'll have to have a required cost_prototype, which is the vector to use for the in-place cost; this means that length(cost_prototype) equals the number of objectives. isinplace should then count the function as in-place when it takes 3 arguments, matching the same setup as NonlinearFunction. You then need to describe the argument structures expected for each of the derivative definitions, which more or less match those of NonlinearLeastSquares. And there should be an optional coalesce(cost, u, p) = norm(cost, 2) which, when provided, allows the MOO problem to be solved as a scalar optimization problem under the definition coalesce(f(u, p), u, p).

Once you have that down, we merge, and then let's use that defined interface. You'll need to change some definitions to MultiObjectiveOptimizationFunction(f; cost_prototype = zeros(2)), and then the functions here should not require the user to pass in the dimension, since that can be known from the function definition itself.
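
To make the proposal concrete, here is a rough sketch of the contract described above; the constructor keywords, the isinplace rule, and the coalesce hook are all taken from this comment and are not the current SciMLBase implementation:

```julia
using LinearAlgebra: norm

# Out-of-place form: cost = f(u, p), returning one entry per objective.
f_oop(u, p) = [(1.0 - u[1])^2 + 100.0 * (u[2] - u[1]^2)^2,
               (u[1] + u[2] - 1.0)^2]

# In-place form: f!(cost, u, p) writes into cost. It is detected as in-place because it
# takes 3 arguments, matching the NonlinearFunction convention.
function f_iip!(cost, u, p)
    cost[1] = (1.0 - u[1])^2 + 100.0 * (u[2] - u[1]^2)^2
    cost[2] = (u[1] + u[2] - 1.0)^2
    return nothing
end

# cost_prototype is required for the in-place form; length(cost_prototype) gives the
# number of objectives, so solvers never need the user to pass a dimension separately.
# coalesce(cost, u, p) optionally collapses the objectives to a scalar, letting the MOO
# problem be handed to a single-objective optimizer as coalesce(f(u, p), u, p).
coalesce_default(cost, u, p) = norm(cost, 2)

# Hypothetical constructor calls under this proposed contract (not yet implemented):
# mof_oop = MultiObjectiveOptimizationFunction(f_oop;  cost_prototype = zeros(2))
# mof_iip = MultiObjectiveOptimizationFunction(f_iip!; cost_prototype = zeros(2),
#                                              coalesce = coalesce_default)
```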

Draft code for Euler's method using Gradient Descent.
Draft-1 tests for the ODE solver (Euler's method using gradient descent).
Initial project.toml for ODE package
Updated the code to match the interface structure of OptimizationNLopt.jl
Updated runtests.jl to match the updated OptimizationODE.jl code.