Misleading output from COBYLA / SLSQP with infeasible start value #254

Open
ashander opened this issue Apr 2, 2019 · 2 comments

ashander commented Apr 2, 2019

This is a 2-D toy problem with no feasible region inside the constraints (x - y > 0 and x - y < -1); the same problem was reported as a scipy bug in scipy/scipy#7618.

Using the R interface, the problem is:

library(nloptr)
cost <- function(x) -1 * x[1] + 4 * x[2]  # objective
in1 <- function(x) x[2] - x[1] - 1        # inequality constraint, hin convention: in1(x) >= 0
in2 <- function(x) x[1] - x[2]            # inequality constraint, hin convention: in2(x) >= 0
xl <- c(-5, -5)
xu <- c(5, 5)
x_con <- function(x) c(in1(x), in2(x))
x0 <- c(1, 5) # infeasible, as there is no feasible region
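
As a quick sanity check, in1(x) + in2(x) = -1 for every x, so the two constraints can never hold simultaneously and the feasible set is indeed empty:

sum(x_con(x0))       # -1
sum(x_con(c(0, 0)))  # -1, for any x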

Both SLSQP and COBYLA report apparent convergence:

SLSQP

slsqp(x0=x0, fn=cost, hin=x_con, lower=xl, upper=xu)
$par
[1] -5 -4

$value
[1] -11

$iter
[1] 5

$convergence
[1] 4

$message
[1] "NLOPT_XTOL_REACHED: Optimization stopped because xtol_rel or xtol_abs
(above) was reached." 

COBYLA

cobyla(x0=x0, fn=cost, hin=x_con, lower=xl, upper=xu)
$par
[1] -4.999999 -4.500001

$value
[1] -13.00001

$iter
[1] 70

$convergence
[1] 4

$message
[1] "NLOPT_XTOL_REACHED: Optimization stopped because xtol_rel or xtol_abs
(above) was reached."

stevengj (Owner) commented Apr 11, 2019

Yes, the algorithms get confused here. In general, most algorithms only guarantee convergence to a local optimum if they are given a feasible starting point.

It would be nicer to return an error code here. The trick is reliably detecting this case, since for an active constraint some algorithms may approach the feasible set from the outside. An extreme case is a nonlinear equality constraint h(x) = 0, where the converged value will usually be at best within O(xtol) of the feasible set. So returning an error code simply because the result is infeasible wouldn't be good.

If the user specifies a positive tolerance for the constraint, I suppose we could return an error if the optimum violates the constraint by more than this tolerance. But since the default tolerance is zero, I don't know that we should return an error in that case when the return value is only slightly infeasible.
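
For reference, a sketch of how a positive per-constraint tolerance could be supplied through the low-level nloptr() interface; I am assuming the opts entry tol_constraints_ineq (treat that option name as an assumption), and note that nloptr() expects constraints in the g(x) <= 0 form, hence the sign flip relative to hin:

res <- nloptr(x0 = x0,
              eval_f = cost,
              lb = xl, ub = xu,
              eval_g_ineq = function(x) -x_con(x),  # nloptr() wants g(x) <= 0
              opts = list(algorithm = "NLOPT_LN_COBYLA",
                          xtol_rel = 1e-8,
                          tol_constraints_ineq = c(1e-6, 1e-6)))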

@smjaberl

Well, a workaround is to check manually whether the solution respects the constraints or not; that is, to check them after you get the solution from the optimizer.

It would be very nice to get well-defined output.
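
A minimal sketch of such a post-hoc check, using the problem definition above (the tolerance is an arbitrary choice):

sol <- cobyla(x0 = x0, fn = cost, hin = x_con, lower = xl, upper = xu)
tol <- 1e-8                            # acceptable constraint violation (arbitrary)
violation <- -min(x_con(sol$par), 0)   # how far the worst constraint dips below zero
if (violation > tol) {
  warning(sprintf("reported optimum is infeasible (max violation %.3g)", violation))
}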
