
Defining custom gaussian function #86

Open
CarvFS opened this issue Sep 15, 2023 · 3 comments

@CarvFS

CarvFS commented Sep 15, 2023

Hello,

I want to add a gaussian function to the function_set. I have done so in the same way as the other functions already defined (e.g., sigmoid):

import numpy as np

def gaussian(x1):
    return np.exp(-np.power(x1, 2))

However, I would like to have it in the form

def gaussian(x1):
    return np.exp(-np.power(a*x1 - b, 2))

with a and b being adjustable parameters. Using mul, sub, and const in the function_set I would eventually get a gaussian of this form, but is there a way of enforcing it for all gaussians in the expression?
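For concreteness, the composition I mean is a traversal like [gaussian, sub, mul, const, x1, const], which already evaluates to the desired form (toy check in plain numpy):

import numpy as np

def gaussian(x1):
    return np.exp(-np.power(x1, 2))

# gaussian(a*x1 - b) == exp(-(a*x1 - b)**2), but here a and b are ordinary
# const tokens that the search has to place next to every gaussian,
# rather than parameters of the gaussian token itself.
a, b = 2.0, 1.0
x1 = np.linspace(-1.0, 2.0, 5)
print(gaussian(a * x1 - b))  # same values as np.exp(-np.power(a*x1 - b, 2))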

Sincerely yours,
Felipe Silva Carvalho

@brendenpetersen
Collaborator

Sorry I missed this earlier. This is an interesting question. I don't see a simple solution, but it's definitely doable by implementing custom versions of DSO core abstractions.

I think it depends on how you want to obtain values for a and b. Do you want them to be optimized with respect to the reward function every time your expression is evaluated (similarly to how const is optimized), or do you want them to be learned by the algorithm? The advantage of the optimization route is that you'll likely get better values, but it'll be much more computationally expensive, since there's an inner optimization loop every time you evaluate the reward of an expression containing a Gaussian.

I think either way would take a bit of coding on top of the current code, but I think it would make for a nice addition. Inner-loop optimization could mimic the const token. The easiest way might be to treat the Gaussian as a unary function, then pre-process the traversal into an equivalent one with two const tokens (which are already implemented). For example, the expression [add, x1, gaussian, sin, x2] would be transformed to [add, x1, exp, mul, -1, n2, sub, mul, const, sin, x2, const], and then each const would be optimized.
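A rough sketch of that pre-processing step, treating the traversal as a plain list of token names (in DSO the traversal holds Token objects rather than strings, and the arity table below is only for this example):

# Arities for the tokens used in the example above; illustrative only.
ARITY = {"add": 2, "sub": 2, "mul": 2, "exp": 1, "sin": 1, "n2": 1,
         "gaussian": 1, "const": 0, "-1": 0, "x1": 0, "x2": 0}

def subtree_end(traversal, start):
    # Index one past the subtree rooted at `start` in a prefix traversal.
    needed, i = 1, start
    while needed > 0:
        needed += ARITY[traversal[i]] - 1
        i += 1
    return i

def expand_gaussians(traversal):
    # Replace each unary gaussian with exp(-1 * (const * arg - const)**2).
    out, i = [], 0
    while i < len(traversal):
        if traversal[i] == "gaussian":
            arg_start = i + 1
            arg_end = subtree_end(traversal, arg_start)
            arg = expand_gaussians(traversal[arg_start:arg_end])
            out += ["exp", "mul", "-1", "n2", "sub", "mul", "const"] + arg + ["const"]
            i = arg_end
        else:
            out.append(traversal[i])
            i += 1
    return out

print(expand_gaussians(["add", "x1", "gaussian", "sin", "x2"]))
# ['add', 'x1', 'exp', 'mul', '-1', 'n2', 'sub', 'mul', 'const', 'sin', 'x2', 'const']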

Learning the values might be trickier. You could think of the Gaussian as a ternary function, where the arguments are x, a, and b. Then you could force a and b to be constants via a custom Prior. Another tricky part is that DSO currently only supports up to binary operators. So you'd have to either settle for learning just one of the parameters, or enforce a nested binary structure using an extra function and a custom Prior. In that case, a traversal could look like [Gaussian, x1, ForcedDoubleConstants, value_for_a, value_for_b].
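Purely to show how that last traversal would evaluate, here is a toy version with hypothetical tokens (neither a binary Gaussian nor ForcedDoubleConstants exists in DSO today):

import numpy as np

def forced_double_constants(a, b):
    # Hypothetical binary token that just packages the two learned parameters.
    return (a, b)

def gaussian_binary(x1, params):
    # Hypothetical binary gaussian: the second argument is the (a, b) pair.
    a, b = params
    return np.exp(-np.power(a * x1 - b, 2))

x1 = np.linspace(-1.0, 2.0, 5)
print(gaussian_binary(x1, forced_double_constants(2.0, 1.0)))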

@brendenpetersen
Collaborator

brendenpetersen commented Dec 17, 2023

A third way would be to use the poly token. This will be optimized analytically, meaning it is very fast and doesn't require an inner optimization loop. The expression [gaussian, poly] (with poly configured to have max degree 1) would then get you what you want. If you don't want to allow [gaussian, x1] (i.e., you always want to learn the coefficients), you can add a custom Prior to prevent that. This might be the best way, and it doesn't actually require changing the code (except for adding gaussian to function_set.py).

A slight downside is that the poly token won't work if the surrounding function is non-invertible, e.g. [sin, gaussian, poly] is not allowed.
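For intuition only (a toy sketch, not DSO's actual poly optimizer): with poly restricted to degree 1, [gaussian, poly] evaluates to exp(-(c0 + c1*x)^2), which is the desired form with a = c1 and b = -c0. On a domain where c0 + c1*x keeps one sign, the coefficients fall out of a single least-squares solve after inverting the outer functions, which is also why a non-invertible ancestor like sin breaks it:

import numpy as np

def gaussian(x1):
    return np.exp(-np.power(x1, 2))

# Toy data from the target form exp(-(a*x - b)^2) with a = 2, b = 1.
x = np.linspace(1.0, 3.0, 200)   # on this domain 2*x - 1 > 0, so no sign flip
y = gaussian(2.0 * x - 1.0)

# Invert the outer functions: sqrt(-log(y)) = |c0 + c1*x|, linear in (c0, c1).
t = np.sqrt(-np.log(y))
A = np.column_stack([np.ones_like(x), x])
c0, c1 = np.linalg.lstsq(A, t, rcond=None)[0]
print(c1, -c0)                   # recovers a ~ 2.0 and b ~ 1.0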

@CarvFS
Author

CarvFS commented Dec 18, 2023

Thank you very much for your answer! I will try the third way you mentioned first and see what happens.
