
static predicates of a domain are not in Step.State #210

Open
arashHaratian opened this issue Dec 19, 2024 · 8 comments

Comments

@arashHaratian

Hi,

I am working with the following files: dom_files.zip

I tried to get all the grounded predicates from a state, and just noticed that the static predicates (IS-GOAL, IS-NONGOAL, MOVE-DIR) are not in the state.

My code is as follows (I create a Generator, read the IPC plan file, and then build my own transitions):

PATH = 'dom_files'

generator = Generator(f'{PATH}/domain.pddl', f'{PATH}/p01.pddl')
plan = generator.generate_plan(True, f'{PATH}/p01.plan')
trace = generator.generate_single_trace_from_plan(plan)


# Generate (pre-state, post-state) transitions for each action name
transitions = {}  # action_name: [[pre1, post1], [pre2, post2], ...]
for time_step in range(len(trace) - 1):
    current_action_name = trace[time_step].action.name
    current_state = trace[time_step].state
    next_state = trace[time_step + 1].state
    transitions.setdefault(current_action_name, []).append([current_state, next_state])

# Collect all the predicate names appearing in one pre-state
pred_names = set()
pre, post = transitions['push-to-nongoal'][0]
for grounded_pred in pre:
    pred_names.add(grounded_pred.name)

print(pred_names)
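As an aside, the symptom (static predicate names missing from the printed set) amounts to a set difference between what the domain declares and what the grounder kept. A standalone sketch with illustrative names, not values read from the actual dom_files:

```python
# Illustrative stand-ins, not values parsed from domain.pddl or the trace.
declared = {"at", "clear", "move-dir", "is-goal", "is-nongoal"}  # domain predicates
observed = {"at", "clear"}  # names collected from Step.State, as above

# The grounder pruned these (static) predicates away:
missing = declared - observed
print(sorted(missing))  # ['is-goal', 'is-nongoal', 'move-dir']
```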

Although I am new to MACQ, I am aware that I can build transitions using TraceList.tokenize(IdentityObservation).get_all_transitions(), but the static predicates are still not in the state. I also searched the documentation and found get_obs_static_fluents(), but it only checks for predicates that are already in the state.

Is there any step that I am missing?
Also, is there any way to read the p01.trajectory file directly?

Regards

@haz
Contributor

haz commented Dec 29, 2024

Hey @arashHaratian ! Sorry for the delay on this -- it's taken a bit to get back on top of things at the end of term.

I dug into what's happening here, and it appears to be due to the (fully expected) behaviour of the grounder we use -- the LP grounding strategy of tarski. You can see the specific mention of what's happening here...

https://github.com/aig-upf/tarski/blob/master/src/tarski/grounding/lp_grounding.py#L27

...which is coming into play from here:

https://github.com/AI-Planning/macq/blob/main/macq/generate/pddl/generator.py#L182

I think the reasoning was that since those are static predicates, they'll never play a role in model acquisition, and are hence cast away. Do you need them for some other reason?

Also is there anyway to read the p01.trajectory file directly?

Not sure I understand the question -- can you rephrase?

@arashHaratian
Author

Dear Christian,
I understand that it is close to the end of the year (also, happy new year :) ).

I will check the provided links and will get back to you if I have anything to add, or ask.

I think the reasoning was that since those are static predicates, they'll never play a role in model acquisition, and are hence cast away. Do you need them for some other reason?

I have seen papers that try to learn the static predicates of preconditions, and others that skip learning them and leave it for future work. So I would say they should be available in case some work requires them. Is there any way to get the static predicates?

Also is there anyway to read the p01.trajectory file directly?

What I meant is: is there any functionality in macq to create a macq.trace.Trace object by reading the trace directly from a *.trajectory file like the one attached?

Also, I have another question that may be related to the grounder. In the same example, I noticed that the fluents in the states are not all the possible grounded predicates. I ran into this when I looked up the value of a grounded predicate and got a KeyError. Is there a reason for that?

@haz
Contributor

haz commented Jan 2, 2025

also, happy new year :)

And to you!

I have seen papers that try to learn the static predicates of preconditions, and others that skip learning them and leave it for future work. So I would say they should be available in case some work requires them. Is there any way to get the static predicates?

We'd need to swap out the parser / grounder entirely for that. It's on the long-term plans, but not something easily fixed. An alternative approach might be to find a separate implementation for computing purely static fluents (some syntactic approximations could achieve a lot here -- essentially anything true in the initial state that isn't mentioned in an effect).
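The syntactic approximation described above can be sketched in a few lines. The data structures here are plain-Python stand-ins for parsed PDDL, not the macq or tarski API:

```python
# A predicate is (approximately) static if no action effect ever touches it.
# Action and predicate names below are illustrative only.

def static_predicates(all_predicates, actions):
    """actions: dicts with an 'effects' list of (sign, pred, args) tuples."""
    affected = {pred for act in actions for _sign, pred, _args in act["effects"]}
    return set(all_predicates) - affected

actions = [
    {"name": "move", "effects": [("add", "at", ("?to",)), ("del", "at", ("?from",))]},
    {"name": "push", "effects": [("add", "at-goal", ("?s",)), ("del", "clear", ("?to",))]},
]
preds = {"at", "clear", "at-goal", "move-dir", "is-goal"}
print(sorted(static_predicates(preds, actions)))  # ['is-goal', 'move-dir']
```

Intersecting the result with the atoms true in the initial state would give the set of static facts to re-attach to every state.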

Can you point to some of the papers? I'm still a bit confused as to how this would be meaningfully framed in a model learning setting, but I'm likely just missing something.

What I meant is that, is there any functionality in macq to create a macq.trace.Trace object by just reading the trace from the *.trajectory files like the one that is attached?

Closest would be the CSV option: https://ai-planning.github.io/macq/macq/generate/csv.html

Is that trajectory file format something you found elsewhere? I was unaware of any such standard.

Also, I have another question that may be related to the grounder. In the same example, I noticed that the fluents in the states are not all the possible grounded predicates. I ran into this when I looked up the value of a grounded predicate and got a KeyError. Is there a reason for that?

Very related to the static fluents being pruned away. For a given initial state and goal, it will discard (or not even generate) any predicate (or ground action) that it can deem unreachable. If you can never put a block on itself, then why represent it? This is the thinking behind planning-based grounding that goes beyond the simple naive approach of slot-filling based on types. If you wanted both static fluents and naive grounding, keep in mind that there would be countless "nonsense fluents" that are permanently false (could never be made true for reasonably defined initial states).
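For contrast, naive type-based slot-filling grounding looks like the following. The objects and types are made up for illustration, and this is not how tarski does it internally:

```python
from itertools import product

# Hypothetical typed object universe (illustrative, not from p01.pddl).
objects_by_type = {"location": ["l1", "l2", "l3"], "stone": ["s1"]}

def naive_ground(predicate, param_types):
    """Enumerate every type-consistent grounding, reachable or not."""
    domains = [objects_by_type[t] for t in param_types]
    return [(predicate, *args) for args in product(*domains)]

atoms = naive_ground("move-dir", ["location", "location"])
print(len(atoms))  # 9: all 3 * 3 location pairs, including unreachable ones
```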

@arashHaratian
Author

Thank you for your clear response :)

Can you point to some of the papers?

For instance, the L1 algorithm that learns the static predicates of preconditions, NLOCM, or SIFT.

Is that trajectory file format something you found elsewhere?

The p01.trajectory that I uploaded in the zip file. But since I am new, I am not sure whether it is standardized or not (seems like it is not :) ). You can find [some more examples here](https://github.com/FilutaAI/synthesis-benchmarks).

Regarding not having all the fluents, I understand that there are many nonsensical fluents, but the learner may not assume that it already has the reasonable fluents to work with -- case in point, the L1 algorithm.
My problem (or question) is: we have 49 locations, so why do we not have 49 clear(?loc - location) fluents? Why should the algorithm assume or know that some of these 49 are impossible while it tries to learn the domain?!

Regards

@haz
Contributor

haz commented Jan 2, 2025

For instance, the L1 algorithm that learns the static predicates of preconditions, NLOCM, or SIFT.

Haven't read the L1 in detail (assuming you mean the KR'24 paper), but just took a quick look and it seems that they'd just absorb every static fact that mentions the action's parameters. So very much akin to what Observer would do. This is kind of arbitrary and damaging from a modeling perspective, as there's so much in there that has nothing to do with the actions. The LOCM suite pulls out static facts, but that's coming from a state-free origin. They don't need the traces with static facts, because we project away to just the action labels before learning begins. Not sure what SIFT refers to :P.

There's a decent argument to be made on metrics for model acquisition that would require faithful recapture of the full state (static facts and all), but we haven't yet expanded MACQ to include metrics for measuring the quality of model acquisition. Even then, I might advise ignoring static facts in the metric (e.g., if you're counting the number of preconditions correctly established), since it can be so arbitrary to nail down. There's nothing the model learning can do to truly know if a fact was there as a static fact precondition, or just happened to be true all the time. At least not until you can probe the action applicability function.

The p01.trajectory that I uploaded in the zip file. But since I am new, I am not sure whether it is standardized or not (seems like it is not :) ). You can find [some more examples here](https://github.com/FilutaAI/synthesis-benchmarks).

Ah, got it. Ya, not standardized at all -- they seem to have just used a syntax that would be easy to parse for their work. We'll likely be establishing a JSON protocol for traces in the upcoming IPC on model acquisition.

Regarding not having all the fluents, I understand that there are many nonsensical fluents, but the learner may not assume that it already has the reasonable fluents to work with -- case in point, the L1 algorithm.

Wouldn't L1 just suck up every static fluent that shares the parameters of the conjectured action parameter space? It deals only with positive fluents (a big assumption to begin with), but then every predicate that happens to have the same arguments is automatically assumed to have played a role. It's extremely permissive and lets in loads of spurious stuff.

My problem (or question) is that we have 49 locations, why we do not have 49 clear(?loc - location) fluents? why the algorithm should assume or know that some of these 49 are impossible, while it tries to learn the domain?!

The tarski parse will focus on pruning out things that are provably never going to be true (and perhaps never needed on the way to a goal <-- can't remember if this plays a role).

Ultimately, the tarski parse uses the same logic programming formalism as Fast Downward (or at least nearly identical) to do the grounding in a smart way, since there are countless problems out there that simply can't be grounded otherwise -- the combinatorics on the action parameter space means there are more type-valid groundings than can be reasonably computed (atoms in the universe, blablabla).

Despite all the rhetoric and opinion above, I'm really not in the business of mandating what gets researched + focused on. If you want to throw every static predicate in there, I do believe you should be allowed to! I provide the context above only to try and explain the reasoning to where we've come. A longer-term solution to all this would be:

  1. The pddl library receives a wrapper (or extended functionality) that allows us to ground actions in a type-consistent way (i.e., naive grounding).
  2. We similarly write functionality to apply progression, at least for some subset of the expressivity pddl offers.
  3. We add another backend parser+trace builder for the macq project for those that want to keep everything around.

I'm not convinced the model acquisition research is best served by having them, but again, it's not my call to run the research direction for those outside of my lab ;). Steps 1+2 are things I already would really like to see for separate reasons stemming from work with FOND planners.
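For what it's worth, the progression mentioned in step 2 is simple for the STRIPS fragment. A minimal sketch with made-up atoms (not the pddl library's API):

```python
# STRIPS-style progression: apply a ground action's add/delete lists.
def progress(state, action):
    """state: frozenset of ground atoms; action: dict with 'pre'/'add'/'del' sets.
    Returns the successor state, or None if the action is inapplicable."""
    if not action["pre"] <= state:
        return None
    return frozenset((state - action["del"]) | action["add"])

s0 = frozenset({("at", "s1", "l1"), ("clear", "l2")})
push = {
    "pre": {("at", "s1", "l1"), ("clear", "l2")},
    "add": {("at", "s1", "l2")},
    "del": {("at", "s1", "l1"), ("clear", "l2")},
}
s1 = progress(s0, push)
print(sorted(s1))  # [('at', 's1', 'l2')]
```

Handling the rest of PDDL's expressivity (conditional effects, quantification, numerics) is where the real work in step 2 lies.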

@arashHaratian
Author

Thank you for all the explanations.

Wouldn't L1 just suck up every static fluent that shares the parameters of the conjectured action parameter space? It deals only with positive fluents (a big assumption to begin with), but then every predicate that happens to have the same arguments is automatically assumed to have played a role. It's extremely permissive and lets in loads of spurious stuff.

I am not sure if I understood it correctly, but when the second formula is added to the SAT encoding, the current solution (bindings) should be checked against the pre/post-state of a specific transition. So, when I check the available fluents, sometimes the key is not available. The fix for this is easy, but it would have been nice to have both the current set of fluents and all the ground predicates.

In any case, I appreciate your help. I would suggest closing the issue if you like.
If I try the suggested solution and run into a problem, I may open another issue.

Thank you.

@haz
Contributor

haz commented Jan 6, 2025

I think it's alright to leave this issue open -- eventually we should be able to support both notions of grounding (naive and smart), and the former would hopefully resolve the issue here. I'm plugging away at the repo that would be required to support this, so feel free to keep an eye on things over there: https://github.com/AI-Planning/pddl-utils

@arashHaratian
Author

Dear Christian,

I tried the L1 precondition-learning part, and I can confirm that the presence of the static predicates in the states is necessary. I did not manage to get the static predicates using macq.

I hope adding static predicates makes it into the macq backlog at some point in the future :)
