static predicates of a domain are not in Step.State #210
Hey @arashHaratian ! Sorry for the delay on this -- it's taken a bit to get back on top of things at the end of term.

I dug into what's happening here, and it appears to be due to the (fully expected) behaviour of the grounder we use -- the LP strategy of https://github.com/aig-upf/tarski/blob/master/src/tarski/grounding/lp_grounding.py#L27 ...which comes into play from here: https://github.com/AI-Planning/macq/blob/main/macq/generate/pddl/generator.py#L182

I think the reasoning was that since those are static predicates, they'll never play a role in model acquisition, and are hence cast away. Do you need them for some other reason?
Not sure I understand the question -- can you rephrase?
Dear Christian, I will check the provided links and will get back to you if I have anything to add or ask.
I have seen papers that try to learn the static predicates of the preconditions, and there are papers that ignore learning them and leave it for future work. So I would say they should be available in case some work requires them. Is there any solution for getting the static predicates?
What I meant is: is there any functionality in macq to create a …?

Also, I have another question that may be related to the grounder. In the same example, I realized that the fluents in the states are not all the possible grounded predicates, for some reason. I noticed it when I was looking up the value of a grounded predicate and got a KeyError. Is there any reason for that?
And to you!
We'd need to swap out the parser / grounder entirely for that. It's on the long-term plans, but not something easily fixed. An alternative approach might be to find a separate implementation for computing purely static fluents (some syntactic approximations could achieve a lot here -- essentially anything true in the initial state that isn't mentioned in an effect).

Can you point to some of the papers? I'm still a bit confused as to how this would be meaningfully framed in a model learning setting, but I'm likely just missing something.
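For illustration, that syntactic approximation could look something like the following minimal sketch (plain Python over assumed inputs -- `init_atoms` and `effect_predicates` are not macq or tarski API):

```python
# Sketch of the syntactic approximation described above: treat a predicate
# as static if it never appears in any action effect; its static facts are
# then exactly the init atoms built from such predicates.

def static_facts(init_atoms, effect_predicates):
    """init_atoms: atoms as tuples, e.g. ("MOVE-DIR", "p1", "p2", "dir-left").
    effect_predicates: names of predicates mentioned in some action effect."""
    return [atom for atom in init_atoms if atom[0] not in effect_predicates]

# Example with the predicates from this issue (Sokoban-like domain):
init = [("MOVE-DIR", "p1", "p2", "dir-left"), ("IS-GOAL", "p2"), ("at", "stone1", "p1")]
effects = {"at", "at-goal", "clear"}  # predicates that show up in some effect
print(static_facts(init, effects))   # -> the MOVE-DIR and IS-GOAL atoms survive
```

Anything this returns is static by construction, since no effect can ever touch it.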
Closest would be the CSV option: https://ai-planning.github.io/macq/macq/generate/csv.html

Is that trajectory file format something you found elsewhere? I was unaware of any such standard.
Very related to the static fluents being pruned away. For a given initial state and goal, it will discard (or not even generate) any predicate (or ground action) that it can deem unreachable. If you can never put a block on itself, then why represent it? This is the thinking behind planning-based grounding that goes beyond the simple naive approach of slot-filling based on types. If you wanted both static fluents and naive grounding, keep in mind that there would be countless "nonsense fluents" that are permanently false (they could never be made true for reasonably defined initial states).
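For contrast, naive slot-filling grounding is just the cross product over type-valid objects -- a minimal sketch (illustrative names only, not macq or tarski API):

```python
from itertools import product

# Naive "slot-filling" grounding: every type-valid combination of objects,
# reachable or not. Contrast with the LP-based grounding macq inherits from
# tarski, which omits atoms it can prove unreachable.
objects_by_type = {"block": ["a", "b", "c"]}
predicates = {"on": ["block", "block"]}  # predicate name -> parameter types

def naive_ground(predicates, objects_by_type):
    for name, param_types in predicates.items():
        for args in product(*(objects_by_type[t] for t in param_types)):
            yield (name, *args)

print(list(naive_ground(predicates, objects_by_type)))
# Includes permanently-false "nonsense fluents" like ("on", "a", "a").
```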
Thank you for your clear response :)
For instance, the L1 algorithm that learns the static predicates of the precondition, NLOCM, or SIFT.
Regarding not having all the fluents: I understand that there are many nonsensical fluents, but the learner may not assume that it already has the reasonable fluents to work with; case in point, the L1 algorithm. Regards
Haven't read the L1 in detail (assuming you mean the KR'24 paper), but just took a quick look, and it seems that they'd just absorb every static fact that mentions the action's parameters. So very much akin to what Observer would do. This is kind of arbitrary and damaging from a modeling perspective, as there's so much in there that has nothing to do with the actions.

The LOCM suite pulls out static facts, but that's coming from a state-free origin. They don't need the traces with static facts, because we project away to just the action labels before learning begins. Not sure what SIFT refers to :P.

There's a decent argument to be made on metrics for model acquisition that would require faithful recapture of the full state (static facts and all), but we haven't yet expanded MACQ to include metrics for measuring the quality of model acquisition. Even then, I might advise ignoring static facts in the metric (e.g., if you're counting the number of preconditions correctly established), since it can be so arbitrary to nail down. There's nothing the model learning can do to truly know if a fact was there as a static precondition, or just happened to be true all the time. At least not until you can probe the action applicability function.
Ah, got it. Ya, not standardized at all -- they seem to have just used a syntax that would be easy to parse for their work. We'll likely be establishing a JSON protocol for traces in the upcoming IPC on model acquisition.
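Purely to sketch the idea, such a trace might look something like the following (the field names here are made up; nothing about the protocol is settled):

```python
import json

# Entirely hypothetical shape for a JSON trace -- illustrating the idea only;
# the actual IPC protocol, field names included, is yet to be defined.
trace = {
    "objects": {"stone1": "stone", "p1": "location", "p2": "location"},
    "steps": [
        {"state": ["(at stone1 p1)", "(clear p2)"], "action": "(move stone1 p1 p2)"},
        {"state": ["(at stone1 p2)", "(clear p1)"], "action": None},
    ],
}
print(json.dumps(trace, indent=2))
```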
Wouldn't L1 just suck up every static fluent that shares the parameters of the conjectured action parameter space? It deals only with positive fluents (a big assumption to begin with), but then every predicate that happens to have the same arguments is automatically assumed to have played a role. It's extremely permissive and lets in loads of spurious stuff.
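In sketch form, that permissive rule amounts to something like this (my paraphrase of the idea, not the paper's actual code):

```python
# Paraphrase of the permissive rule described above: every static atom whose
# arguments all fall within the ground action's arguments is absorbed as a
# conjectured precondition, relevant or not.

def absorbed_preconditions(static_atoms, action_args):
    scope = set(action_args)
    return [a for a in static_atoms if set(a[1:]) <= scope]

statics = [("MOVE-DIR", "p1", "p2", "d"), ("IS-GOAL", "p2"), ("IS-NONGOAL", "p1")]
print(absorbed_preconditions(statics, ["stone1", "p1", "p2"]))
# -> IS-GOAL and IS-NONGOAL are both pulled in, whether they matter or not.
```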
The tarski parse will focus on pruning out things that are provably never going to be true (and perhaps never needed on the way to a goal <-- can't remember if this plays a role). Ultimately, the tarski parse uses the same logic programming formalism as Fast Downward (or at least a nearly identical one) to do the grounding in a smart way, since there are countless problems out there that simply can't be grounded otherwise -- the combinatorics on the action parameter space means there are more type-valid groundings than can be reasonably computed; a 3-parameter action over 1000 objects already admits 1000^3 = 10^9 candidate groundings (atoms in the universe, blablabla).

Despite all the rhetoric and opinion above, I'm really not in the business of mandating what gets researched + focused on. If you want to throw every static predicate in there, I do believe you should be allowed to! I provide the context above only to try and explain the reasoning behind where we've come. A longer-term solution to all this would be:
I'm not convinced the model acquisition research is best served by having them, but again, it's not my call to run the research direction for those outside of my lab ;). Steps 1+2 are things I already would really like to see for separate reasons stemming from work with FOND planners.
Thank you for all the explanations.
I am not sure if I understood it correctly, but when the second formula is going to be added to the SAT encoding, the current solution (bindings) should be checked for whether it is available in the pre/post-state of a specific transition. So, when I check the available fluents, sometimes the key is not available. The fix for this is easy, but it would have been nice to have both the current way of having the fluents and all the ground predicates. In any case, I appreciate your help. I would suggest closing the issue if you would like to. Thank you.
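For reference, the easy fix is just a defaulting lookup under the closed-world assumption -- a minimal sketch, with `state` standing in for whatever fluent mapping the observation exposes (not a specific macq type):

```python
# Under the closed-world assumption, a fluent absent from the (pruned) state
# is simply false, so a defaulting lookup avoids the KeyError.

def holds(state, fluent):
    return state.get(fluent, False)

pre_state = {"(at stone1 p1)": True, "(clear p2)": True}
print(holds(pre_state, "(at stone1 p1)"))      # True
print(holds(pre_state, "(MOVE-DIR p1 p2 d)"))  # False, instead of a KeyError
```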
I think it's alright to leave this issue open -- eventually we should be able to support both notions of grounding (naive and smart), and the former would hopefully resolve the issue here. I'm plugging away at the repo that would be required to support this, so feel free to keep an eye on things over there: https://github.com/AI-Planning/pddl-utils
Dear Christian, I tried the L1 precondition learning part, and I can confirm that the existence of the static predicates in the states is necessary. I did not manage to get the static predicates using macq. I hope adding static predicates will be in the backlog of macq at some point in the future :)
Hi,
I am working with the following files: dom_files.zip
I tried to get all the grounded predicates from a state, and just noticed that the static predicates (`IS-GOAL`, `IS-NONGOAL`, `MOVE-DIR`) are not in the state. My code is as follows (I make a `Generator`, then read the IPC plan file, and lastly create my own transitions):
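Reconstructed from memory, so the exact `Generator` keywords and the plan-reading call below are assumptions rather than verified macq API:

```python
# Rough sketch of the setup -- the constructor keywords and the plan-reading
# call are assumptions about macq's API and may not match the library exactly.
from macq.generate.pddl import Generator

gen = Generator(dom="domain.pddl", prob="p01.pddl")                 # assumed keywords
plan = gen.generate_plan(from_ipc_file=True, filename="p01.plan")  # assumed call

# I then walk through the plan's actions myself, building my own
# (pre-state, action, post-state) transitions from the generator's states --
# and the static predicates never appear in any of those states.
```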
Although I am new to MACQ, I am aware that I can make transitions using `TraceList.tokenize(IdentityObservation).get_all_transitions()`, but still the static predicates are not in the state. I also searched the documentation and found `get_obs_static_fluents()`, but it just checks for the predicates that are already in the state. Is there any step that I am missing?

Also, is there any way to read the `p01.trajectory` file directly?

Regards