Fixes #36942 - Improve the puppet plugin cleanup #9918
base: develop
Conversation
Note: With this change applied, I tested the following:
* Enable puppet plugin
* Disable + Purge it completely

Results: No errors while performing these operations.
LGTM based on my investigation yesterday. I haven't tested this version specifically, but it was also tested by the reporter on two older versions of Satellite.
There is something I would like to add here. It is not directly related to the problem I hope to fix with this PR, but it has existed for a long time. Say a Host or Hostgroup still has a Puppet proxy and a Puppet CA proxy assigned when the plugin is purged.
What happens is that nothing unsets those two objects on any Host or Hostgroup. So when you go to the Satellite UI and edit the Host or Hostgroup, you will still see those associations, and if you just try to submit the form, it fails saying that the puppet proxy does not exist. The fix is rather simple:
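A minimal console sketch of that kind of fix, assuming the columns are `puppet_proxy_id` and `puppet_ca_proxy_id` on hosts and hostgroups (an illustration, not necessarily the exact snippet discussed below):

```ruby
# Run from a Foreman console (e.g. foreman-rake console). Clears the stale
# Puppet proxy / Puppet CA proxy associations left behind after the Puppet
# feature was removed from the smart proxies.
# Column names are assumptions based on this discussion.
[Host::Managed, Hostgroup].each do |klass|
  klass.where('puppet_proxy_id IS NOT NULL OR puppet_ca_proxy_id IS NOT NULL')
       .update_all(puppet_proxy_id: nil, puppet_ca_proxy_id: nil)
end
```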
But I feel this should run as a migration and as part of the cleanup code in foreman_puppet. We even have customer cases for this scenario. So perhaps that should be a separate BZ/Redmine/PR?
I believe we should treat this as a separate BZ and PR. I believe these columns remained in core because the association is also used when facts/reports are uploaded (which is not foreman_puppet functionality). That's also probably why they haven't been moved to the puppet facet. Therefore, since we clean up the Puppet feature from proxies, I believe we should reset the values to nil as you do in your console snippet. That should probably happen as part of the cleanup in foreman-maintain (not only when run with the -f option). So we should not revert the core migration, but we should do the logic you added here in the foreman-maintain procedure. We should probably do it through raw SQL, similar to e.g. this (see the rough sketch below). That may be a lot of Ruby, so I'd suggest starting by opening the issue and we could probably help with that. Let's keep this PR just for the -f cleanup for now.
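A rough idea of what that raw-SQL variant could look like (table and column names are assumptions; the actual foreman-maintain plumbing for talking to PostgreSQL is not reproduced here):

```ruby
# Hedged sketch only: the same reset expressed as raw SQL, as a
# foreman-maintain cleanup procedure might run it. Table and column
# names are assumptions based on the discussion above.
sql = <<~SQL
  UPDATE hosts      SET puppet_proxy_id = NULL, puppet_ca_proxy_id = NULL;
  UPDATE hostgroups SET puppet_proxy_id = NULL, puppet_ca_proxy_id = NULL;
SQL
ActiveRecord::Base.connection.execute(sql)
```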
Noted. I will do my best; if any help is needed, I will open the issue as suggested and seek assistance. I am good with the current changes in this PR for now, and I have no further changes in mind unless anyone suggests something else. I noticed a test failure that seems to be related to Katello (some RPM-related test):
Not sure if I need to do anything about it.
Without reviewing the changes themselves, only answering this question:
No, Katello is (sadly) rather unstable and its failures are not always related.
It happens because, as part of the cleanup code, we don't clear all the necessary migrations and tables. Because of that, when someone later tries to re-enable the plugin, it assumes those migrations have already run and tries to run a different migration whose dependency has not been satisfied yet.
The way this was intended to function was to revert the migrations of foreman_puppet, which is done in foreman-maintain:
https://github.com/theforeman/foreman_maintain/blob/b88004abad463cf467cc7e5c4f38c1366193ea21/definitions/procedures/puppet/remove_puppet.rb#L15
That way you don't need to keep two repositories in sync, but instead rely on built-in functionality. That should drop the tables and remove the SchemaMigration entries. Otherwise, you need to add an entry here every time a DB migration is added to foreman_puppet.
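For illustration, the standard Rails mechanism for reverting an engine's migrations is the scoped migrate task; whether foreman-maintain invokes exactly this form is an assumption here, not confirmed from the linked code:

```ruby
# Reverting all foreman_puppet migrations via the standard Rails scope
# mechanism (its migration files are installed as *.foreman_puppet.rb).
# Running the `down` methods drops the plugin's tables and removes its rows
# from schema_migrations. The exact command foreman-maintain uses may differ.
system('foreman-rake', 'db:migrate', 'SCOPE=foreman_puppet', 'VERSION=0')
```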
Note: With this change applied, I tested the following:
* Enable puppet plugin
* Disable + Purge it completely
How did you disable and purge it? If you ran foreman_maintain, is there a log to tell us whether it ran the db:migrate task?
```ruby
where(version: %w[20090722141107 20090802062223 20110412103238 20110712070522
                  20120824142048 20120905095532 20121018152459 20130725081334
                  20140318153157 20140407161817 20140407162007 20140407162059
                  20140413123650 20140415032811 20141109131448 20150614171717
                  20160307120453 20160626085636 20180831115634 20181023112532
                  20181224174419]).delete_all
```
```diff
-                  20181224174419]).delete_all
+                  20181224174419 20090905150132 20101121140000 20201125113903
+                  20200803113803 20200722171017 20200803113531 20200803113903
+                  20211111125003]).delete_all
```
Similarly, I tried to find these files. They aren't in Foreman:
```
$ ls db/migrate/20181224174419_* db/migrate/20090905150132_* db/migrate/20101121140000_* db/migrate/20201125113903_* db/migrate/20200803113803_* db/migrate/20200722171017_* db/migrate/20200803113531_* db/migrate/20200803113903_* db/migrate/20211111125003_*
ls: cannot access 'db/migrate/20181224174419_*': No such file or directory
ls: cannot access 'db/migrate/20090905150132_*': No such file or directory
ls: cannot access 'db/migrate/20101121140000_*': No such file or directory
ls: cannot access 'db/migrate/20201125113903_*': No such file or directory
ls: cannot access 'db/migrate/20200803113803_*': No such file or directory
ls: cannot access 'db/migrate/20200722171017_*': No such file or directory
ls: cannot access 'db/migrate/20200803113531_*': No such file or directory
ls: cannot access 'db/migrate/20200803113903_*': No such file or directory
ls: cannot access 'db/migrate/20211111125003_*': No such file or directory
```
But some are in foreman_puppet:
```
$ ls -1 db/migrate/20181224174419_* db/migrate/20090905150132_* db/migrate/20101121140000_* db/migrate/20201125113903_* db/migrate/20200803113803_* db/migrate/20200722171017_* db/migrate/20200803113531_* db/migrate/20200803113903_* db/migrate/20211111125003_*
ls: cannot access 'db/migrate/20181224174419_*': No such file or directory
ls: cannot access 'db/migrate/20090905150132_*': No such file or directory
db/migrate/20101121140000_add_environment_to_template_combinations.foreman_puppet.rb
db/migrate/20200722171017_create_host_puppet_facet.foreman_puppet.rb
db/migrate/20200803113531_create_hostgroup_puppet_facet.foreman_puppet.rb
db/migrate/20200803113803_migrate_environment_to_puppet_facet.foreman_puppet.rb
db/migrate/20200803113903_migrate_host_type_in_host_config_groups.foreman_puppet.rb
db/migrate/20201125113903_migrate_puppetclasses_to_facets.foreman_puppet.rb
db/migrate/20211111125003_drop_puppetclasses_direct_references.foreman_puppet.rb
```
That begs the question: where are 20090905150132 and 20181224174419 from?
Hello,
https://github.com/theforeman/foreman_puppet/tree/master/db/migrate_foreman and https://github.com/theforeman/foreman_puppet/tree/master/db/migrate have the related migration files.
Ah, thanks. A comment in this file about how to find these magic numbers would be nice for any future reader.
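For example, something along these lines could document or even rebuild the list (the engine constant and directory layout are assumptions based on the foreman_puppet links above, not verified):

```ruby
# The "magic numbers" are the timestamp prefixes of the migration files that
# foreman_puppet ships in db/migrate and db/migrate_foreman.
# ForemanPuppet::Engine.root and the directory names are assumptions here.
pattern = ForemanPuppet::Engine.root.join('db', '{migrate,migrate_foreman}', '*.rb').to_s
versions = Dir.glob(pattern).map { |path| File.basename(path)[/\A\d+/] }.sort
```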
Thanks. I will keep this point documented as a best practice.
Hello, I had used foreman-maintain only, but I think it should be possible to mimic the backend actions executed by foreman-maintain as well. `satellite-maintain plugin purge-puppet --remove-all-data` does three things:

* `add_step(Procedures::Puppet::RemovePuppetData) if context.get(:remove_data)` -> runs `foreman-rake purge:puppet`
* `add_step(Procedures::Service::Restart)` -> restarts services
* At the very end it regenerates the Apipie cache as well.

Foreman-maintain sequence from logs:
Unrelated, but that's no longer needed since apipie is now always generated dynamically. Can you file a bug for that too?
I get the impression it didn't run the rake task, but the logging isn't very clear. Perhaps there are better logs in
It did.
I just needed to show you a few more lines: https://github.com/theforeman/foreman_maintain/tree/master/definitions/procedures/puppet
I meant: it did execute. db:migrate is only expected to be executed if I try to re-enable the puppet plugin after that. Do you need data from that execution?
This is what happens when I try to re-enable the plugin after disabling and purging it:
The Apipie cache part has already been handled by this commit in foreman-maintain. Since I was using an older version, the code is still present for me.
What I tried to explain in #9918 (review) was that I would expect it to happen as part of
You didn't provide logs here, but that's the crucial step where it should happen. I'd expect it to be (based on https://github.com/theforeman/foreman_maintain/blob/b88004abad463cf467cc7e5c4f38c1366193ea21/definitions/procedures/puppet/remove_puppet.rb#L12-L19):
Your comment implies step 2 didn't happen, and that's exactly the failure you see: the DB migrations aren't reverted. It's guarded by an if condition with 2 clauses, and either one of those could have failed.
Great.
My bad, I misunderstood (as I was juggling a few things). I apologize for the confusion. It did happen: the output of the
Ok, now that's interesting. It does what it's supposed to do, but then doesn't clean up. I would have expected it to remove the entries from schema_migrations. I think my next step would be to get to the point where the plugin is enabled, do a database dump (in plain text SQL), then run the cleanup, dump again, and diff the two dumps.
I did so on my setup and added the diff here. If it's not super helpful, I can send you the dumps via email.
If we have time, we can further debug why the down migration does not clean up schema_migrations. But I'd get this fix in at least, so that we unblock users until new migrations are added. If we fix the root cause, this patch won't break anything afterwards either.
@ekohl any reason to hold this any longer?
Users who mistakenly remove the puppet plugin forcefully, i.e. purge all data related to it, cannot re-enable it afterwards.
It happens because, as part of the cleanup code, we don't clear all the necessary migrations and tables. Because of that, when someone later tries to re-enable the plugin, it assumes those migrations have already run and tries to run a different migration whose dependency has not been satisfied yet.
This PR helps remove all those migrations from the schema_migrations table, as well as the host/hostgroup puppet facet tables, which the installer is expected to re-run and re-create later when the puppet plugin is re-enabled.
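For illustration, a condensed sketch of those two pieces of cleanup (`foreman_puppet_versions` is a placeholder for the version list shown in the diff above, and the facet table names are assumptions, not copied from the PR's code):

```ruby
# Remove foreman_puppet's rows from schema_migrations so its migrations run
# again on re-enable, and drop the facet tables so they can be re-created.
# foreman_puppet_versions is a placeholder; table names are assumptions.
ActiveRecord::SchemaMigration.where(version: foreman_puppet_versions).delete_all
%i[host_puppet_facets hostgroup_puppet_facets].each do |table|
  ActiveRecord::Base.connection.drop_table(table, if_exists: true)
end
```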
For more details, please refer to the work done in BZ# https://bugzilla.redhat.com/show_bug.cgi?id=2087067
Let me know if you see any migrations included in the PR that should not be deleted or that feel unnecessary to delete.