Fixes #36942 - Improve the puppet plugin cleanup #9918

Open · wants to merge 1 commit into develop

Conversation

sayan3296 (Contributor)

If users mistakenly remove the puppet plugin forcefully, i.e. purge all data related to it, they cannot re-enable it afterwards.

It happens because, as part of the cleanup code, we don't clear all the necessary migrations and tables. Due to that, when someone later tries to re-enable the plugin, it assumes those migrations are already done and tries to run a different migration whose dependency has not been satisfied yet.

This PR removes those migrations from the schema_migrations table and drops the host/hostgroup_puppet_facets tables, which the installer is expected to re-run and re-create later when the puppet plugin is re-enabled.
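
For illustration, leftover entries can be checked from a foreman-rake console after a purge; a minimal sketch (the two version numbers are only examples of foreman_puppet-era migrations):

# Query schema_migrations directly through the Rails DB connection.
# Any version still recorded here is treated by Rails as "already run",
# which is what later confuses the installer's db:migrate.
leftover = ActiveRecord::Base.connection.select_values(
  "SELECT version FROM schema_migrations WHERE version IN ('20200722171017', '20200803113531')"
)
puts leftover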

For more details, please refer to the work done in BZ# https://bugzilla.redhat.com/show_bug.cgi?id=2087067

Let me know if you see any migrations included in the PR that should not be deleted or seem unnecessary to delete.

@sayan3296 (Contributor Author)

CC @ares

Hello @ekohl, if you can help review the PR changes, that would be great. Let me know if I am missing anything by any chance.

@sayan3296 (Contributor Author)

Note: With this change applied, I tested the following:

  • Enable the puppet plugin
  • Disable + purge it completely
  • Try to enable it back
  • If successful, install some puppet modules inside the production puppet environment
  • Import the environment and modules (classes) in the Satellite UI
  • Create a fake host, select puppet-related options there, and assign a Puppet class to the host
  • Create a hostgroup and repeat the same

Results: no errors while performing these operations.

@ares (Member) commented Nov 24, 2023

LGTM based on my investigation yesterday. I haven't tested this version specifically, but it was tested on two older versions of Satellite, also by the reporter.

sayan3296 marked this pull request as ready for review November 24, 2023 08:10
@sayan3296 (Contributor Author) commented Nov 24, 2023

There is something I would like to add here. It is not directly related to the problem that I hope to fix with this PR, but it has existed long before.

Say,

  • I have puppet_proxy_id and puppet_ca_proxy_id associated with hostgroups and hosts.
  • I have purged puppet or simply disabled the puppet plugin.

What happens is that nothing unsets those two attributes for any hosts or hostgroups. So when you go to the Satellite UI and edit the host or hostgroup, you will still see those associations, and if you just try to submit them, it will fail saying that the puppet proxy does not exist.

The fix is rather simple:

cat << EOF | foreman-rake console
# Null out the stale proxy references; the column_exists? guards make this
# safe to run whether or not the puppet columns are still present.
::Hostgroup.update_all(puppet_proxy_id: nil) if ActiveRecord::Base.connection.column_exists?(:hostgroups, :puppet_proxy_id)
::Hostgroup.update_all(puppet_ca_proxy_id: nil) if ActiveRecord::Base.connection.column_exists?(:hostgroups, :puppet_ca_proxy_id)
Host::Managed.update_all(puppet_proxy_id: nil) if ActiveRecord::Base.connection.column_exists?(:hosts, :puppet_proxy_id)
Host::Managed.update_all(puppet_ca_proxy_id: nil) if ActiveRecord::Base.connection.column_exists?(:hosts, :puppet_ca_proxy_id)
EOF

But I feel like this should run as a migration and as part of the cleanup code in foreman_puppet. We even have customer cases for this scenario. So perhaps that should be a separate BZ/Redmine/PR?
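
For what it's worth, a minimal sketch of what such a cleanup migration could look like (the class name is hypothetical, not actual foreman_puppet code):

class ClearStalePuppetProxyReferences < ActiveRecord::Migration[6.0]
  def up
    # Null out proxy references pointing at the removed Puppet feature;
    # the guard makes this a no-op when the columns are already gone.
    %i[hosts hostgroups].each do |table|
      %i[puppet_proxy_id puppet_ca_proxy_id].each do |column|
        next unless column_exists?(table, column)
        execute("UPDATE #{table} SET #{column} = NULL WHERE #{column} IS NOT NULL")
      end
    end
  end

  def down
    # Nothing to restore; the references were already dangling.
  end
end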

@ares (Member) commented Nov 24, 2023

There is something I would like to add here. It is not directly related to the problem that I hope to fix with this PR, but it has existed long before.

I believe we should treat this as a separate BZ and PR. I believe these columns remained in core because the association is also used when facts/reports are uploaded (which is not foreman_puppet functionality). That's also probably why these haven't been moved to the puppet facet.

Therefore, since we clean up the Puppet feature from proxies, I believe we should reset the values to nil as you do in your console snippet. That should probably happen as part of the cleanup in foreman-maintain (not only when run with the -f option). So we should not revert the core migration, but we should do the logic you added here in the foreman-maintain procedure. We should probably do it through raw SQL, similar to e.g. this.

That may be a lot of Ruby; I'd suggest starting by opening the issue, and we could probably help with that. Let's keep this PR just for the -f cleanup for now.
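
To make that concrete, a minimal sketch of such a foreman-maintain procedure, assuming the feature(:foreman_database).psql helper that other procedures use (the class name and placement are assumptions):

# Hypothetical foreman-maintain procedure: reset the dangling proxy
# references through raw SQL, since foreman-maintain does not load the
# Rails models.
module Procedures::Puppet
  class RemoveProxyReferences < ForemanMaintain::Procedure
    metadata do
      description 'Clear puppet proxy references from hosts and hostgroups'
    end

    def run
      feature(:foreman_database).psql(<<~SQL)
        UPDATE hosts SET puppet_proxy_id = NULL, puppet_ca_proxy_id = NULL;
        UPDATE hostgroups SET puppet_proxy_id = NULL, puppet_ca_proxy_id = NULL;
      SQL
    end
  end
end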

@sayan3296 (Contributor Author)

Noted. I will do my best about it. If any help is needed, I will open the issue as suggested and seek assistance.

I am good with the current changes in this PR for now. I have no further changes in mind, unless anyone suggests anything else.

I noticed a test failure that seems to be related to Katello (some rpm-related test):

[2023-11-23T18:54:37.313Z]
[2023-11-23T18:54:37.313Z] Failure:
[2023-11-23T18:54:37.313Z] Katello::RpmTest#test_with_search [/home/jenkins/workspace/foreman-pr-katello-test/test/models/rpm_test.rb:50]
[2023-11-23T18:54:37.313Z] Minitest::Assertion: --- expected
[2023-11-23T18:54:37.313Z] +++ actual
[2023-11-23T18:54:37.313Z] @@ -1 +1 @@
[2023-11-23T18:54:37.313Z] -[#<Katello::Rpm id: 980190962, 

Not sure if I need to do anything about it here.

@ekohl (Member) commented Nov 24, 2023

Without reviewing the changes themselves, only answering this question:

Not sure if I need to do anything about it here.

No, Katello is (sadly) rather unstable and not always related.

@ekohl (Member) left a comment

It happens because, as part of the cleanup code, we don't clear all the necessary migrations and tables. Due to that, when someone later tries to re-enable the plugin, it assumes those migrations are already done and tries to run a different migration whose dependency has not been satisfied yet.

The way this was intended to function was to reverse the migrations of foreman_puppet, which is done in foreman-maintain:
https://github.com/theforeman/foreman_maintain/blob/b88004abad463cf467cc7e5c4f38c1366193ea21/definitions/procedures/puppet/remove_puppet.rb#L15

That way you don't need to keep 2 repositories in sync, but instead rely on built-in functionality. That should drop the tables and remove the SchemaMigration entries. Otherwise, you need to add an entry every time a DB migration is added to foreman_puppet.
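
For reference, the linked step essentially boils down to the following (paraphrased from remove_puppet.rb; see the link for the exact guard conditions around it):

# Reverting every foreman_puppet migration via Rails' own `down` path is
# what should drop the plugin's tables and delete its schema_migrations rows.
execute!('foreman-rake db:migrate VERSION=0 SCOPE=foreman_puppet')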

Note: With this change applied, I tested the following:

* Enable the puppet plugin

* Disable + purge it completely

How did you disable and purge it? If you ran foreman_maintain, is there a log to tell us whether it ran the db:migrate task?

Comment on lines 57 to 60
where(version: %w[20090722141107 20090802062223 20110412103238 20110712070522
20120824142048 20120905095532 20121018152459 20130725081334
20140318153157 20140407161817 20140407162007 20140407162059
20140413123650 20140415032811 20141109131448 20150614171717
20160307120453 20160626085636 20180831115634 20181023112532
20181224174419]).delete_all
@ekohl (Member)

I tried to find these migrations, but I can't find them as files. The lines were added in f0b19a2. The actual files were dropped in 8c3d16e & a18a518.

Dropping these doesn't hurt, but it also isn't needed, since they wouldn't be recreated.

-                20181224174419]).delete_all
+                20181224174419 20090905150132 20101121140000 20201125113903
+                20200803113803 20200722171017 20200803113531 20200803113903
+                20211111125003]).delete_all
@ekohl (Member)

Similarly, I tried to find these files. They aren't in Foreman:

$ ls db/migrate/20181224174419_* db/migrate/20090905150132_* db/migrate/20101121140000_* db/migrate/20201125113903_* db/migrate/20200803113803_* db/migrate/20200722171017_* db/migrate/20200803113531_* db/migrate/20200803113903_* db/migrate/20211111125003_* 
ls: cannot access 'db/migrate/20181224174419_*': No such file or directory
ls: cannot access 'db/migrate/20090905150132_*': No such file or directory
ls: cannot access 'db/migrate/20101121140000_*': No such file or directory
ls: cannot access 'db/migrate/20201125113903_*': No such file or directory
ls: cannot access 'db/migrate/20200803113803_*': No such file or directory
ls: cannot access 'db/migrate/20200722171017_*': No such file or directory
ls: cannot access 'db/migrate/20200803113531_*': No such file or directory
ls: cannot access 'db/migrate/20200803113903_*': No such file or directory
ls: cannot access 'db/migrate/20211111125003_*': No such file or directory

But some are in foreman_puppet:

$ ls -1 db/migrate/20181224174419_* db/migrate/20090905150132_* db/migrate/20101121140000_* db/migrate/20201125113903_* db/migrate/20200803113803_* db/migrate/20200722171017_* db/migrate/20200803113531_* db/migrate/20200803113903_* db/migrate/20211111125003_* 
ls: cannot access 'db/migrate/20181224174419_*': No such file or directory
ls: cannot access 'db/migrate/20090905150132_*': No such file or directory
db/migrate/20101121140000_add_environment_to_template_combinations.foreman_puppet.rb
db/migrate/20200722171017_create_host_puppet_facet.foreman_puppet.rb
db/migrate/20200803113531_create_hostgroup_puppet_facet.foreman_puppet.rb
db/migrate/20200803113803_migrate_environment_to_puppet_facet.foreman_puppet.rb
db/migrate/20200803113903_migrate_host_type_in_host_config_groups.foreman_puppet.rb
db/migrate/20201125113903_migrate_puppetclasses_to_facets.foreman_puppet.rb
db/migrate/20211111125003_drop_puppetclasses_direct_references.foreman_puppet.rb

That begs the question: where are 20090905150132 and 20181224174419 from?

@ekohl (Member)

Ah, thanks. A comment in this file about how to find these magic numbers would be nice for any future reader.

@sayan3296 (Contributor Author)

Thanks. I will keep this point documented as a best practice.

@sayan3296 (Contributor Author)

It happens because, as part of the cleanup code, we don't clear all the necessary migrations and tables. Due to that, when someone later tries to re-enable the plugin, it assumes those migrations are already done and tries to run a different migration whose dependency has not been satisfied yet.

The way this was intended to function was to reverse the migrations of foreman_puppet, which is done in foreman-maintain: https://github.com/theforeman/foreman_maintain/blob/b88004abad463cf467cc7e5c4f38c1366193ea21/definitions/procedures/puppet/remove_puppet.rb#L15

That way you don't need to keep 2 repositories in sync, but instead rely on built-in functionality. That should drop the tables and remove the SchemaMigration entries. Otherwise, you need to add an entry every time a DB migration is added to foreman_puppet.

Note: With this change applied, I tested the following:

* Enable the puppet plugin

* Disable + purge it completely

How did you disable and purge it? If you ran foreman_maintain, is there a log to tell us whether it ran the db:migrate task?

Hello,

I had used foreman-maintain only, but I think it should be possible to mimic the backend actions executed by foreman-maintain as well.

"satellite-maintain plugin purge-puppet --remove-all-data" does three things:

  • add_step(Procedures::Puppet::RemovePuppet) -> stops some services, then disables the puppet parts via the installer and removes the puppetserver/rubygem-foreman_puppet/rubygem-hammer_cli_foreman_puppet rpms
  • add_step(Procedures::Puppet::RemovePuppetData) if context.get(:remove_data) -> runs "foreman-rake purge:puppet"
  • add_step(Procedures::Service::Restart) -> restarts services

At the very end it regenerates the Apipie cache as well.

Foreman-maintain sequence from logs

I, [2023-11-24 16:30:23+0530 #155557]  INFO -- : Running foreman-maintain command with arguments [["plugin", "purge-puppet", "-f"]]
I, [2023-11-24 16:30:23+0530 #155557]  INFO -- : === Scenario 'Remove Puppet feature' started ===
I, [2023-11-24 16:30:23+0530 #155557]  INFO -- : --- Execution step 'Check for Puppet capsules from the database' [puppet-capsules] started ---
I, [2023-11-24 16:30:23+0530 #155557]  INFO -- : --- Execution step 'Check for Puppet capsules from the database' finished ---
I, [2023-11-24 16:30:23+0530 #155557]  INFO -- : --- Execution step 'Remove Puppet feature' [puppet-remove-puppet] started ---
I, [2023-11-24 16:30:38+0530 #155711]  INFO -- : Running foreman-maintain command with arguments [["packages", "-h"]]
I, [2023-11-24 16:30:42+0530 #155942]  INFO -- : Running foreman-maintain command with arguments [["packages", "is-locked"]]
I, [2023-11-24 16:30:43+0530 #156049]  INFO -- : Running foreman-maintain command with arguments [["packages", "unlock", "--assumeyes"]]
I, [2023-11-24 16:30:43+0530 #156049]  INFO -- : === Scenario 'unlocking of package versions' started ===
I, [2023-11-24 16:30:43+0530 #156049]  INFO -- : --- Execution step 'Unlock packages' [packages-unlock-versions] started ---
I, [2023-11-24 16:30:43+0530 #156049]  INFO -- : --- Execution step 'Unlock packages' finished ---
I, [2023-11-24 16:30:43+0530 #156049]  INFO -- : === Scenario 'unlocking of package versions' finished ===
I, [2023-11-24 16:34:16+0530 #157278]  INFO -- : Running foreman-maintain command with arguments [["packages", "lock", "--assumeyes"]]
I, [2023-11-24 16:34:16+0530 #157278]  INFO -- : === Scenario 'locking of package versions' started ===
I, [2023-11-24 16:34:16+0530 #157278]  INFO -- : --- Execution step 'Lock packages' [packages-lock-versions] started ---
I, [2023-11-24 16:34:16+0530 #157278]  INFO -- : --- Execution step 'Lock packages' finished ---
I, [2023-11-24 16:34:16+0530 #157278]  INFO -- : === Scenario 'locking of package versions' finished ===
I, [2023-11-24 16:34:19+0530 #155557]  INFO -- : --- Execution step 'Remove Puppet feature' finished ---
I, [2023-11-24 16:34:19+0530 #155557]  INFO -- : --- Execution step 'Remove Puppet data' [puppet-remove-puppet-data] started ---
I, [2023-11-24 16:34:37+0530 #155557]  INFO -- : --- Execution step 'Remove Puppet data' finished ---
I, [2023-11-24 16:34:37+0530 #155557]  INFO -- : --- Execution step 'Restart applicable services' [service-restart] started ---
I, [2023-11-24 16:35:49+0530 #155557]  INFO -- : --- Execution step 'Restart applicable services' finished ---
I, [2023-11-24 16:35:49+0530 #155557]  INFO -- : --- Execution step 'Regenerate Apipie cache' [foreman-apipie-cache] started ---
I, [2023-11-24 16:36:35+0530 #155557]  INFO -- : --- Execution step 'Regenerate Apipie cache' finished ---
I, [2023-11-24 16:36:35+0530 #155557]  INFO -- : === Scenario 'Remove Puppet feature' finished ===

@ekohl (Member) commented Nov 24, 2023

At the very end it regenerates the Apipie cache as well.

Unrelated, but that's no longer needed since apipie is now always generated dynamically. Can you file a bug for that too?

Foreman-maintain sequence from logs

I get the impression it didn't run the rake task, but the logging isn't very clear. Perhaps there are better logs in /var/log/foreman/db_migrate.log.

@sayan3296 (Contributor Author)

At the very end it regenerates the Apipie cache as well.

Unrelated, but that's no longer needed since apipie is now always generated dynamically. Can you file a bug for that too?

Foreman-maintain sequence from logs

I get the impression it didn't run the rake task, but the logging isn't very clear. Perhaps there are better logs in /var/log/foreman/db_migrate.log.

It did:

I, [2023-11-24 16:34:19+0530 #155557]  INFO -- : --- Execution step 'Remove Puppet feature' finished ---
I, [2023-11-24 16:34:19+0530 #155557]  INFO -- : --- Execution step 'Remove Puppet data' [puppet-remove-puppet-data] started ---
D, [2023-11-24 16:34:19+0530 #155557] DEBUG -- : Running command foreman-rake purge:puppet with stdin nil
D, [2023-11-24 16:34:37+0530 #155557] DEBUG -- : output of the command:
 
D, [2023-11-24 16:34:37+0530 #155557] DEBUG -- : Running command rm -r /etc/puppetlabs /opt/puppetlabs/server/data with stdin nil
D, [2023-11-24 16:34:37+0530 #155557] DEBUG -- : output of the command:
 

I just needed to show you a few more lines.

https://github.com/theforeman/foreman_maintain/tree/master/definitions/procedures/puppet

@sayan3296 (Contributor Author)


I meant, it did execute foreman-rake purge:puppet as expected.

db:migrate is only expected to be executed if I try to re-enable the puppet plugin after that. Do you need data from that execution?

@sayan3296 (Contributor Author)

This is what happens when I try to re-enable the plugin after disabling + purging it:

2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless: Run `bin/rails db:migrate` to update your database then try again.
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless: You have 21 pending migrations:
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20090722141107 CreateEnvironments
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20090802062223 CreatePuppetclasses
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20090905150132 CreateHostgroupsPuppetclasses
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20101121140000 AddEnvironmentToTemplateCombinations
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20110412103238 RemoveUnusedFieldsFromPuppetClasses
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20110712070522 CreateHostClass
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20120824142048 AddSomeIndexes
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20120905095532 CreateEnvironmentClasses
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20121018152459 CreateHostgroupClasses
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20140407161817 CreateConfigGroups
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20140407162007 CreateConfigGroupClasses
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20140407162059 CreateHostConfigGroups
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20180831115634 AddUniquenessToPuppetclassName
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20181023112532 AddEnvironmentPuppetclassId
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20181224174419 AddIndexToEnvironmentClassByLookupKeyAndPuppetclass
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20200722171017 CreateHostPuppetFacet
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20200803113531 CreateHostgroupPuppetFacet
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20200803113803 MigrateEnvironmentToPuppetFacet
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20200803113903 MigrateHostTypeInHostConfigGroups
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20201125113903 MigratePuppetclassesToFacets
2023-11-24 17:12:40 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless:   20211111125003 DropPuppetclassesDirectReferences
2023-11-24 17:12:40 [DEBUG ] [configure] Exec[foreman-rake-db:migrate](provider=posix): Executing '/usr/sbin/foreman-rake db:migrate'
2023-11-24 17:12:40 [DEBUG ] [configure] Executing with uid=foreman: '/usr/sbin/foreman-rake db:migrate'
..
..
2023-11-24 17:12:53 [INFO  ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns: executed successfully
2023-11-24 17:12:53 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]: The container Foreman::Rake[db:migrate] will propagate my refresh event
2023-11-24 17:12:53 [DEBUG ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]: Evaluated in 30.97 seconds

@sayan3296 (Contributor Author)

The apipie cache part has already been handled by this commit in foreman-maintain. Since I was using an older version, the code is still present for me.

@ekohl (Member) commented Nov 24, 2023

I meant, it did execute foreman-rake purge:puppet as expected.

What I tried to explain in #9918 (review) was that I would expect it to happen as part of Procedures::Puppet::RemovePuppet. So you said it ran:

add_step(Procedures::Puppet::RemovePuppet) -> stops some services, then disables the puppet parts via the installer [...]

You didn't provide logs for that step here, but that's the crucial step where it should happen. I'd expect it to do the following (based on https://github.com/theforeman/foreman_maintain/blob/b88004abad463cf467cc7e5c4f38c1366193ea21/definitions/procedures/puppet/remove_puppet.rb#L12-L19):

  1. Stop services
  2. Revert DB migrations: foreman-rake db:migrate VERSION=0 SCOPE=foreman_puppet
  3. Run installer to disable
  4. Remove package: yum remove rubygem-foreman_puppet

Your comment implies step 2 didn't happen, and that's exactly the failure you see: the DB migrations aren't reverted. It's guarded by an if condition with 2 clauses, and either one of those could have failed.
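
The guard is roughly of this shape (the clause names below are illustrative assumptions, not copied from the linked file; check the link for the real conditions):

# If either clause is false, the revert silently never runs and the
# schema_migrations rows survive the purge; that would match the failure
# described in this PR.
if plugin_installed?('foreman_puppet') && installer_answers_include?('foreman_puppet')
  execute!('foreman-rake db:migrate VERSION=0 SCOPE=foreman_puppet')
end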

The apipie cache part has already been handled by this commit in foreman-maintain. Since I was using an older version, the code is still present for me.

Great.

@sayan3296 (Contributor Author)

My bad, I misunderstood (as I was juggling a few things). I apologize for the confusion.

It did happen:

D, [2023-11-24 16:30:23+0530 #155557] DEBUG -- : Running command foreman-rake db:migrate VERSION=0 SCOPE=foreman_puppet with stdin nil
D, [2023-11-24 16:34:19+0530 #155557] DEBUG -- : Running command foreman-rake purge:puppet with stdin nil
D, [2023-11-24 16:35:49+0530 #155557] DEBUG -- : Running command FOREMAN_APIPIE_LANGS=en foreman-rake apipie:cache with stdin nil

The output of the foreman-rake db:migrate VERSION=0 SCOPE=foreman_puppet execution from the logs is captured in https://gist.github.com/sayan3296/2f04291d37cb284d5ab087e96c07039a

@ekohl (Member) commented Nov 24, 2023

Ok, now that's interesting. It does what it's supposed to do, but then doesn't clean up. I would have expected it to remove entries from schema_migrations and drop the correct columns.

I think my next step would be to get to the point where the plugin is enabled and take a database dump (in plain-text SQL), then run foreman-rake db:migrate VERSION=0 SCOPE=foreman_puppet and take another database dump. Compare the two to see what exactly happened, because it's not doing what Ondrej wanted it to do.
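
A sketch of that comparison, written as Ruby for consistency with the other snippets here (the database name, postgres user, and paths are assumptions for a typical install; the equivalent shell commands work just as well):

# Dump the database before and after reverting, then diff the two dumps.
system('sudo -u postgres pg_dump --no-owner foreman > /tmp/before.sql')
system('foreman-rake db:migrate VERSION=0 SCOPE=foreman_puppet')
system('sudo -u postgres pg_dump --no-owner foreman > /tmp/after.sql')
system('diff -u /tmp/before.sql /tmp/after.sql')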

@sayan3296 (Contributor Author)

Ok, now that's interesting. It does what it's supposed to do, but then doesn't clean up. I would have expected it to remove entries from schema_migrations and drop the correct columns.

I think my next step would be to get to the point where the plugin is enabled and take a database dump (in plain-text SQL), then run foreman-rake db:migrate VERSION=0 SCOPE=foreman_puppet and take another database dump. Compare the two to see what exactly happened, because it's not doing what Ondrej wanted it to do.

I did so on my setup and added the diff here.

If it's not super helpful, I can send you the dumps via email.

@ares (Member) commented Dec 4, 2023

If we have time, we can further debug why the down migration does not clean up schema_migrations. But I'd get this fix in at least, so that we unblock users until new migrations are added. If we fix the root cause, this patch won't break anything afterwards.

@ares (Member) commented Jan 17, 2024

@ekohl any reason to hold this any longer?
