Combine batches of successive roles for the same nodes
We can speed up the application of the (n+1)th role when both the nth and
(n+1)th roles are applied on the same node. This speeds up the deployment of
ceilometer by at least 1m20s (measured: 90s) and swift by ~20s.

E.g. in our 2-node deployment, ceilometer{server,central} are always applied
on the same node; given that they have different priorities, they are applied
one after the other.

This does not violate any ordering constraints, as the application of the
(n+1)th role is transparent to the nth role.
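Roughly, the change folds consecutive batches that target the same set of
nodes into a single batch, so chef-client runs once for the combined roles
instead of once per batch. A minimal standalone Ruby sketch of the idea
(hypothetical role and node names; not the committed implementation, which is
in the diff below):

    batches = [
      [["ceilometer-server"], ["node1"]],
      [["ceilometer-central"], ["node1"]],
      [["swift-storage"], ["node2"]]
    ]

    merged = []
    batches.each do |roles, nodes|
      if !merged.empty? && merged.last[1] == nodes
        # same nodes as the previous batch: fold the roles into it
        merged.last[0].concat(roles)
      else
        merged << [roles.dup, nodes]
      end
    end

    puts merged.inspect
    # => [[["ceilometer-server", "ceilometer-central"], ["node1"]],
    #     [["swift-storage"], ["node2"]]]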
Sumit Jamgade committed Oct 19, 2017
1 parent 264b1f1 commit 7d7e7e8
Showing 1 changed file with 32 additions and 0 deletions.
32 changes: 32 additions & 0 deletions crowbar_framework/app/models/service_object.rb
@@ -919,6 +919,36 @@ def self.proposal_to_role(proposal, bc_name)
    RoleObject.new role
  end

  # We can speed up the application of the (n+1)th role if both the nth and
  # (n+1)th roles are applied on the same node.
  #
  # E.g. in our 2-node deployment, ceilometer{server,central} are always
  # applied on the same node; given that they have different priorities
  # (coming from element_run_list_order), they are applied one after the
  # other.
  #
  # In other words: by merging batches together, this reduces the number of
  # times chef-client is run rather than speeding up any single run.
  #
  # A batch is [roles, nodes].
  def merge_batches(batches)
    merged_batches = []
    unless batches.empty?
      current_batch = batches[0]
      batches[1..-1].each do |next_batch|
        if next_batch[1] == current_batch[1] && !current_batch[0].nil?
          current_batch[0] << next_batch[0]
          next
        end
        merged_batches << current_batch
        current_batch = next_batch
      end
      merged_batches << current_batch
    end
    merged_batches
  end
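  # Illustration with hypothetical data: given batches
  #   [[["ceilometer-server"], ["node1"]],
  #    [["ceilometer-central"], ["node1"]],
  #    [["swift-storage"], ["node2"]]]
  # the first two entries share the same node list, so their roles are folded
  # into a single batch and chef-client runs only once on node1 for both roles.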

  #
  # After validation, this is where the role is applied to the system. The old
  # instance (if one exists) is compared with the new instance. Roles are
@@ -1171,6 +1201,8 @@ def apply_role(role, inst, in_queue, bootstrap = false)

batches << [roles, nodes_in_batch] unless nodes_in_batch.empty?
end

batches = merge_batches(batches)
Rails.logger.debug "batches: #{batches.inspect}"

# Cache attributes that are useful later on
