nb: Remove possibility of disabling logical datapath groups.
In large-scale scenarios, grouping logical flows into datapath groups
hugely reduces the size of the Southbound database, positively affecting
end-to-end performance.  In such scenarios there's no real reason to ever
disable datapath groups.

In lower-scale scenarios, any potential overhead due to logical datapath
groups is very likely negligible.

Aside from potential scalability concerns, the
NB.NB_Global.options:use_logical_dp_groups knob was kept until now to
ensure that in case of a bug in the logical datapath groups code a CMS
may turn it off and fall back to the mode in which logical flows are not
grouped together.  As far as I know, this has never happened until now.

Moreover, datapath groups have been enabled by default since v21.09.0 (4 stable
releases ago), via 90daa7c ("northd: Enable logical dp groups by
default.").

From a testing perspective, removing this knob will halve the CI matrix.
This is desirable, especially in the context of more tests being added,
e.g.:
https://patchwork.ozlabs.org/project/ovn/patch/[email protected]/
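As a rough count based on the m4 loops in the tests/ovn-macros.at hunk at the
end of this diff, each test wrapped in OVN_FOR_EACH_NORTHD used to expand into

    NORTHD_USE_DP_GROUPS {yes, no} x NORTHD_USE_PARALLELIZATION {yes, no}
        x OVN_MONITOR_ALL {yes, no} = 2 x 2 x 2 = 8 variants,

and with the NORTHD_USE_DP_GROUPS loop removed it expands into

    NORTHD_USE_PARALLELIZATION {yes, no} x OVN_MONITOR_ALL {yes, no}
        = 2 x 2 = 4 variants.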

This commit also adds OVN_FOR_EACH_NORTHD to the "ovn-northd -- lr
multiple gw ports NAT" test case.  The macro was previously left out there because
the duplicate logical flow would have been merged when running with
datapath groups enabled.  Instead, change the expected output to not
include the duplicate.

Acked-by: Mark Michelson <[email protected]>
Signed-off-by: Dumitru Ceara <[email protected]>
Signed-off-by: Mark Michelson <[email protected]>
dceara authored and putnopvut committed Jul 6, 2022
1 parent 8ab46b7 commit 9dea0c0
Showing 7 changed files with 47 additions and 177 deletions.
1 change: 1 addition & 0 deletions NEWS
@@ -13,6 +13,7 @@ Post v22.06.0
and ipsec_forceencaps=true (strongswan) to unconditionally enforce
NAT-T UDP encapsulation. Requires OVS support for IPsec custom tunnel
options (which will be available in OVS 2.18).
- Removed possibility of disabling logical datapath groups.

OVN v22.06.0 - 03 Jun 2022
--------------------------
6 changes: 6 additions & 0 deletions TODO.rst
@@ -170,3 +170,9 @@ OVN To-do List
* physical.c has a global simap -localvif_to_ofport which stores the
local OVS interfaces and the ofport numbers. Move this to the engine data
of the engine data node - ed_type_pflow_output.

* ovn-northd parallel logical flow processing

* Multi-threaded logical flow computation was optimized for the case
when datapath groups are disabled.  Datapath groups are always enabled
now so northd parallel processing should be revisited.
78 changes: 20 additions & 58 deletions northd/northd.c
@@ -4896,10 +4896,6 @@ ovn_lflow_equal(const struct ovn_lflow *a, const struct ovn_datapath *od,
&& nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
}

/* If this option is 'true' northd will combine logical flows that differ by
* logical datapath only by creating a datapath group. */
static bool use_logical_dp_groups = false;

enum {
STATE_NULL, /* parallelization is off */
STATE_INIT_HASH_SIZES, /* parallelization is on; hashes sizing needed */
@@ -4924,8 +4920,7 @@ ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
lflow->ctrl_meter = ctrl_meter;
lflow->dpg = NULL;
lflow->where = where;
if ((parallelization_state != STATE_NULL)
&& use_logical_dp_groups) {
if (parallelization_state != STATE_NULL) {
ovs_mutex_init(&lflow->odg_lock);
}
}
@@ -4935,7 +4930,7 @@ ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
struct ovn_datapath *od)
OVS_NO_THREAD_SAFETY_ANALYSIS
{
if (!use_logical_dp_groups || !lflow_ref) {
if (!lflow_ref) {
return false;
}

@@ -5014,13 +5009,11 @@ do_ovn_lflow_add(struct hmap *lflow_map, struct ovn_datapath *od,
struct ovn_lflow *old_lflow;
struct ovn_lflow *lflow;

if (use_logical_dp_groups) {
old_lflow = ovn_lflow_find(lflow_map, NULL, stage, priority, match,
actions, ctrl_meter, hash);
if (old_lflow) {
ovn_dp_group_add_with_reference(old_lflow, od);
return old_lflow;
}
old_lflow = ovn_lflow_find(lflow_map, NULL, stage, priority, match,
actions, ctrl_meter, hash);
if (old_lflow) {
ovn_dp_group_add_with_reference(old_lflow, od);
return old_lflow;
}

lflow = xmalloc(sizeof *lflow);
@@ -5076,8 +5069,7 @@ ovn_lflow_add_at_with_hash(struct hmap *lflow_map, struct ovn_datapath *od,
struct ovn_lflow *lflow;

ovs_assert(ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
if (use_logical_dp_groups
&& (parallelization_state == STATE_USE_PARALLELIZATION)) {
if (parallelization_state == STATE_USE_PARALLELIZATION) {
lflow = do_ovn_lflow_add_pd(lflow_map, od, hash, stage, priority,
match, actions, io_port, stage_hint, where,
ctrl_meter);
@@ -5163,8 +5155,7 @@ static void
ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow)
{
if (lflow) {
if ((parallelization_state != STATE_NULL)
&& use_logical_dp_groups) {
if (parallelization_state != STATE_NULL) {
ovs_mutex_destroy(&lflow->odg_lock);
}
if (lflows) {
@@ -13949,29 +13940,18 @@ build_lswitch_and_lrouter_flows(const struct hmap *datapaths,
char *svc_check_match = xasprintf("eth.dst == %s", svc_monitor_mac);

if (parallelization_state == STATE_USE_PARALLELIZATION) {
struct hmap *lflow_segs;
struct lswitch_flow_build_info *lsiv;
int index;

lsiv = xcalloc(sizeof(*lsiv), build_lflows_pool->size);
if (use_logical_dp_groups) {
lflow_segs = NULL;
} else {
lflow_segs = xcalloc(sizeof(*lflow_segs), build_lflows_pool->size);
}

/* Set up "work chunks" for each thread to work on. */

for (index = 0; index < build_lflows_pool->size; index++) {
if (use_logical_dp_groups) {
/* if dp_groups are in use we lock a shared lflows hash
* on a per-bucket level instead of merging hash frags */
lsiv[index].lflows = lflows;
} else {
fast_hmap_init(&lflow_segs[index], lflows->mask);
lsiv[index].lflows = &lflow_segs[index];
}

/* dp_groups are in use so we lock a shared lflows hash
* on a per-bucket level.
*/
lsiv[index].lflows = lflows;
lsiv[index].datapaths = datapaths;
lsiv[index].ports = ports;
lsiv[index].port_groups = port_groups;
@@ -13990,19 +13970,13 @@ build_lswitch_and_lrouter_flows(const struct hmap *datapaths,
}

/* Run thread pool. */
if (use_logical_dp_groups) {
run_pool_callback(build_lflows_pool, NULL, NULL,
noop_callback);
fix_flow_map_size(lflows, lsiv, build_lflows_pool->size);
} else {
run_pool_hash(build_lflows_pool, lflows, lflow_segs);
}
run_pool_callback(build_lflows_pool, NULL, NULL, noop_callback);
fix_flow_map_size(lflows, lsiv, build_lflows_pool->size);

for (index = 0; index < build_lflows_pool->size; index++) {
ds_destroy(&lsiv[index].match);
ds_destroy(&lsiv[index].actions);
}
free(lflow_segs);
free(lsiv);
} else {
struct ovn_datapath *od;
@@ -14148,21 +14122,15 @@ void run_update_worker_pool(int n_threads)
lflow_hash_lock_destroy();
parallelization_state = STATE_NULL;
} else if (parallelization_state != STATE_USE_PARALLELIZATION) {
if (use_logical_dp_groups) {
lflow_hash_lock_init();
parallelization_state = STATE_INIT_HASH_SIZES;
} else {
parallelization_state = STATE_USE_PARALLELIZATION;
}
lflow_hash_lock_init();
parallelization_state = STATE_INIT_HASH_SIZES;
}
}
}

static void worker_pool_init_for_ldp(void)
static void init_worker_pool(void)
{
/* If parallelization is enabled, make sure locks are initialized
* when ldp are used.
*/
/* If parallelization is enabled, make sure locks are initialized. */
if (parallelization_state != STATE_NULL) {
lflow_hash_lock_init();
parallelization_state = STATE_INIT_HASH_SIZES;
@@ -15432,13 +15400,6 @@ ovnnb_db_run(struct northd_input *input_data,
nbrec_nb_global_set_options(nb, &options);
}

bool old_use_ldp = use_logical_dp_groups;
use_logical_dp_groups = smap_get_bool(&nb->options,
"use_logical_dp_groups", true);
if (use_logical_dp_groups && !old_use_ldp) {
worker_pool_init_for_ldp();
}

use_ct_inv_match = smap_get_bool(&nb->options,
"use_ct_inv_match", true);

@@ -15741,6 +15702,7 @@ void northd_run(struct northd_input *input_data,
struct ovsdb_idl_txn *ovnsb_txn)
{
stopwatch_start(OVNNB_DB_RUN_STOPWATCH_NAME, time_msec());
init_worker_pool();
ovnnb_db_run(input_data, data, ovnnb_txn, ovnsb_txn,
input_data->sbrec_chassis_by_name,
input_data->sbrec_chassis_by_hostname);
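The remaining parallel path above locks the shared lflows hash "on a
per-bucket level" rather than building a private hash segment per thread and
merging the segments afterwards.  The following is a minimal, hypothetical
pthread sketch of that per-bucket locking pattern, with invented sizes and
hashing; it is not the real lflow_hash_lock code.

/* Toy per-bucket locking sketch; not the real OVN locking code. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define N_BUCKETS 256
#define N_THREADS 4
#define N_ITEMS   10000

struct bucket {
    pthread_mutex_t lock;   /* One lock per bucket, not one global lock. */
    unsigned long count;    /* Stand-in for the bucket's list of flows. */
};

static struct bucket buckets[N_BUCKETS];

static void
bucket_insert(unsigned int hash)
{
    struct bucket *b = &buckets[hash % N_BUCKETS];

    pthread_mutex_lock(&b->lock);   /* Only writers of this bucket contend. */
    b->count++;
    pthread_mutex_unlock(&b->lock);
}

static void *
worker(void *arg)
{
    unsigned int seed = (unsigned int) (uintptr_t) arg;

    for (unsigned int i = 0; i < N_ITEMS; i++) {
        /* Cheap deterministic stand-in for the flow hash. */
        bucket_insert(seed * 2654435761u + i * 40503u);
    }
    return NULL;
}

int
main(void)
{
    pthread_t threads[N_THREADS];
    unsigned long total = 0;

    for (int i = 0; i < N_BUCKETS; i++) {
        pthread_mutex_init(&buckets[i].lock, NULL);
    }
    for (int i = 0; i < N_THREADS; i++) {
        pthread_create(&threads[i], NULL, worker, (void *) (uintptr_t) i);
    }
    for (int i = 0; i < N_THREADS; i++) {
        pthread_join(threads[i], NULL);
    }
    for (int i = 0; i < N_BUCKETS; i++) {
        total += buckets[i].count;
    }
    printf("inserted %lu items\n", total);  /* N_THREADS * N_ITEMS */
    return 0;
}

This is also the trade-off flagged in the TODO.rst entry above: all threads
now contend, bucket by bucket, on one shared table, whereas the removed code
path gave each thread its own hash segment and merged the segments at the end.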
19 changes: 6 additions & 13 deletions ovn-nb.xml
@@ -196,21 +196,14 @@

<column name="options" key="use_logical_dp_groups">
<p>
If set to <code>true</code>, <code>ovn-northd</code> will combine
logical flows that differs only by logical datapath into a single
logical flow with logical datapath group attached.
Note: This option is deprecated; the only behavior is to always
combine logical flows by datapath groups.  Changing the value or
removing this option altogether will have no effect.
</p>
<p>
While this should significantly reduce number of logical flows stored
in Southbound database this could also increase processing complexity
on the <code>ovn-controller</code> side, e.g.,
<code>ovn-controller</code> will re-consider logical flow for all
logical datapaths in a group. If the option set to
<code>false</code>, there will be separate logical flow per logical
datapath and only this flow will be re-considered.
</p>
<p>
The default value is <code>false</code>.
<code>ovn-northd</code> combines logical flows that differ
only by logical datapath into a single logical flow with a
logical datapath group attached.
</p>
</column>
<column name="options" key="use_parallel_build">
21 changes: 3 additions & 18 deletions tests/ovn-macros.at
@@ -246,12 +246,6 @@ ovn_start () {
--ic-nb-db=unix:"$ovs_base"/ovn-ic-nb/ovn-ic-nb.sock \
--ic-sb-db=unix:"$ovs_base"/ovn-ic-sb/ovn-ic-sb.sock
fi

if test X$NORTHD_USE_DP_GROUPS = Xyes; then
ovn-nbctl set NB_Global . options:use_logical_dp_groups=true
else
ovn-nbctl set NB_Global . options:use_logical_dp_groups=false
fi
}

# Interconnection networks.
@@ -758,15 +752,6 @@ m4_define([OVN_POPULATE_ARP], [AT_CHECK(ovn_populate_arp__, [0], [ignore])])
# datapath groups.
m4_define([OVN_FOR_EACH_NORTHD],
[m4_foreach([NORTHD_TYPE], [ovn-northd],
[m4_foreach([NORTHD_USE_DP_GROUPS], [yes, no],
[m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no],
[m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1
])])])])])

# Some tests aren't prepared for dp groups to be enabled.
m4_define([OVN_FOR_EACH_NORTHD_WITHOUT_DP_GROUPS],
[m4_foreach([NORTHD_TYPE], [ovn-northd],
[m4_foreach([NORTHD_USE_DP_GROUPS], [no],
[m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no],
[m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1
])])])])])
[m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no],
[m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1
])])])])