
Skip can match when batched query execution is available #127471


Open
wants to merge 21 commits into base: main

Conversation

@original-brownbear (Member) commented Apr 28, 2025

Admittedly a little dirty, but intentionally kept close to the existing logic; we can clean up and optimize in follow-ups. In any case this is a considerable speedup in essentially all possible use cases (we always save a network round-trip, including serialization and everything).
There's no reason whatsoever to run can_match (except the coordinator rewrite part of it)
when batched query execution is used.
On a per-node level we can still run it to order the shards, but we should probably remove its
use from the query phase completely if there's no sort in the query.

relates #125788

No need to do the can_match phase for those nodes that support the batched
query phase. We can get the equivalent degree of pre-filtering by simply
running can_match for sort-based queries.
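The decision described above can be sketched as follows. This is a hedged, self-contained model, not the actual Elasticsearch code: `shouldRunCoordinatorCanMatch` and its parameters are hypothetical names standing in for logic that really lives in the coordinator's search action.

```java
// Sketch of the pre-filter decision described in the PR description.
// All names here are illustrative, not the real Elasticsearch API.
public class CanMatchSkipSketch {

    // A data node that supports batched query execution can evaluate
    // can_match locally, so the coordinator skips the dedicated can_match
    // round-trip entirely (only the coordinator rewrite part remains).
    public static boolean shouldRunCoordinatorCanMatch(boolean nodeSupportsBatchedQuery, boolean hasSort) {
        if (nodeSupportsBatchedQuery) {
            return false;
        }
        // Without batched execution, the can_match phase still pays off for
        // sort-based queries, where shards can be reordered or skipped.
        return hasSort;
    }

    public static void main(String[] args) {
        System.out.println(shouldRunCoordinatorCanMatch(true, true));   // false
        System.out.println(shouldRunCoordinatorCanMatch(false, true));  // true
        System.out.println(shouldRunCoordinatorCanMatch(false, false)); // false
    }
}
```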
@@ -562,15 +643,39 @@ static void registerNodeSearchAction(
final int searchPoolMax = threadPool.info(ThreadPool.Names.SEARCH).getMax();
transportService.registerRequestHandler(
NODE_SEARCH_ACTION_NAME,
EsExecutors.DIRECT_EXECUTOR_SERVICE,
threadPool.executor(ThreadPool.Names.SEARCH_COORDINATION),
original-brownbear (Member, Author) commented:
Now that we do can_match here as well we absolutely must fork this to search_coordination, at least for the time being.
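The reason for the fork in the diff above (switching the handler's executor from `EsExecutors.DIRECT_EXECUTOR_SERVICE` to the `SEARCH_COORDINATION` pool) can be illustrated with a toy example using plain `java.util.concurrent`; this is not the Elasticsearch transport API, just a model of the same idea.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy illustration: now that can_match runs inside the request handler, the
// work is submitted to a dedicated coordination pool rather than executed
// inline on the calling (network) thread, which a direct executor would do.
public class ForkSketch {

    // Returns the name of the thread that actually ran the work.
    public static String runForked() throws Exception {
        ExecutorService coordinationPool = Executors.newFixedThreadPool(2);
        try {
            // Fork: the (potentially expensive) can_match work runs on a
            // pool thread, keeping the caller's thread free.
            return coordinationPool.submit(() -> Thread.currentThread().getName()).get();
        } finally {
            coordinationPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // The work ran on a pool thread, not the caller's thread.
        System.out.println(!runForked().equals(Thread.currentThread().getName()));
    }
}
```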

shardSearchRequests[i] = r;
var canMatch = searchService.canMatch(r);
if (canMatch.canMatch()) {
r.setRunCanMatchInQueryPhase(false);
original-brownbear (Member, Author) commented:
A bit hacky for now, and we should clean this logic up further, but it's a nice speedup to save one round of creating a searcher and whatnot, and a win we should take right away IMO.
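The per-shard loop the snippet above belongs to can be modeled as below. This is a self-contained sketch with invented types (`ShardRequest`, a stub `canMatch`), not the real `SearchService`: each request is built once, cached for reuse, checked with can_match, and matching requests are flagged so the query phase does not repeat the check.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the node-level batched loop: cache each built
// request, run can_match once, and disable the repeat check for matches.
public class NodeCanMatchSketch {

    static class ShardRequest {
        final int shardId;
        boolean runCanMatchInQueryPhase = true; // analogous to setRunCanMatchInQueryPhase
        ShardRequest(int shardId) { this.shardId = shardId; }
    }

    // Stand-in for SearchService#canMatch: here, even shard ids "match".
    static boolean canMatch(ShardRequest r) { return r.shardId % 2 == 0; }

    public static List<ShardRequest> filter(int shardCount) {
        ShardRequest[] cache = new ShardRequest[shardCount]; // kept for the query phase
        List<ShardRequest> toQuery = new ArrayList<>();
        for (int i = 0; i < shardCount; i++) {
            ShardRequest r = new ShardRequest(i);
            cache[i] = r;                          // shardSearchRequests[i] = r
            if (canMatch(r)) {
                r.runCanMatchInQueryPhase = false; // already checked here
                toQuery.add(r);
            }
        }
        return toQuery;
    }

    public static void main(String[] args) {
        System.out.println(filter(4).size()); // shards 0 and 2 match -> 2
    }
}
```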

final var shardId = shardToQuery.shardId;
ShardSearchRequest r = state.shardSearchRequests == null ? null : state.shardSearchRequests[translatedIndex];
original-brownbear (Member, Author) commented:
A bit messy too for the time being, but it's a neat win to save building the request again in this step; we can refine this further to make it look nicer and maybe also dedup some of the Lucene-internal query object building down the line.
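The "save building the request again" win can be shown with a small cache model. The names here (`buildRequest`, `getOrBuild`) are hypothetical; the point is only that the expensive build happens at most once per shard and later steps reuse the cached instance.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of reusing an already-built shard request instead of
// reconstructing it in a later phase.
public class RequestCacheSketch {

    static int buildCount = 0; // counts how often the expensive build runs

    static String buildRequest(int shardId) {
        buildCount++;
        return "request-for-shard-" + shardId;
    }

    // Build only on a cache miss; otherwise return the cached request.
    public static String getOrBuild(Map<Integer, String> cache, int shardId) {
        return cache.computeIfAbsent(shardId, RequestCacheSketch::buildRequest);
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = new HashMap<>();
        getOrBuild(cache, 7); // builds
        getOrBuild(cache, 7); // reuses the cached request
        System.out.println(buildCount); // 1
    }
}
```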

@original-brownbear original-brownbear changed the title WIP Skip can match when batched query execution is available Skip can match when batched query execution is available May 1, 2025
@original-brownbear original-brownbear marked this pull request as ready for review May 1, 2025 11:24
@elasticsearchmachine (Collaborator) commented:

Pinging @elastic/es-search-foundations (Team:Search Foundations)

@elasticsearchmachine elasticsearchmachine added the Team:Search Foundations Meta label for the Search Foundations team in Elasticsearch label May 1, 2025
Labels: >non-issue, :Search Foundations/Search (Catch all for Search Foundations), Team:Search Foundations (Meta label for the Search Foundations team in Elasticsearch), v9.1.0