Merge branch 'master' into enhancement_nereids_kill-query-support-union
Yao-MR authored Dec 19, 2024
2 parents 5c19301 + a05582f commit 627fcfd
Showing 3,357 changed files with 132,305 additions and 24,489 deletions.
(The full diff is too large to display; only the first 3,000 changed files are loaded.)
7 changes: 1 addition & 6 deletions .asf.yaml
@@ -56,15 +56,13 @@ github:
- cloud_p0 (Doris Cloud Regression)
- FE UT (Doris FE UT)
- BE UT (Doris BE UT)
- Build Broker
- ShellCheck
- Build Broker
- Build Third Party Libraries (Linux)
- Build Third Party Libraries (macOS)
- Build Third Party Libraries (macOS-arm64)
- COMPILE (DORIS_COMPILE)
- Need_2_Approval
- Cloud UT (Doris Cloud UT)
- performance (Doris Performance)

required_pull_request_reviews:
dismiss_stale_reviews: true
@@ -80,7 +78,6 @@ github:
- Clang Formatter
- CheckStyle
- Build Broker
- ShellCheck
- Build Third Party Libraries (Linux)
- Build Third Party Libraries (macOS)
- FE UT (Doris FE UT)
@@ -103,7 +100,6 @@ github:
- Clang Formatter
- CheckStyle
- Build Broker
- ShellCheck
- Build Third Party Libraries (Linux)
- Build Third Party Libraries (macOS)
- COMPILE (DORIS_COMPILE)
@@ -128,7 +124,6 @@ github:
- FE UT (Doris FE UT)
- BE UT (Doris BE UT)
- Build Broker
- ShellCheck
- Build Third Party Libraries (Linux)
- Build Third Party Libraries (macOS)
- COMPILE (DORIS_COMPILE)
11 changes: 6 additions & 5 deletions .github/workflows/auto-cherry-pick.yml
@@ -21,6 +21,7 @@ on:
pull_request_target:
types:
- closed
- labeled
branches:
- master
permissions:
@@ -30,7 +31,7 @@ permissions:
jobs:
auto_cherry_pick:
runs-on: ubuntu-latest
if: ${{ (contains(github.event.pull_request.labels.*.name, 'dev/3.0.x') || contains(github.event.pull_request.labels.*.name, 'dev/2.1.x')) && github.event.pull_request.merged == true }}
if: ${{(contains(github.event.pull_request.labels.*.name, 'dev/3.0.x') || contains(github.event.pull_request.labels.*.name, 'dev/2.1.x') ||github.event.label.name == 'dev/3.0.x' || github.event.label.name == 'dev/2.1.x') && github.event.pull_request.merged == true }}
steps:
- name: Checkout repository
uses: actions/checkout@v3
@@ -54,18 +55,18 @@ jobs:
echo "SHA matches: $calculated_sha"
fi
- name: Auto cherry-pick to branch-3.0
if: ${{ contains(github.event.pull_request.labels.*.name, 'dev/3.0.x') }}
if: ${{ ((github.event.action == 'labeled' && github.event.label.name == 'dev/3.0.x'))|| ((github.event_name == 'pull_request_target' && github.event.action == 'closed') && contains(github.event.pull_request.labels.*.name, 'dev/3.0.x')) }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
REPO_NAME: ${{ github.repository }}
CONFLICT_LABEL: cherry-pick-conflict-in-3.0
CONFLICT_LABEL: dev/3.0.x-conflict
run: |
python tools/auto-pick-script.py ${{ github.event.pull_request.number }} branch-3.0
- name: Auto cherry-pick to branch-2.1
if: ${{ contains(github.event.pull_request.labels.*.name, 'dev/2.1.x') }}
if: ${{ ((github.event.action == 'labeled' && github.event.label.name == 'dev/2.1.x'))|| ((github.event_name == 'pull_request_target' && github.event.action == 'closed') && contains(github.event.pull_request.labels.*.name, 'dev/2.1.x')) }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
REPO_NAME: ${{ github.repository }}
CONFLICT_LABEL: cherry-pick-conflict-in-2.1.x
CONFLICT_LABEL: dev/2.1.x-conflict
run: |
python tools/auto-pick-script.py ${{ github.event.pull_request.number }} branch-2.1
9 changes: 2 additions & 7 deletions README.md
@@ -177,15 +177,10 @@ In terms of optimizers, Doris uses a combination of CBO and RBO. RBO supports co

**Apache Doris has graduated from Apache incubator successfully and become a Top-Level Project in June 2022**.

Currently, the Apache Doris community has gathered more than 400 contributors from nearly 200 companies in different industries, and the number of active contributors is close to 100 per month.


[![Monthly Active Contributors](https://contributor-overtime-api.apiseven.com/contributors-svg?chart=contributorMonthlyActivity&repo=apache/doris)](https://www.apiseven.com/en/contributor-graph?chart=contributorMonthlyActivity&repo=apache/doris)

[![Contributor over time](https://contributor-overtime-api.apiseven.com/contributors-svg?chart=contributorOverTime&repo=apache/doris)](https://www.apiseven.com/en/contributor-graph?chart=contributorOverTime&repo=apache/doris)

We deeply appreciate 🔗[community contributors](https://github.com/apache/doris/graphs/contributors) for their contribution to Apache Doris.

[![contrib graph](https://contrib.rocks/image?repo=apache/doris)](https://github.com/apache/doris/graphs/contributors)

## 👨‍👩‍👧‍👦 Users

Apache Doris now has a wide user base in China and around the world, and as of today, **Apache Doris is used in production environments in thousands of companies worldwide.** More than 80% of the top 50 Internet companies in China in terms of market capitalization or valuation have been using Apache Doris for a long time, including Baidu, Meituan, Xiaomi, Jingdong, Bytedance, Tencent, NetEase, Kwai, Sina, 360, Mihoyo, and Ke Holdings. It is also widely used in some traditional industries such as finance, energy, manufacturing, and telecommunications.
Binary file added aazcp.tar.gz
61 changes: 38 additions & 23 deletions be/CMakeLists.txt
@@ -130,6 +130,8 @@ message(STATUS "THIRDPARTY_DIR is ${THIRDPARTY_DIR}")

option(MAKE_TEST "ON for make unit test or OFF for not" OFF)
message(STATUS "make test: ${MAKE_TEST}")
option(BUILD_BENCHMARK "ON for make google benchmark or OFF for not" OFF)
message(STATUS "make benchmark: ${BUILD_BENCHMARK}")

option(WITH_MYSQL "Support access MySQL" ON)

@@ -568,7 +570,7 @@ if (OS_MACOSX)
)
endif()

if (MAKE_TEST)
if (BUILD_BENCHMARK)
set(COMMON_THIRDPARTY
${COMMON_THIRDPARTY}
benchmark
@@ -708,6 +710,11 @@ if (MAKE_TEST)
endif()
endif ()

# Use this to avoid some runtime trackers. Reuse the BE_TEST symbol; no separate symbol is needed.
if (BUILD_BENCHMARK)
add_definitions(-DBE_TEST)
endif()

get_directory_property(COMPILER_FLAGS COMPILE_OPTIONS)
get_directory_property(COMPILER_DEFINES COMPILE_DEFINITIONS)
message(STATUS "Compiler: ${CMAKE_CXX_COMPILER_ID}-${CMAKE_CXX_COMPILER_VERSION}")
@@ -754,7 +761,7 @@ add_subdirectory(${SRC_DIR}/http)
add_subdirectory(${SRC_DIR}/io)
add_subdirectory(${SRC_DIR}/olap)
add_subdirectory(${SRC_DIR}/runtime)
add_subdirectory(${SRC_DIR}/service)
add_subdirectory(${SRC_DIR}/service) # this includes doris_be
add_subdirectory(${SRC_DIR}/udf)
add_subdirectory(${SRC_DIR}/cloud)

@@ -772,36 +779,44 @@ add_subdirectory(${SRC_DIR}/util)
add_subdirectory(${SRC_DIR}/vec)
add_subdirectory(${SRC_DIR}/pipeline)

# this includes doris_be_test
if (MAKE_TEST)
add_subdirectory(${TEST_DIR})
endif ()

add_subdirectory(${COMMON_SRC_DIR}/cpp ${BUILD_DIR}/src/common_cpp)

# Install be
install(DIRECTORY DESTINATION ${OUTPUT_DIR})
install(DIRECTORY DESTINATION ${OUTPUT_DIR}/bin)
install(DIRECTORY DESTINATION ${OUTPUT_DIR}/conf)

install(FILES
${BASE_DIR}/../bin/start_be.sh
${BASE_DIR}/../bin/stop_be.sh
${BASE_DIR}/../tools/jeprof
PERMISSIONS OWNER_READ OWNER_WRITE OWNER_EXECUTE
GROUP_READ GROUP_WRITE GROUP_EXECUTE
WORLD_READ WORLD_EXECUTE
DESTINATION ${OUTPUT_DIR}/bin)

install(FILES
${BASE_DIR}/../conf/be.conf
${BASE_DIR}/../conf/odbcinst.ini
${BASE_DIR}/../conf/asan_suppr.conf
${BASE_DIR}/../conf/lsan_suppr.conf
DESTINATION ${OUTPUT_DIR}/conf)
if(NOT BUILD_BENCHMARK)
# Install be
install(DIRECTORY DESTINATION ${OUTPUT_DIR})
install(DIRECTORY DESTINATION ${OUTPUT_DIR}/bin)
install(DIRECTORY DESTINATION ${OUTPUT_DIR}/conf)

install(FILES
${BASE_DIR}/../bin/start_be.sh
${BASE_DIR}/../bin/stop_be.sh
${BASE_DIR}/../tools/jeprof
PERMISSIONS OWNER_READ OWNER_WRITE OWNER_EXECUTE
GROUP_READ GROUP_WRITE GROUP_EXECUTE
WORLD_READ WORLD_EXECUTE
DESTINATION ${OUTPUT_DIR}/bin)

install(FILES
${BASE_DIR}/../conf/be.conf
${BASE_DIR}/../conf/odbcinst.ini
${BASE_DIR}/../conf/asan_suppr.conf
${BASE_DIR}/../conf/lsan_suppr.conf
DESTINATION ${OUTPUT_DIR}/conf)
endif()

get_property(dirs DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR} PROPERTY INCLUDE_DIRECTORIES)
foreach(dir ${dirs})
message(STATUS "dir='${dir}'")
endforeach()


if (BUILD_BENCHMARK)
add_executable(benchmark_test ${BASE_DIR}/benchmark/benchmark_main.cpp)
target_link_libraries(benchmark_test ${DORIS_LINK_LIBS})
message(STATUS "Add benchmark to build")
install(TARGETS benchmark_test DESTINATION ${OUTPUT_DIR}/lib)
endif()
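Note: as the hunks above show, BUILD_BENCHMARK both adds the benchmark_test target and defines BE_TEST so benchmark builds skip some runtime trackers, meaning BE-side code can branch on that symbol at compile time. A tiny, hypothetical illustration of such a guard (not code from this commit):

#include <iostream>

int main() {
#ifdef BE_TEST
    // Benchmark/unit-test builds (BE_TEST defined): heavyweight trackers skipped.
    std::cout << "BE_TEST build\n";
#else
    std::cout << "production build\n";
#endif
    return 0;
}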
52 changes: 52 additions & 0 deletions be/benchmark/benchmark_main.cpp
@@ -0,0 +1,52 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

#include <benchmark/benchmark.h>

#include <string>

#include "vec/columns/column_string.h"
#include "vec/core/block.h"
#include "vec/data_types/data_type.h"
#include "vec/data_types/data_type_string.h"

namespace doris::vectorized { // change if needed

static void Example1(benchmark::State& state) {
// init; don't time it.
state.PauseTiming();
Block block;
DataTypePtr str_type = std::make_shared<DataTypeString>();
std::vector<std::string> vals {100, "content"}; // 100 copies of "content" via the (count, value) constructor
state.ResumeTiming();

// do test
for (auto _ : state) {
auto str_col = ColumnString::create();
for (auto& v : vals) {
str_col->insert_data(v.data(), v.size());
}
block.insert({std::move(str_col), str_type, "col"});
benchmark::DoNotOptimize(block); // keep the watched target from being optimized away
}
}
// BENCHMARK can register many functions to compare them together.
BENCHMARK(Example1);

} // namespace doris::vectorized

BENCHMARK_MAIN();
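For comparing several input sizes in one run, Google Benchmark can register the same function with different arguments via Arg(); state.range(0) then selects the size for each run. A standalone, hypothetical sketch (the string-building workload below is illustrative and not part of this commit):

#include <benchmark/benchmark.h>

#include <cstdint>
#include <string>

// Hypothetical workload: build a string of state.range(0) characters.
static void BM_BuildString(benchmark::State& state) {
    const int64_t n = state.range(0);
    for (auto _ : state) {
        std::string s;
        s.reserve(n);
        for (int64_t i = 0; i < n; ++i) {
            s.push_back('x');
        }
        benchmark::DoNotOptimize(s); // keep the result from being optimized away
    }
    state.SetItemsProcessed(state.iterations() * n);
}
// Registering several sizes makes them easy to compare in a single report.
BENCHMARK(BM_BuildString)->Arg(64)->Arg(1024)->Arg(65536);

BENCHMARK_MAIN(); // provides main(), as in benchmark_main.cpp above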
6 changes: 3 additions & 3 deletions be/src/agent/cgroup_cpu_ctl.cpp
@@ -158,11 +158,11 @@ uint64_t CgroupCpuCtl::cpu_soft_limit_default_value() {
return _is_enable_cgroup_v2_in_env ? 100 : 1024;
}

std::unique_ptr<CgroupCpuCtl> CgroupCpuCtl::create_cgroup_cpu_ctl(uint64_t wg_id) {
std::shared_ptr<CgroupCpuCtl> CgroupCpuCtl::create_cgroup_cpu_ctl(uint64_t wg_id) {
if (_is_enable_cgroup_v2_in_env) {
return std::make_unique<CgroupV2CpuCtl>(wg_id);
return std::make_shared<CgroupV2CpuCtl>(wg_id);
} else if (_is_enable_cgroup_v1_in_env) {
return std::make_unique<CgroupV1CpuCtl>(wg_id);
return std::make_shared<CgroupV1CpuCtl>(wg_id);
}
return nullptr;
}
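The switch above from std::unique_ptr to std::shared_ptr lets more than one component hold the same cgroup controller and keep it alive. A minimal, hypothetical sketch of that ownership pattern (type and function names are illustrative, not the Doris API):

#include <cstdint>
#include <memory>

struct CpuController {
    explicit CpuController(uint64_t id) : wg_id(id) {}
    uint64_t wg_id;
};

// The factory hands out shared ownership so several holders can retain the object.
std::shared_ptr<CpuController> create_controller(uint64_t wg_id) {
    return std::make_shared<CpuController>(wg_id);
}

int main() {
    auto ctl = create_controller(1);
    std::shared_ptr<CpuController> held_by_group = ctl;      // e.g. a workload group
    std::shared_ptr<CpuController> held_by_scheduler = ctl;  // e.g. a task scheduler
    // The controller is destroyed only after the last shared_ptr is released.
    return 0;
}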
2 changes: 1 addition & 1 deletion be/src/agent/cgroup_cpu_ctl.h
@@ -52,7 +52,7 @@ class CgroupCpuCtl {

static Status delete_unused_cgroup_path(std::set<uint64_t>& used_wg_ids);

static std::unique_ptr<CgroupCpuCtl> create_cgroup_cpu_ctl(uint64_t wg_id);
static std::shared_ptr<CgroupCpuCtl> create_cgroup_cpu_ctl(uint64_t wg_id);

static bool is_a_valid_cgroup_path(std::string cg_path);

4 changes: 3 additions & 1 deletion be/src/agent/task_worker_pool.cpp
@@ -1630,11 +1630,13 @@ void drop_tablet_callback(StorageEngine& engine, const TAgentTaskRequest& req) {
dropped_tablet->tablet_uid());
LOG_INFO("successfully drop tablet")
.tag("signature", req.signature)
.tag("tablet_id", drop_tablet_req.tablet_id);
.tag("tablet_id", drop_tablet_req.tablet_id)
.tag("replica_id", drop_tablet_req.replica_id);
} else {
LOG_WARNING("failed to drop tablet")
.tag("signature", req.signature)
.tag("tablet_id", drop_tablet_req.tablet_id)
.tag("replica_id", drop_tablet_req.replica_id)
.error(status);
}

6 changes: 2 additions & 4 deletions be/src/agent/topic_subscriber.cpp
@@ -40,14 +40,12 @@ void TopicSubscriber::handle_topic_info(const TPublishTopicRequest& topic_reques
// eg, update workload info may delay other listener, then we need add a thread here
// to handle_topic_info asynchronous
std::shared_lock lock(_listener_mtx);
LOG(INFO) << "[topic_publish]begin handle topic info";
for (auto& listener_pair : _registered_listeners) {
if (topic_request.topic_map.find(listener_pair.first) != topic_request.topic_map.end()) {
LOG(INFO) << "[topic_publish]begin handle topic " << listener_pair.first
<< ", size=" << topic_request.topic_map.at(listener_pair.first).size();
listener_pair.second->handle_topic_info(
topic_request.topic_map.at(listener_pair.first));
LOG(INFO) << "[topic_publish]finish handle topic " << listener_pair.first;
LOG(INFO) << "[topic_publish]finish handle topic " << listener_pair.first
<< ", size=" << topic_request.topic_map.at(listener_pair.first).size();
}
}
}
3 changes: 2 additions & 1 deletion be/src/agent/workload_group_listener.cpp
@@ -17,6 +17,7 @@

#include "agent/workload_group_listener.h"

#include "runtime/exec_env.h"
#include "runtime/workload_group/workload_group.h"
#include "runtime/workload_group/workload_group_manager.h"
#include "util/mem_info.h"
@@ -59,7 +60,7 @@ void WorkloadGroupListener::handle_topic_info(const std::vector<TopicInfo>& topi
workload_group_info.enable_cpu_hard_limit);

// 4 create and update task scheduler
wg->upsert_task_scheduler(&workload_group_info, _exec_env);
wg->upsert_task_scheduler(&workload_group_info);

// 5 upsert io throttle
wg->upsert_scan_io_throttle(&workload_group_info);
3 changes: 2 additions & 1 deletion be/src/agent/workload_group_listener.h
@@ -20,10 +20,11 @@
#include <glog/logging.h>

#include "agent/topic_listener.h"
#include "runtime/exec_env.h"

namespace doris {

class ExecEnv;

class WorkloadGroupListener : public TopicListener {
public:
~WorkloadGroupListener() {}
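Replacing the runtime/exec_env.h include with a forward declaration keeps ExecEnv out of the header's include graph; only code that dereferences the pointer needs the full definition. A generic sketch of the pattern (file and class names here are illustrative, not the actual Doris layout):

// listener.h -- a pointer to ExecEnv is enough, so a declaration suffices.
class ExecEnv;

class Listener {
public:
    explicit Listener(ExecEnv* env) : _env(env) {}

private:
    ExecEnv* _env; // never dereferenced in this header
};

// listener.cpp -- include the full definition only where members are used:
//   #include "listener.h"
//   #include "runtime/exec_env.h"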
5 changes: 5 additions & 0 deletions be/src/cloud/cloud_base_compaction.cpp
@@ -125,6 +125,7 @@ Status CloudBaseCompaction::prepare_compact() {
_input_row_num += rs->num_rows();
_input_segments += rs->num_segments();
_input_rowsets_data_size += rs->data_disk_size();
_input_rowsets_index_size += rs->index_disk_size();
_input_rowsets_total_size += rs->total_disk_size();
}
LOG_INFO("start CloudBaseCompaction, tablet_id={}, range=[{}-{}]", _tablet->tablet_id(),
@@ -320,6 +321,10 @@ Status CloudBaseCompaction::modify_rowsets() {
compaction_job->add_output_versions(_output_rowset->end_version());
compaction_job->add_txn_id(_output_rowset->txn_id());
compaction_job->add_output_rowset_ids(_output_rowset->rowset_id().to_string());
compaction_job->set_index_size_input_rowsets(_input_rowsets_index_size);
compaction_job->set_segment_size_input_rowsets(_input_rowsets_data_size);
compaction_job->set_index_size_output_rowsets(_output_rowset->index_disk_size());
compaction_job->set_segment_size_output_rowsets(_output_rowset->data_disk_size());

DeleteBitmapPtr output_rowset_delete_bitmap = nullptr;
if (_tablet->keys_type() == KeysType::UNIQUE_KEYS &&
(The remaining changed files in this commit are not shown here.)