Add CHYT #224

Open. Wants to merge 2 commits into base: main.
14 changes: 14 additions & 0 deletions chyt/README.md
@@ -0,0 +1,14 @@
#### CHYT powered by ClickHouse
rschu1ze marked this conversation as resolved.

1. Install a YTsaurus cluster; see the [YTsaurus getting started page](https://ytsaurus.tech/docs/en/overview/try-yt).
Member:

As far as I understand, the code is open-source, right? Ideally, benchmark.sh does as much setup as possible automatically, i.e. with no user intervention. For examples of how to do that, please see clickhouse/benchmark.sh, postgresql/benchmark.sh and duckdb/benchmark.sh.

Author:

Yes, YTsaurus is an open-source system, but CHYT is a small part of it. benchmark.sh uses a pre-installed cluster with the default clique to run the benchmark.
A YTsaurus cluster can be installed, for example, using the k8s operator; all possible variants are described in the documentation.

Member:

As I mentioned, we need to reduce the variability here... As someone who wants to verify the benchmark results, I would like to run benchmark.sh and have it install everything by itself. The only thing I should be able to choose is the hardware the system runs on.

Author:

We can create a demo cluster for everyone who wants to try YTsaurus. Would that be acceptable for verifying the results?

Member:

One installation option is Docker. That seems to be the alternative with the least complexity and the best reproducibility (compared to k8s and the demo cluster).

My preference would be for benchmark.sh to set up the Docker container, do the other preparations, and then run the measurements.

Member:

I guess I still don't understand what is really being measured here.

If the result JSON files in this PR refer to measurements for a locally set-up cluster with different "clique" sizes: in that case, please add deterministic setup instructions (ideally using Docker) to benchmark.sh. Also, the term "serverless" is confusing, as it is used in ClickBench for (commercial) database-as-a-service offerings; please remove this term and specify the exact machine specs (CPU, RAM) instead. And instead of five measurement sets that were seemingly created using five different "clique" sizes, it would be good to keep it simpler, e.g. two sets of measurements.

If the result JSON files in this PR refer to measurements for a commercial DBaaS offering with different t-shirt sizes, then please describe the steps needed to set up such a cluster in README.md.

Thanks.

Author:

Deploying a cluster with 360 vCPUs and 720 GB of RAM using a single VM can be quite challenging. In our case, we use a Kubernetes cluster with nodes of the type c6a.8xlarge and network SSDs that perform similarly to gp2 volumes.
We also aim to demonstrate various cluster sizes, not just the smallest one. If necessary, we can remove the "serverless" tag and instead specify the number of CHYT instances in the configuration.
Given the large size of our cluster, Docker deployment was not utilized for benchmarking, as it may yield different results. The easiest way to reproduce our results is by booking a demo cluster through our website.
Additionally, I can include a step-by-step guide in the README.md file to assist with the setup.

Member:

Okay, let's add a step-by-step guide to the README. Afterwards I'll try my best to reproduce the results, and then I will merge.

2. Configure CPU, RAM and instance count of your benchmark clique using [CLI](https://ytsaurus.tech/docs/en/user-guide/data-processing/chyt/cliques/start) or YTsaurus UI (Menu -> Cliques).
3. After installation, export the necessary parameters:
```console
export YT_USE_HOSTS=0
export CHYT_ALIAS='*ch_public'
```
In this case we use the default clique ``*ch_public``, but you can create your own. You also need to export the address of your proxy:
```console
export YT_PROXY=<your proxy address>
```
4. Now you can run the benchmark by starting the ``run.sh`` script. It will create the ``//home/hits`` table, fill it with data from the ClickBench dataset repository, sort it, and run the benchmark queries.
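``run.sh`` itself is not shown in this diff. As a rough sketch of what such a driver usually looks like in ClickBench (several timed runs per query, reading ``queries.sql`` line by line), assuming the same ``yt clickhouse execute`` interface that ``benchmark.sh`` uses; the ``TRIES`` count and the one-query-per-line protocol are assumptions, not taken from this PR:

```shell
#!/bin/bash
# Hypothetical sketch of run.sh; TRIES and the line-by-line query protocol
# are assumptions based on other ClickBench drivers, not on this PR.
TRIES=3
while read -r query; do
    [ -z "$query" ] && continue        # skip blank lines
    for _ in $(seq 1 $TRIES); do
        # Each query is timed individually against the clique.
        time yt clickhouse execute "$query" --alias "$CHYT_ALIAS" --proxy "$YT_PROXY"
    done
done < queries.sql
```

The alias and proxy variables are the ones exported in step 3 above.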
29 changes: 29 additions & 0 deletions chyt/benchmark.sh
@@ -0,0 +1,29 @@
#!/bin/bash

export YT_USE_HOSTS=0
export CHYT_ALIAS='*ch_public'

echo "----------------"
# Create table
echo "Creating table"
time yt clickhouse execute "$(cat create.sql)" --alias "$CHYT_ALIAS" --proxy "$YT_PROXY"
echo "----------------"

echo "----------------"
# Fill table
echo "Filling table"
time yt clickhouse execute "$(cat fill_data.sql)" --alias "$CHYT_ALIAS" --proxy "$YT_PROXY"
echo "----------------"

echo "----------------"
# Sort table
echo "Sorting table"
time yt sort --src //home/hits --dst //home/hits --sort-by "CounterID" --sort-by "EventDate" --sort-by "UserID" --sort-by "EventTime" --sort-by "WatchID" --proxy "$YT_PROXY"
echo "----------------"

echo "----------------"
# Run benchmark
echo "Starting benchmark"
./run.sh
echo "----------------"

109 changes: 109 additions & 0 deletions chyt/create.sql
@@ -0,0 +1,109 @@
CREATE TABLE IF NOT EXISTS "//home/hits"
(
WatchID BIGINT NOT NULL,
JavaEnable SMALLINT NOT NULL,
Title TEXT NOT NULL,
GoodEvent SMALLINT NOT NULL,
EventTime TIMESTAMP NOT NULL,
EventDate Date NOT NULL,
CounterID INTEGER NOT NULL,
ClientIP INTEGER NOT NULL,
RegionID INTEGER NOT NULL,
UserID BIGINT NOT NULL,
CounterClass SMALLINT NOT NULL,
OS SMALLINT NOT NULL,
UserAgent SMALLINT NOT NULL,
URL TEXT NOT NULL,
Referer TEXT NOT NULL,
IsRefresh SMALLINT NOT NULL,
RefererCategoryID SMALLINT NOT NULL,
RefererRegionID INTEGER NOT NULL,
URLCategoryID SMALLINT NOT NULL,
URLRegionID INTEGER NOT NULL,
ResolutionWidth SMALLINT NOT NULL,
ResolutionHeight SMALLINT NOT NULL,
ResolutionDepth SMALLINT NOT NULL,
FlashMajor SMALLINT NOT NULL,
FlashMinor SMALLINT NOT NULL,
FlashMinor2 TEXT NOT NULL,
NetMajor SMALLINT NOT NULL,
NetMinor SMALLINT NOT NULL,
UserAgentMajor SMALLINT NOT NULL,
UserAgentMinor VARCHAR(255) NOT NULL,
CookieEnable SMALLINT NOT NULL,
JavascriptEnable SMALLINT NOT NULL,
IsMobile SMALLINT NOT NULL,
MobilePhone SMALLINT NOT NULL,
MobilePhoneModel TEXT NOT NULL,
Params TEXT NOT NULL,
IPNetworkID INTEGER NOT NULL,
TraficSourceID SMALLINT NOT NULL,
SearchEngineID SMALLINT NOT NULL,
SearchPhrase TEXT NOT NULL,
AdvEngineID SMALLINT NOT NULL,
IsArtifical SMALLINT NOT NULL,
WindowClientWidth SMALLINT NOT NULL,
WindowClientHeight SMALLINT NOT NULL,
ClientTimeZone SMALLINT NOT NULL,
ClientEventTime TIMESTAMP NOT NULL,
SilverlightVersion1 SMALLINT NOT NULL,
SilverlightVersion2 SMALLINT NOT NULL,
SilverlightVersion3 INTEGER NOT NULL,
SilverlightVersion4 SMALLINT NOT NULL,
PageCharset TEXT NOT NULL,
CodeVersion INTEGER NOT NULL,
IsLink SMALLINT NOT NULL,
IsDownload SMALLINT NOT NULL,
IsNotBounce SMALLINT NOT NULL,
FUniqID BIGINT NOT NULL,
OriginalURL TEXT NOT NULL,
HID INTEGER NOT NULL,
IsOldCounter SMALLINT NOT NULL,
IsEvent SMALLINT NOT NULL,
IsParameter SMALLINT NOT NULL,
DontCountHits SMALLINT NOT NULL,
WithHash SMALLINT NOT NULL,
HitColor CHAR NOT NULL,
LocalEventTime TIMESTAMP NOT NULL,
Age SMALLINT NOT NULL,
Sex SMALLINT NOT NULL,
Income SMALLINT NOT NULL,
Interests SMALLINT NOT NULL,
Robotness SMALLINT NOT NULL,
RemoteIP INTEGER NOT NULL,
WindowName INTEGER NOT NULL,
OpenerName INTEGER NOT NULL,
HistoryLength SMALLINT NOT NULL,
BrowserLanguage TEXT NOT NULL,
BrowserCountry TEXT NOT NULL,
SocialNetwork TEXT NOT NULL,
SocialAction TEXT NOT NULL,
HTTPError SMALLINT NOT NULL,
SendTiming INTEGER NOT NULL,
DNSTiming INTEGER NOT NULL,
ConnectTiming INTEGER NOT NULL,
ResponseStartTiming INTEGER NOT NULL,
ResponseEndTiming INTEGER NOT NULL,
FetchTiming INTEGER NOT NULL,
SocialSourceNetworkID SMALLINT NOT NULL,
SocialSourcePage TEXT NOT NULL,
ParamPrice BIGINT NOT NULL,
ParamOrderID TEXT NOT NULL,
ParamCurrency TEXT NOT NULL,
ParamCurrencyID SMALLINT NOT NULL,
OpenstatServiceName TEXT NOT NULL,
OpenstatCampaignID TEXT NOT NULL,
OpenstatAdID TEXT NOT NULL,
OpenstatSourceID TEXT NOT NULL,
UTMSource TEXT NOT NULL,
UTMMedium TEXT NOT NULL,
UTMCampaign TEXT NOT NULL,
UTMContent TEXT NOT NULL,
UTMTerm TEXT NOT NULL,
FromTag TEXT NOT NULL,
HasGCLID SMALLINT NOT NULL,
RefererHash BIGINT NOT NULL,
URLHash BIGINT NOT NULL,
CLID INTEGER NOT NULL
)
ENGINE = YtTable();
1 change: 1 addition & 0 deletions chyt/fill_data.sql
@@ -0,0 +1 @@
INSERT INTO "//home/hits" SELECT * FROM url('https://datasets.clickhouse.com/hits_compatible/hits.tsv.gz', 'TSV')
43 changes: 43 additions & 0 deletions chyt/queries.sql
@@ -0,0 +1,43 @@
SELECT COUNT(*) FROM '//home/hits';
SELECT COUNT(*) FROM '//home/hits' WHERE AdvEngineID != 0;
SELECT SUM(AdvEngineID), COUNT(*), AVG(ResolutionWidth) FROM '//home/hits';
SELECT AVG(UserID) FROM '//home/hits';
SELECT COUNT(DISTINCT UserID) FROM '//home/hits';
SELECT COUNT(DISTINCT SearchPhrase) FROM '//home/hits';
SELECT MIN(EventDate), MAX(EventDate) FROM '//home/hits';
SELECT AdvEngineID, COUNT(*) FROM '//home/hits' WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY COUNT(*) DESC;
SELECT RegionID, COUNT(DISTINCT UserID) AS u FROM '//home/hits' GROUP BY RegionID ORDER BY u DESC LIMIT 10;
SELECT RegionID, SUM(AdvEngineID), COUNT(*) AS c, AVG(ResolutionWidth), COUNT(DISTINCT UserID) FROM '//home/hits' GROUP BY RegionID ORDER BY c DESC LIMIT 10;
SELECT MobilePhoneModel, COUNT(DISTINCT UserID) AS u FROM '//home/hits' WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;
SELECT MobilePhone, MobilePhoneModel, COUNT(DISTINCT UserID) AS u FROM '//home/hits' WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;
SELECT SearchPhrase, COUNT(*) AS c FROM '//home/hits' WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
SELECT SearchPhrase, COUNT(DISTINCT UserID) AS u FROM '//home/hits' WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
SELECT SearchEngineID, SearchPhrase, COUNT(*) AS c FROM '//home/hits' WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;
SELECT UserID, COUNT(*) FROM '//home/hits' GROUP BY UserID ORDER BY COUNT(*) DESC LIMIT 10;
SELECT UserID, SearchPhrase, COUNT(*) FROM '//home/hits' GROUP BY UserID, SearchPhrase ORDER BY COUNT(*) DESC LIMIT 10;
SELECT UserID, SearchPhrase, COUNT(*) FROM '//home/hits' GROUP BY UserID, SearchPhrase LIMIT 10;
SELECT UserID, extract(minute FROM EventTime) AS m, SearchPhrase, COUNT(*) FROM '//home/hits' GROUP BY UserID, m, SearchPhrase ORDER BY COUNT(*) DESC LIMIT 10;
SELECT UserID FROM '//home/hits' WHERE UserID = 435090932899640449;
SELECT COUNT(*) FROM '//home/hits' WHERE URL LIKE '%google%';
SELECT SearchPhrase, MIN(URL), COUNT(*) AS c FROM '//home/hits' WHERE URL LIKE '%google%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
SELECT SearchPhrase, MIN(URL), MIN(Title), COUNT(*) AS c, COUNT(DISTINCT UserID) FROM '//home/hits' WHERE Title LIKE '%Google%' AND URL NOT LIKE '%.google.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
SELECT * FROM '//home/hits' WHERE URL LIKE '%google%' ORDER BY EventTime LIMIT 10;
SELECT SearchPhrase FROM '//home/hits' WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;
SELECT SearchPhrase FROM '//home/hits' WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;
SELECT SearchPhrase FROM '//home/hits' WHERE SearchPhrase != '' ORDER BY EventTime, SearchPhrase LIMIT 10;
SELECT CounterID, AVG(length(URL)) AS l, COUNT(*) AS c FROM '//home/hits' WHERE URL != '' GROUP BY CounterID HAVING COUNT(*) > 100000 ORDER BY l DESC LIMIT 25;
SELECT REGEXP_REPLACE(Referer, '^https?://(?:www\.)?([^/]+)/.*$', '\1') AS k, AVG(length(Referer)) AS l, COUNT(*) AS c, MIN(Referer) FROM '//home/hits' WHERE Referer != '' GROUP BY k HAVING COUNT(*) > 100000 ORDER BY l DESC LIMIT 25;
SELECT SUM(ResolutionWidth), SUM(ResolutionWidth + 1), SUM(ResolutionWidth + 2), SUM(ResolutionWidth + 3), SUM(ResolutionWidth + 4), SUM(ResolutionWidth + 5), SUM(ResolutionWidth + 6), SUM(ResolutionWidth + 7), SUM(ResolutionWidth + 8), SUM(ResolutionWidth + 9), SUM(ResolutionWidth + 10), SUM(ResolutionWidth + 11), SUM(ResolutionWidth + 12), SUM(ResolutionWidth + 13), SUM(ResolutionWidth + 14), SUM(ResolutionWidth + 15), SUM(ResolutionWidth + 16), SUM(ResolutionWidth + 17), SUM(ResolutionWidth + 18), SUM(ResolutionWidth + 19), SUM(ResolutionWidth + 20), SUM(ResolutionWidth + 21), SUM(ResolutionWidth + 22), SUM(ResolutionWidth + 23), SUM(ResolutionWidth + 24), SUM(ResolutionWidth + 25), SUM(ResolutionWidth + 26), SUM(ResolutionWidth + 27), SUM(ResolutionWidth + 28), SUM(ResolutionWidth + 29), SUM(ResolutionWidth + 30), SUM(ResolutionWidth + 31), SUM(ResolutionWidth + 32), SUM(ResolutionWidth + 33), SUM(ResolutionWidth + 34), SUM(ResolutionWidth + 35), SUM(ResolutionWidth + 36), SUM(ResolutionWidth + 37), SUM(ResolutionWidth + 38), SUM(ResolutionWidth + 39), SUM(ResolutionWidth + 40), SUM(ResolutionWidth + 41), SUM(ResolutionWidth + 42), SUM(ResolutionWidth + 43), SUM(ResolutionWidth + 44), SUM(ResolutionWidth + 45), SUM(ResolutionWidth + 46), SUM(ResolutionWidth + 47), SUM(ResolutionWidth + 48), SUM(ResolutionWidth + 49), SUM(ResolutionWidth + 50), SUM(ResolutionWidth + 51), SUM(ResolutionWidth + 52), SUM(ResolutionWidth + 53), SUM(ResolutionWidth + 54), SUM(ResolutionWidth + 55), SUM(ResolutionWidth + 56), SUM(ResolutionWidth + 57), SUM(ResolutionWidth + 58), SUM(ResolutionWidth + 59), SUM(ResolutionWidth + 60), SUM(ResolutionWidth + 61), SUM(ResolutionWidth + 62), SUM(ResolutionWidth + 63), SUM(ResolutionWidth + 64), SUM(ResolutionWidth + 65), SUM(ResolutionWidth + 66), SUM(ResolutionWidth + 67), SUM(ResolutionWidth + 68), SUM(ResolutionWidth + 69), SUM(ResolutionWidth + 70), SUM(ResolutionWidth + 71), SUM(ResolutionWidth + 72), SUM(ResolutionWidth + 73), SUM(ResolutionWidth + 74), SUM(ResolutionWidth + 75), SUM(ResolutionWidth + 76), SUM(ResolutionWidth + 77), SUM(ResolutionWidth + 78), SUM(ResolutionWidth + 79), SUM(ResolutionWidth + 80), SUM(ResolutionWidth + 81), SUM(ResolutionWidth + 82), SUM(ResolutionWidth + 83), SUM(ResolutionWidth + 84), SUM(ResolutionWidth + 85), SUM(ResolutionWidth + 86), SUM(ResolutionWidth + 87), SUM(ResolutionWidth + 88), SUM(ResolutionWidth + 89) FROM '//home/hits';
SELECT SearchEngineID, ClientIP, COUNT(*) AS c, SUM(IsRefresh), AVG(ResolutionWidth) FROM '//home/hits' WHERE SearchPhrase != '' GROUP BY SearchEngineID, ClientIP ORDER BY c DESC LIMIT 10;
SELECT WatchID, ClientIP, COUNT(*) AS c, SUM(IsRefresh), AVG(ResolutionWidth) FROM '//home/hits' WHERE SearchPhrase != '' GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;
SELECT WatchID, ClientIP, COUNT(*) AS c, SUM(IsRefresh), AVG(ResolutionWidth) FROM '//home/hits' GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;
SELECT URL, COUNT(*) AS c FROM '//home/hits' GROUP BY URL ORDER BY c DESC LIMIT 10;
SELECT 1, URL, COUNT(*) AS c FROM '//home/hits' GROUP BY 1, URL ORDER BY c DESC LIMIT 10;
SELECT ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3, COUNT(*) AS c FROM '//home/hits' GROUP BY ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3 ORDER BY c DESC LIMIT 10;
SELECT URL, COUNT(*) AS PageViews FROM '//home/hits' WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND DontCountHits = 0 AND IsRefresh = 0 AND URL != '' GROUP BY URL ORDER BY PageViews DESC LIMIT 10;
SELECT Title, COUNT(*) AS PageViews FROM '//home/hits' WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND DontCountHits = 0 AND IsRefresh = 0 AND Title != '' GROUP BY Title ORDER BY PageViews DESC LIMIT 10;
SELECT URL, COUNT(*) AS PageViews FROM '//home/hits' WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND IsRefresh = 0 AND IsLink != 0 AND IsDownload = 0 GROUP BY URL ORDER BY PageViews DESC LIMIT 10 OFFSET 1000;
SELECT TraficSourceID, SearchEngineID, AdvEngineID, CASE WHEN (SearchEngineID = 0 AND AdvEngineID = 0) THEN Referer ELSE '' END AS Src, URL AS Dst, COUNT(*) AS PageViews FROM '//home/hits' WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND IsRefresh = 0 GROUP BY TraficSourceID, SearchEngineID, AdvEngineID, Src, Dst ORDER BY PageViews DESC LIMIT 10 OFFSET 1000;
SELECT URLHash, EventDate, COUNT(*) AS PageViews FROM '//home/hits' WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND IsRefresh = 0 AND TraficSourceID IN (-1, 6) AND RefererHash = 3594120000172545465 GROUP BY URLHash, EventDate ORDER BY PageViews DESC LIMIT 10 OFFSET 100;
SELECT WindowClientWidth, WindowClientHeight, COUNT(*) AS PageViews FROM '//home/hits' WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND IsRefresh = 0 AND DontCountHits = 0 AND URLHash = 2868770270353813622 GROUP BY WindowClientWidth, WindowClientHeight ORDER BY PageViews DESC LIMIT 10 OFFSET 10000;
SELECT DATE_TRUNC('minute', EventTime) AS M, COUNT(*) AS PageViews FROM '//home/hits' WHERE CounterID = 62 AND EventDate >= '2013-07-14' AND EventDate <= '2013-07-15' AND IsRefresh = 0 AND DontCountHits = 0 GROUP BY DATE_TRUNC('minute', EventTime) ORDER BY DATE_TRUNC('minute', EventTime) LIMIT 10 OFFSET 1000;
60 changes: 60 additions & 0 deletions chyt/results/yt.192GB_YC.json
@@ -0,0 +1,60 @@

{
"system": "CHYT",
"date": "2024-09-16",
"machine": "192GB",
Member:
I am confused. L. 6 says "serverless" which typically means the results were measured in a database-as-a-service offering (such as ClickHouse Cloud). Was that the case?

If not, it would be good to specify the exact machine specs for reproducibility, see e.g. duckdb/results/c5.4xlarge.json.

Author:

For CHYT with 48, 96 and 192 GB we use 1, 2 and 4 instances with 12 vCPU and 48 GB RAM each.
For CHYT with 360 and 720 GB, 9 and 18 instances with 10 vCPU and 40 GB RAM each.

You only configure the count and size of the instances; YTsaurus then schedules them across the computational nodes of the cluster.

"cluster_size": "serverless",
"comment": "",

"tags": ["C++", "column-oriented", "ClickHouse derivative", "managed", "YT"],

"load_time": 0,
"data_size": 11991598975,

"result": [[2.372610368, 0.177555991, 0.167831612],
[1.89569562, 0.163561879, 0.125731783],
[0.397375625, 0.143236637, 0.142898217],
[0.159063975, 0.219494644, 0.142977358],
[2.445798601, 2.463385924, 2.45236644],
[1.642306904, 1.120649371, 1.186680465],
[0.145678347, 0.134829491, 0.111869894],
[0.137467588, 0.131913797, 0.121479928],
[1.038950099, 0.841041871, 0.815657075],
[0.888393242, 0.901503487, 0.850001405],
[0.300008495, 0.295253981, 0.262494617],
[0.332864706, 0.316992408, 0.284549721],
[1.282832704, 1.211279681, 1.203357621],
[1.870151975, 1.879827686, 2.061536338],
[1.464882239, 1.392805612, 1.37117606],
[0.669433287, 0.62684469, 0.609748707],
[2.725962414, 2.440456367, 2.563546639],
[0.697352241, 0.716481149, 0.71570874],
[3.955110766, 3.943280412, 3.893673244],
[0.184934938, 0.148157406, 0.12426291],
[2.520929897, 0.419806388, 0.429551375],
[0.444997613, 0.434422441, 0.429956294],
[1.996560203, 0.628015168, 0.585114861],
[14.824231605, 2.397589198, 2.2803815],
[0.252897684, 0.257396369, 0.25015462],
[0.213346949, 0.223027753, 0.22175948],
[0.232342357, 0.225393963, 0.231525915],
[0.619598816, 0.589334994, 0.60943],
[4.611859865, 4.729652665, 4.405050704],
[0.50550661, 0.520710299, 0.492502187],
[0.640109449, 0.626248477, 0.616935419],
[1.158579925, 1.048573371, 1.122588651],
[7.386813284, 5.736833249, 6.593178508],
[6.643162826, 6.72984338, 6.644774073],
[6.720191035, 6.74018319, 6.750590849],
[1.113060311, 1.08857216, 1.104773142],
[1.866018689, 0.26290965, 0.248918084],
[0.299527902, 0.115307374, 0.115341251],
[0.132207052, 0.138025382, 0.119330326],
[0.995906952, 0.355747083, 0.352949324],
[1.127008209, 0.128025554, 0.097888539],
[0.211839809, 0.099082926, 0.096308393],
[0.12547587, 0.100394788, 0.078670296]
]

}

60 changes: 60 additions & 0 deletions chyt/results/yt.360GB_YC.json
@@ -0,0 +1,60 @@

{
"system": "CHYT",
"date": "2024-09-16",
"machine": "360GB",
"cluster_size": "serverless",
"comment": "",

"tags": ["C++", "column-oriented", "ClickHouse derivative", "managed", "YT"],

"load_time": 0,
"data_size": 11991598975,

"result": [[0.128183824, 0.094088621, 0.09780898],
[0.110835469, 0.105759888, 0.094570234],
[0.129915728, 0.102583752, 0.103870001],
[0.103502534, 0.103226616, 0.103748437],
[2.109089084, 2.083779273, 2.112884608],
[1.020183734, 0.973862835, 0.940307535],
[0.096076448, 0.101914965, 0.096355669],
[0.102592734, 0.101062682, 0.102341893],
[0.805407437, 0.786196311, 0.778091253],
[0.827499106, 0.799945852, 0.790301742],
[0.23059338, 0.222442364, 0.212345667],
[0.235147388, 0.227044594, 0.224793301],
[0.770579808, 0.760533881, 0.778474308],
[1.159523393, 1.14649681, 1.134973884],
[0.853472293, 0.836768906, 0.809505901],
[0.473811343, 0.526539064, 0.555403265],
[1.706818407, 1.69311605, 1.563099254],
[0.414241724, 0.43466755, 0.432693477],
[3.420935664, 3.57643173, 3.385704175],
[0.099313286, 0.118718491, 0.108467089],
[0.293844253, 0.275250081, 0.270439116],
[0.312499051, 0.300432862, 0.284370244],
[0.367348574, 0.34577317, 0.353885312],
[1.496499058, 1.432554378, 1.35307467],
[0.18977799, 0.159872566, 0.160621156],
[0.158556102, 0.1485595, 0.149953303],
[0.155616606, 0.160443596, 0.164656278],
[0.48669893, 0.462240426, 0.447592216],
[2.609391381, 2.654156135, 2.628059019],
[0.333486019, 0.290188037, 0.298685464],
[0.386230763, 0.361219358, 0.397969611],
[0.692538712, 0.680270727, 0.650636814],
[4.357208473, 4.367277664, 4.278229464],
[4.045840235, 4.137789511, 4.212205446],
[4.107636932, 4.007328571, 4.128450318],
[0.931051726, 0.889028639, 0.896324353],
[1.175944096, 0.185351735, 0.159400516],
[0.315863193, 0.10706978, 0.097494635],
[0.10664377, 0.092488926, 0.090972588],
[0.561070017, 0.289356023, 0.290414027],
[0.497854143, 0.095818995, 0.09152957],
[0.11312576, 0.082238359, 0.081518631],
[0.082093051, 0.063105245, 0.072934379]
]

}
