From 1b8b1f1fdf05603993710b57c7506057d92ff21c Mon Sep 17 00:00:00 2001
From: GitHub Actions
Date: Thu, 9 Jan 2025 15:49:15 +0000
Subject: [PATCH] site deploy

Auto-generated via `{sandpaper}`
Source  : f6ae925e653d8d848e7d61b30bebcfa5a84d5e45
Branch  : md-outputs
Author  : GitHub Actions
Time    : 2025-01-09 15:49:01 +0000
Message : markdown source builds

Auto-generated via `{sandpaper}`
Source  : 4020e68bc68dafe5732cc9396e0906aeb7415464
Branch  : main
Author  : Chris Endemann
Time    : 2025-01-09 15:48:18 +0000
Message : Update Training-models-in-SageMaker-notebooks.md
---
 Training-models-in-SageMaker-notebooks.html    | 22 +++++++++++--------
 aio.html                                       | 22 ++++++++++++-------
 ...raining-models-in-SageMaker-notebooks.html  | 22 +++++++++++--------
 instructor/aio.html                            | 22 ++++++++++++-------
 md5sum.txt                                     |  2 +-
 pkgdown.yml                                    |  2 +-
 6 files changed, 56 insertions(+), 36 deletions(-)

diff --git a/Training-models-in-SageMaker-notebooks.html b/Training-models-in-SageMaker-notebooks.html
index 05cf1de..62153fc 100644
--- a/Training-models-in-SageMaker-notebooks.html
+++ b/Training-models-in-SageMaker-notebooks.html
@@ -1414,17 +1414,21 @@
 Cost of distributed computing
 
-In summary: - Start by upgrading to a more powerful instance
-(Option 1) for datasets up to 10 GB and moderately complex
-models. A single, more powerful, instance is usually more cost-effective
-for smaller workloads and where time isn’t critical. Running initial
-tests with a single instance can also provide a benchmark. You can then
+In summary:
+
+• Start by upgrading to a more powerful instance (Option
+1) for datasets up to 10 GB and moderately complex models. A
+single, more powerful, instance is usually more cost-effective for
+smaller workloads and where time isn’t critical. Running initial tests
+with a single instance can also provide a benchmark. You can then
 experiment with small increases in instance count to find a balance
 between cost and time savings, particularly considering communication
-overheads that affect parallel efficiency. - Consider
-distributed training across multiple instances (Option 2) only
-when dataset size, model complexity, or training time demand it.
+overheads that affect parallel efficiency.
+• Consider distributed training across multiple instances
+(Option 2) only when dataset size, model complexity, or
+training time demand it.
 
 XGBoost’s distributed training mechanism
diff --git a/aio.html b/aio.html
index 5ddc786..d1b473e 100644
--- a/aio.html
+++ b/aio.html
@@ -3467,16 +3467,22 @@
 Cost of distributed computing
 
-In summary: - Start by upgrading to a more powerful instance
-(Option 1) for datasets up to 10 GB and moderately complex
-models. A single, more powerful, instance is usually more cost-effective
-for smaller workloads and where time isn’t critical. Running initial
-tests with a single instance can also provide a benchmark. You can then
+In summary:
+
+• Start by upgrading to a more powerful instance (Option
+1) for datasets up to 10 GB and moderately complex models. A
+single, more powerful, instance is usually more cost-effective for
+smaller workloads and where time isn’t critical. Running initial tests
+with a single instance can also provide a benchmark. You can then
 experiment with small increases in instance count to find a balance
 between cost and time savings, particularly considering communication
-overheads that affect parallel efficiency. - Consider
-distributed training across multiple instances (Option 2) only
-when dataset size, model complexity, or training time demand it.
+overheads that affect parallel efficiency.
+• Consider distributed training across multiple instances
+(Option 2) only when dataset size, model complexity, or
+training time demand it.
diff --git a/instructor/Training-models-in-SageMaker-notebooks.html b/instructor/Training-models-in-SageMaker-notebooks.html
index 2e42524..3ea086b 100644
--- a/instructor/Training-models-in-SageMaker-notebooks.html
+++ b/instructor/Training-models-in-SageMaker-notebooks.html
@@ -1416,17 +1416,21 @@
 Cost of distributed computing
 
-In summary: - Start by upgrading to a more powerful instance
-(Option 1) for datasets up to 10 GB and moderately complex
-models. A single, more powerful, instance is usually more cost-effective
-for smaller workloads and where time isn’t critical. Running initial
-tests with a single instance can also provide a benchmark. You can then
+In summary:
+
+• Start by upgrading to a more powerful instance (Option
+1) for datasets up to 10 GB and moderately complex models. A
+single, more powerful, instance is usually more cost-effective for
+smaller workloads and where time isn’t critical. Running initial tests
+with a single instance can also provide a benchmark. You can then
 experiment with small increases in instance count to find a balance
 between cost and time savings, particularly considering communication
-overheads that affect parallel efficiency. - Consider
-distributed training across multiple instances (Option 2) only
-when dataset size, model complexity, or training time demand it.
+overheads that affect parallel efficiency.
+• Consider distributed training across multiple instances
+(Option 2) only when dataset size, model complexity, or
+training time demand it.
 
 XGBoost’s distributed training mechanism
diff --git a/instructor/aio.html b/instructor/aio.html
index c666506..bb70a6e 100644
--- a/instructor/aio.html
+++ b/instructor/aio.html
@@ -3475,16 +3475,22 @@
 Cost of distributed computing
 
-In summary: - Start by upgrading to a more powerful instance
-(Option 1) for datasets up to 10 GB and moderately complex
-models. A single, more powerful, instance is usually more cost-effective
-for smaller workloads and where time isn’t critical. Running initial
-tests with a single instance can also provide a benchmark. You can then
+In summary:
+
+• Start by upgrading to a more powerful instance (Option
+1) for datasets up to 10 GB and moderately complex models. A
+single, more powerful, instance is usually more cost-effective for
+smaller workloads and where time isn’t critical. Running initial tests
+with a single instance can also provide a benchmark. You can then
 experiment with small increases in instance count to find a balance
 between cost and time savings, particularly considering communication
-overheads that affect parallel efficiency. - Consider
-distributed training across multiple instances (Option 2) only
-when dataset size, model complexity, or training time demand it.
+overheads that affect parallel efficiency.
+• Consider distributed training across multiple instances
+(Option 2) only when dataset size, model complexity, or
+training time demand it.
diff --git a/md5sum.txt b/md5sum.txt
index ee48418..35c145e 100644
--- a/md5sum.txt
+++ b/md5sum.txt
@@ -9,7 +9,7 @@
 "episodes/SageMaker-notebooks-as-controllers.md" "7b44f533d49559aa691b8ab2574b4e81" "site/built/SageMaker-notebooks-as-controllers.md" "2024-11-06"
 "episodes/Accessing-S3-via-SageMaker-notebooks.md" "65e591a493b3bba8fdcfa29a7d00dd13" "site/built/Accessing-S3-via-SageMaker-notebooks.md" "2024-11-14"
 "episodes/Interacting-with-code-repo.md" "105dace64e3a1ea6570d314e4b3ccfff" "site/built/Interacting-with-code-repo.md" "2024-11-06"
-"episodes/Training-models-in-SageMaker-notebooks.md" "29cc9de0af426d24af5d7245bc46fe51" "site/built/Training-models-in-SageMaker-notebooks.md" "2025-01-09"
+"episodes/Training-models-in-SageMaker-notebooks.md" "d809d6b042758c9e56c85dc57f124f88" "site/built/Training-models-in-SageMaker-notebooks.md" "2025-01-09"
 "episodes/Training-models-in-SageMaker-notebooks-part2.md" "35107ac2e6cb99307714b0f25b2576c4" "site/built/Training-models-in-SageMaker-notebooks-part2.md" "2024-11-07"
 "episodes/Hyperparameter-tuning.md" "c9fe9c20d437dc2f88315438ac6460db" "site/built/Hyperparameter-tuning.md" "2024-11-07"
 "episodes/Resource-management-cleanup.md" "bb9671676d8d86679b598531c2e294b0" "site/built/Resource-management-cleanup.md" "2024-11-08"
diff --git a/pkgdown.yml b/pkgdown.yml
index 10f4311..1a4c40c 100644
--- a/pkgdown.yml
+++ b/pkgdown.yml
@@ -2,4 +2,4 @@ pandoc: 3.1.11
 pkgdown: 2.1.1
 pkgdown_sha: ~
 articles: {}
-last_built: 2025-01-09T15:48Z
+last_built: 2025-01-09T15:49Z
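
The hunks above rework the lesson's summary of single-instance versus distributed training into a bulleted list. As a rough illustration of where that choice surfaces in code, below is a minimal sketch using the SageMaker Python SDK's XGBoost estimator; the role ARN, entry-point script, instance types, and S3 path are hypothetical placeholders, not values taken from the lesson.

# Sketch: the instance_count / instance_type knobs behind "Option 1" vs "Option 2".
# Assumes the SageMaker Python SDK v2; role, script, and S3 path are placeholders.
import sagemaker
from sagemaker.xgboost import XGBoost

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/ExampleSageMakerRole"  # hypothetical role

# Option 1: a single, more powerful instance — usually the cheaper starting point
# for datasets up to ~10 GB, and it gives a runtime benchmark to compare against.
single_instance = XGBoost(
    entry_point="train_xgboost.py",   # hypothetical training script
    framework_version="1.7-1",
    role=role,
    instance_type="ml.m5.2xlarge",
    instance_count=1,
    sagemaker_session=session,
)

# Option 2: distributed training across several instances — worth the extra cost
# and communication overhead only when data size, model complexity, or training
# time demand it.
distributed = XGBoost(
    entry_point="train_xgboost.py",
    framework_version="1.7-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=3,
    sagemaker_session=session,
)

# single_instance.fit({"train": "s3://example-bucket/train/"})  # placeholder S3 path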