From 1b8b1f1fdf05603993710b57c7506057d92ff21c Mon Sep 17 00:00:00 2001
From: GitHub Actions
diff --git a/instructor/Training-models-in-SageMaker-notebooks.html b/instructor/Training-models-in-SageMaker-notebooks.html
index 2e42524..3ea086b 100644
--- a/instructor/Training-models-in-SageMaker-notebooks.html
+++ b/instructor/Training-models-in-SageMaker-notebooks.html
@@ -1416,17 +1416,21 @@
-In summary: - Start by upgrading to a more powerful instance
-(Option 1) for datasets up to 10 GB and moderately complex
-models. A single, more powerful, instance is usually more cost-effective
-for smaller workloads and where time isn’t critical. Running initial
-tests with a single instance can also provide a benchmark. You can then
+In summary:
+- Start by upgrading to a more powerful instance (Option
+1) for datasets up to 10 GB and moderately complex models. A
+single, more powerful, instance is usually more cost-effective for
+smaller workloads and where time isn’t critical. Running initial tests
+with a single instance can also provide a benchmark. You can then
 experiment with small increases in instance count to find a balance
 between cost and time savings, particularly considering communication
-overheads that affect parallel efficiency. - Consider
-distributed training across multiple instances (Option 2) only
-when dataset size, model complexity, or training time demand it.
+overheads that affect parallel efficiency.
+- Consider distributed training across multiple instances
+(Option 2) only when dataset size, model complexity, or
+training time demand it.
diff --git a/md5sum.txt b/md5sum.txt
index ee48418..35c145e 100644
--- a/md5sum.txt
+++ b/md5sum.txt
@@ -9,7 +9,7 @@
 "episodes/SageMaker-notebooks-as-controllers.md" "7b44f533d49559aa691b8ab2574b4e81" "site/built/SageMaker-notebooks-as-controllers.md" "2024-11-06"
 "episodes/Accessing-S3-via-SageMaker-notebooks.md" "65e591a493b3bba8fdcfa29a7d00dd13" "site/built/Accessing-S3-via-SageMaker-notebooks.md" "2024-11-14"
 "episodes/Interacting-with-code-repo.md" "105dace64e3a1ea6570d314e4b3ccfff" "site/built/Interacting-with-code-repo.md" "2024-11-06"
-"episodes/Training-models-in-SageMaker-notebooks.md" "29cc9de0af426d24af5d7245bc46fe51" "site/built/Training-models-in-SageMaker-notebooks.md" "2025-01-09"
+"episodes/Training-models-in-SageMaker-notebooks.md" "d809d6b042758c9e56c85dc57f124f88" "site/built/Training-models-in-SageMaker-notebooks.md" "2025-01-09"
 "episodes/Training-models-in-SageMaker-notebooks-part2.md" "35107ac2e6cb99307714b0f25b2576c4" "site/built/Training-models-in-SageMaker-notebooks-part2.md" "2024-11-07"
 "episodes/Hyperparameter-tuning.md" "c9fe9c20d437dc2f88315438ac6460db" "site/built/Hyperparameter-tuning.md" "2024-11-07"
 "episodes/Resource-management-cleanup.md" "bb9671676d8d86679b598531c2e294b0" "site/built/Resource-management-cleanup.md" "2024-11-08"
diff --git a/pkgdown.yml b/pkgdown.yml
index 10f4311..1a4c40c 100644
--- a/pkgdown.yml
+++ b/pkgdown.yml
@@ -2,4 +2,4 @@ pandoc: 3.1.11
 pkgdown: 2.1.1
 pkgdown_sha: ~
 articles: {}
-last_built: 2025-01-09T15:48Z
+last_built: 2025-01-09T15:49Z
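For context, the HTML hunk above reworks the lesson's closing summary about when to scale up a single training instance (Option 1) versus scaling out to several instances (Option 2). A minimal sketch of how that choice maps onto the SageMaker Python SDK follows; the PyTorch estimator, entry-point script name, role ARN, and S3 path are illustrative assumptions, not code from this commit or the lesson.

```python
# Illustrative sketch only -- estimator class, script name, role ARN, and S3 path
# are assumptions, not taken from the lesson or this commit.
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder role

# Option 1: scale up -- one larger instance, usually the more cost-effective
# choice for datasets up to ~10 GB and moderately complex models.
option_1 = PyTorch(
    entry_point="train.py",        # hypothetical training script
    role=role,
    framework_version="2.1",
    py_version="py310",
    instance_type="ml.m5.2xlarge",
    instance_count=1,
    sagemaker_session=session,
)

# Option 2: scale out -- several instances, worth it only when dataset size,
# model complexity, or training time demand it; communication overhead between
# instances reduces parallel efficiency, so increase the count in small steps.
option_2 = PyTorch(
    entry_point="train.py",
    role=role,
    framework_version="2.1",
    py_version="py310",
    instance_type="ml.m5.xlarge",
    instance_count=2,              # benchmark against the single-instance run first
    sagemaker_session=session,
)

# option_1.fit({"train": "s3://your-bucket/train/"})  # placeholder S3 input
```

Running Option 1 first gives the cost and runtime baseline that the summary recommends comparing against before experimenting with higher instance counts.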