
Commit 4ed2ab7

Rebuild
1 parent 6af07fd commit 4ed2ab7


194 files changed (+5870, -9957 lines)


docs/_downloads/032d653a4f5a9c1ec32b9fc7c989ffe1/seq2seq_translation_tutorial.ipynb

4 additions, 4 deletions (large diff not rendered by default)

docs/_downloads/03a48646520c277662581e858e680809/model_parallel_tutorial.ipynb

2 additions, 2 deletions

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\nSingle-Machine Model Parallel Best Practices\n================================\n**Author**: `Shen Li <https://mrshenli.github.io/>`_\n\nModel parallel is widely-used in distributed training\ntechniques. Previous posts have explained how to use\n`DataParallel <https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html>`_\nto train a neural network on multiple GPUs; this feature replicates the\nsame model to all GPUs, where each GPU consumes a different partition of the\ninput data. Although it can significantly accelerate the training process, it\ndoes not work for some use cases where the model is too large to fit into a\nsingle GPU. This post shows how to solve that problem by using **model parallel**,\nwhich, in contrast to ``DataParallel``, splits a single model onto different GPUs,\nrather than replicating the entire model on each GPU (to be concrete, say a model\n``m`` contains 10 layers: when using ``DataParallel``, each GPU will have a\nreplica of each of these 10 layers, whereas when using model parallel on two GPUs,\neach GPU could host 5 layers).\n\nThe high-level idea of model parallel is to place different sub-networks of a\nmodel onto different devices, and implement the ``forward`` method accordingly\nto move intermediate outputs across devices. As only part of a model operates\non any individual device, a set of devices can collectively serve a larger\nmodel. In this post, we will not try to construct huge models and squeeze them\ninto a limited number of GPUs. Instead, this post focuses on showing the idea\nof model parallel. It is up to the readers to apply the ideas to real-world\napplications.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>For distributed model parallel training where a model spans multiple\n servers, please refer to\n `Getting Started With Distributed RPC Framework <rpc_tutorial.html>`__\n for examples and details.</p></div>\n\nBasic Usage\n-----------\n\n"
+"\nSingle-Machine Model Parallel Best Practices\n================================\n**Author**: `Shen Li <https://mrshenli.github.io/>`_\n\nModel parallel is widely-used in distributed training\ntechniques. Previous posts have explained how to use\n`DataParallel <https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html>`_\nto train a neural network on multiple GPUs; this feature replicates the\nsame model to all GPUs, where each GPU consumes a different partition of the\ninput data. Although it can significantly accelerate the training process, it\ndoes not work for some use cases where the model is too large to fit into a\nsingle GPU. This post shows how to solve that problem by using **model parallel**,\nwhich, in contrast to ``DataParallel``, splits a single model onto different GPUs,\nrather than replicating the entire model on each GPU (to be concrete, say a model\n``m`` contains 10 layers: when using ``DataParallel``, each GPU will have a\nreplica of each of these 10 layers, whereas when using model parallel on two GPUs,\neach GPU could host 5 layers).\n\nThe high-level idea of model parallel is to place different sub-networks of a\nmodel onto different devices, and implement the ``forward`` method accordingly\nto move intermediate outputs across devices. As only part of a model operates\non any individual device, a set of devices can collectively serve a larger\nmodel. In this post, we will not try to construct huge models and squeeze them\ninto a limited number of GPUs. Instead, this post focuses on showing the idea\nof model parallel. It is up to the readers to apply the ideas to real-world\napplications.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>For distributed model parallel training where a model spans multiple\n servers, please refer to\n `Getting Started With Distributed RPC Framework <rpc_tutorial.html>`__\n for examples and details.</p></div>\n\nBasic Usage\n-----------\n"
 ]
 },
 {
@@ -175,7 +175,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.7"
+"version": "3.7.4"
 }
 },
 "nbformat": 4,

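The tutorial text in this hunk describes the core pattern: pin each sub-network to its own device and move intermediate outputs between devices inside ``forward``. A minimal sketch of that idea, assuming a machine with two CUDA devices (the ToyModel name and layer sizes here are illustrative, not taken from this commit):

    import torch
    import torch.nn as nn

    class ToyModel(nn.Module):
        """One model split across two GPUs."""
        def __init__(self):
            super().__init__()
            # Each sub-network lives on its own device.
            self.seq1 = nn.Sequential(nn.Linear(10, 10), nn.ReLU()).to('cuda:0')
            self.seq2 = nn.Sequential(nn.Linear(10, 5)).to('cuda:1')

        def forward(self, x):
            # Move the intermediate activation across devices by hand.
            x = self.seq1(x.to('cuda:0'))
            return self.seq2(x.to('cuda:1'))

    model = ToyModel()
    out = model(torch.randn(20, 10))  # the output tensor lives on cuda:1

When training such a model, the labels must also be moved to ``cuda:1`` so the loss is computed on the device that holds the output.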
docs/_downloads/0abb91e66579f9acfdfbf93bc4b69955/named_tensor_tutorial.ipynb

2 additions, 2 deletions

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n(experimental) Introduction to Named Tensors in PyTorch\n*******************************************************\n**Author**: `Richard Zou <https://github.com/zou3519>`_\n\nNamed Tensors aim to make tensors easier to use by allowing users to associate\nexplicit names with tensor dimensions. In most cases, operations that take\ndimension parameters will accept dimension names, avoiding the need to track\ndimensions by position. In addition, named tensors use names to automatically\ncheck that APIs are being used correctly at runtime, providing extra safety.\nNames can also be used to rearrange dimensions, for example, to support\n\"broadcasting by name\" rather than \"broadcasting by position\".\n\nThis tutorial is intended as a guide to the functionality that will\nbe included with the 1.3 launch. By the end of it, you will be able to:\n\n- Create Tensors with named dimensions, as well as remove or rename those\n  dimensions\n- Understand the basics of how operations propagate dimension names\n- See how naming dimensions enables clearer code in two key areas:\n  - Broadcasting operations\n  - Flattening and unflattening dimensions\n\nFinally, we'll put this into practice by writing a multi-head attention module\nusing named tensors.\n\nNamed tensors in PyTorch are inspired by and done in collaboration with\n`Sasha Rush <https://tech.cornell.edu/people/alexander-rush/>`_.\nSasha proposed the original idea and proof of concept in his\n`January 2019 blog post <http://nlp.seas.harvard.edu/NamedTensor>`_.\n\nBasics: named dimensions\n========================\n\nPyTorch now allows Tensors to have named dimensions; factory functions\ntake a new `names` argument that associates a name with each dimension.\nThis works with most factory functions, such as\n\n- `tensor`\n- `empty`\n- `ones`\n- `zeros`\n- `randn`\n- `rand`\n\nHere we construct a tensor with names:\n\n"
+"\n(experimental) Introduction to Named Tensors in PyTorch\n*******************************************************\n**Author**: `Richard Zou <https://github.com/zou3519>`_\n\nNamed Tensors aim to make tensors easier to use by allowing users to associate\nexplicit names with tensor dimensions. In most cases, operations that take\ndimension parameters will accept dimension names, avoiding the need to track\ndimensions by position. In addition, named tensors use names to automatically\ncheck that APIs are being used correctly at runtime, providing extra safety.\nNames can also be used to rearrange dimensions, for example, to support\n\"broadcasting by name\" rather than \"broadcasting by position\".\n\nThis tutorial is intended as a guide to the functionality that will\nbe included with the 1.3 launch. By the end of it, you will be able to:\n\n- Create Tensors with named dimensions, as well as remove or rename those\n  dimensions\n- Understand the basics of how operations propagate dimension names\n- See how naming dimensions enables clearer code in two key areas:\n  - Broadcasting operations\n  - Flattening and unflattening dimensions\n\nFinally, we'll put this into practice by writing a multi-head attention module\nusing named tensors.\n\nNamed tensors in PyTorch are inspired by and done in collaboration with\n`Sasha Rush <https://tech.cornell.edu/people/alexander-rush/>`_.\nSasha proposed the original idea and proof of concept in his\n`January 2019 blog post <http://nlp.seas.harvard.edu/NamedTensor>`_.\n\nBasics: named dimensions\n========================\n\nPyTorch now allows Tensors to have named dimensions; factory functions\ntake a new `names` argument that associates a name with each dimension.\nThis works with most factory functions, such as\n\n- `tensor`\n- `empty`\n- `ones`\n- `zeros`\n- `randn`\n- `rand`\n\nHere we construct a tensor with names:\n"
 ]
 },
 {
@@ -503,7 +503,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.7"
+"version": "3.7.4"
 }
 },
 "nbformat": 4,

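The intro edited above states that factory functions take a `names` argument that associates a name with each dimension. A short sketch of the basics it lists, assuming PyTorch 1.3+ with the experimental named-tensor API (the dimension names are illustrative):

    import torch

    # Factory functions accept a `names` argument (experimental in 1.3).
    imgs = torch.randn(1, 2, 2, 3, names=('N', 'C', 'H', 'W'))
    print(imgs.names)            # ('N', 'C', 'H', 'W')

    # Rename one dimension, or drop all names.
    renamed = imgs.rename(C='channels')
    unnamed = imgs.rename(None)

    # Operations that take a dim parameter accept the name instead
    # of the position.
    per_channel = imgs.sum('C')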
docs/_downloads/13b143c2380f4768d9432d808ad50799/char_rnn_classification_tutorial.ipynb

2 additions, 2 deletions

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n\uae30\ucd08\ubd80\ud130 \uc2dc\uc791\ud558\ub294 NLP: \ubb38\uc790-\ub2e8\uc704 RNN\uc73c\ub85c \uc774\ub984 \ubd84\ub958\ud558\uae30\n********************************************************************************\n**Author**: `Sean Robertson <https://github.com/spro/practical-pytorch>`_\n  **\ubc88\uc5ed**: `\ud669\uc131\uc218 <https://github.com/adonisues>`_\n\n\n\ub2e8\uc5b4\ub97c \ubd84\ub958\ud558\uae30 \uc704\ud574 \uae30\ucd08\uc801\uc778 \ubb38\uc790-\ub2e8\uc704 RNN\uc744 \uad6c\ucd95\ud558\uace0 \ud559\uc2b5 \ud560 \uc608\uc815\uc785\ub2c8\ub2e4.\n\uc774 \ud29c\ud1a0\ub9ac\uc5bc\uc5d0\uc11c\ub294 (\uc774\ud6c4 2\uac1c \ud29c\ud1a0\ub9ac\uc5bc\uacfc \ud568\uaed8) NLP \ubaa8\ub378\ub9c1\uc744 \uc704\ud55c \ub370\uc774\ud130 \uc804\ucc98\ub9ac\ub97c \n`torchtext` \uc758 \ud3b8\ub9ac\ud55c \ub9ce\uc740 \uae30\ub2a5\ub4e4\uc744 \uc0ac\uc6a9\ud558\uc9c0 \uc54a\uace0 \uc5b4\ub5bb\uac8c \ud558\ub294\uc9c0 \"\uae30\ucd08\ubd80\ud130(from scratch)\" \n\ubcf4\uc5ec\uc8fc\uae30 \ub584\ubb38\uc5d0 NLP \ubaa8\ub378\ub9c1\uc744 \uc704\ud55c \uc804\ucc98\ub9ac\uac00 \uc800\uc218\uc900\uc5d0\uc11c \uc5b4\ub5bb\uac8c \uc9c4\ud589\ub418\ub294\uc9c0\ub97c \uc54c \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\ubb38\uc790-\ub2e8\uc704 RNN\uc740 \ub2e8\uc5b4\ub97c \ubb38\uc790\uc758 \uc5f0\uc18d\uc73c\ub85c \uc77d\uc5b4 \ub4e4\uc5ec\uc11c \uac01 \ub2e8\uacc4\uc758 \uc608\uce21\uacfc \n\"\uc740\ub2c9 \uc0c1\ud0dc(Hidden State)\" \ucd9c\ub825\ud558\uace0, \ub2e4\uc74c \ub2e8\uacc4\uc5d0 \uc774\uc804 \uc740\ub2c9 \uc0c1\ud0dc\ub97c \uc804\ub2ec\ud569\ub2c8\ub2e4. \n\ub2e8\uc5b4\uac00 \uc18d\ud55c \ud074\ub798\uc2a4\ub85c \ucd9c\ub825\uc774 \ub418\ub3c4\ub85d \ucd5c\uc885 \uc608\uce21\uc73c\ub85c \uc120\ud0dd\ud569\ub2c8\ub2e4.\n\n\uad6c\uccb4\uc801\uc73c\ub85c, 18\uac1c \uc5b8\uc5b4\ub85c \ub41c \uc218\ucc9c \uac1c\uc758 \uc131(\u59d3)\uc744 \ud6c8\ub828\uc2dc\ud0a4\uace0, \n\ucca0\uc790\uc5d0 \ub530\ub77c \uc774\ub984\uc774 \uc5b4\ub5a4 \uc5b8\uc5b4\uc778\uc9c0 \uc608\uce21\ud569\ub2c8\ub2e4:\n\n::\n\n    $ python predict.py Hinton\n    (-0.47) Scottish\n    (-1.52) English\n    (-3.57) Irish\n\n    $ python predict.py Schmidhuber\n    (-0.19) German\n    (-2.48) Czech\n    (-2.68) Dutch\n\n\n**\ucd94\ucc9c \uc790\ub8cc:**\n\nPytorch\ub97c \uc124\uce58\ud588\uace0, Python\uc744 \uc54c\uace0, Tensor\ub97c \uc774\ud574\ud55c\ub2e4\uace0 \uac00\uc815\ud569\ub2c8\ub2e4:\n\n- https://pytorch.org/ \uc124\uce58 \uc548\ub0b4\n- :doc:`/beginner/deep_learning_60min_blitz` PyTorch \uc2dc\uc791\ud558\uae30\n- :doc:`/beginner/pytorch_with_examples` \ub113\uace0 \uae4a\uc740 \ud1b5\ucc30\uc744 \uc704\ud55c \uc790\ub8cc\n- :doc:`/beginner/former_torchies_tutorial` \uc774\uc804 Lua Torch \uc0ac\uc6a9\uc790\ub97c \uc704\ud55c \uc790\ub8cc\n\nRNN\uacfc \uc791\ub3d9 \ubc29\uc2dd\uc744 \uc544\ub294 \uac83 \ub610\ud55c \uc720\uc6a9\ud569\ub2c8\ub2e4:\n\n- `The Unreasonable Effectiveness of Recurrent Neural\n   Networks <https://karpathy.github.io/2015/05/21/rnn-effectiveness/>`__\n   \uc2e4\uc0dd\ud65c \uc608\uc81c\ub97c \ubcf4\uc5ec \uc90d\ub2c8\ub2e4.\n- `Understanding LSTM\n   Networks <https://colah.github.io/posts/2015-08-Understanding-LSTMs/>`__\n   LSTM\uc5d0 \uad00\ud55c \uac83\uc774\uc9c0\ub9cc RNN\uc5d0 \uad00\ud574\uc11c\ub3c4 \uc720\uc775\ud569\ub2c8\ub2e4.\n\n\ub370\uc774\ud130 \uc900\ube44\n==================\n\n.. NOTE::\n   `\uc5ec\uae30 <https://download.pytorch.org/tutorial/data.zip>`__ \uc5d0\uc11c \ub370\uc774\ud130\ub97c \ub2e4\uc6b4 \ubc1b\uace0,\n   \ud604\uc7ac \ub514\ub809\ud1a0\ub9ac\uc5d0 \uc555\ucd95\uc744 \ud478\uc2ed\uc2dc\uc624.\n\n``data/names`` \ub514\ub809\ud1a0\ub9ac\uc5d0\ub294 \"[Language].txt\" \ub77c\ub294 18 \uac1c\uc758 \ud14d\uc2a4\ud2b8 \ud30c\uc77c\uc774 \uc788\uc2b5\ub2c8\ub2e4.\n\uac01 \ud30c\uc77c\uc5d0\ub294 \ud55c \uc904\uc5d0 \ud558\ub098\uc758 \uc774\ub984\uc774 \ud3ec\ud568\ub418\uc5b4 \uc788\uc73c\uba70 \ub300\ubd80\ubd84 \ub85c\ub9c8\uc790\ub85c \ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4\n(\uadf8\ub7ec\ub098, \uc720\ub2c8\ucf54\ub4dc\uc5d0\uc11c ASCII\ub85c \ubcc0\ud658\ud574\uc57c \ud568).\n\n\uac01 \uc5b8\uc5b4 \ubcc4\ub85c \uc774\ub984 \ubaa9\ub85d \uc0ac\uc804 ``{language: [names ...]}`` \uc744 \ub9cc\ub4ed\ub2c8\ub2e4. \n\uc77c\ubc18 \ubcc0\uc218 \"category\" \uc640 \"line\" (\uc6b0\ub9ac\uc758 \uacbd\uc6b0 \uc5b8\uc5b4\uc640 \uc774\ub984)\uc740 \uc774\ud6c4\uc758 \ud655\uc7a5\uc131\uc744 \uc704\ud574 \uc0ac\uc6a9\ub429\ub2c8\ub2e4.\n\n.. NOTE::\n\uc5ed\uc790 \uc8fc: \"line\" \uc5d0 \uc785\ub825\uc744 \"category\"\uc5d0 \ud074\ub798\uc2a4\ub97c \uc801\uc6a9\ud558\uc5ec \ub2e4\ub978 \ubb38\uc81c\uc5d0\ub3c4 \ud65c\uc6a9 \ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\uc5ec\uae30\uc11c\ub294 \"line\"\uc5d0 \uc774\ub984(ex. Robert )\ub97c \uc785\ub825\uc73c\ub85c \"category\"\uc5d0 \ud074\ub798\uc2a4(ex. english)\ub85c \uc0ac\uc6a9\ud569\ub2c8\ub2e4.\n\n"
+"\n\uae30\ucd08\ubd80\ud130 \uc2dc\uc791\ud558\ub294 NLP: \ubb38\uc790-\ub2e8\uc704 RNN\uc73c\ub85c \uc774\ub984 \ubd84\ub958\ud558\uae30\n********************************************************************************\n**Author**: `Sean Robertson <https://github.com/spro/practical-pytorch>`_\n  **\ubc88\uc5ed**: `\ud669\uc131\uc218 <https://github.com/adonisues>`_\n\n\n\ub2e8\uc5b4\ub97c \ubd84\ub958\ud558\uae30 \uc704\ud574 \uae30\ucd08\uc801\uc778 \ubb38\uc790-\ub2e8\uc704 RNN\uc744 \uad6c\ucd95\ud558\uace0 \ud559\uc2b5 \ud560 \uc608\uc815\uc785\ub2c8\ub2e4.\n\uc774 \ud29c\ud1a0\ub9ac\uc5bc\uc5d0\uc11c\ub294 (\uc774\ud6c4 2\uac1c \ud29c\ud1a0\ub9ac\uc5bc\uacfc \ud568\uaed8) NLP \ubaa8\ub378\ub9c1\uc744 \uc704\ud55c \ub370\uc774\ud130 \uc804\ucc98\ub9ac\ub97c \n`torchtext` \uc758 \ud3b8\ub9ac\ud55c \ub9ce\uc740 \uae30\ub2a5\ub4e4\uc744 \uc0ac\uc6a9\ud558\uc9c0 \uc54a\uace0 \uc5b4\ub5bb\uac8c \ud558\ub294\uc9c0 \"\uae30\ucd08\ubd80\ud130(from scratch)\" \n\ubcf4\uc5ec\uc8fc\uae30 \ub584\ubb38\uc5d0 NLP \ubaa8\ub378\ub9c1\uc744 \uc704\ud55c \uc804\ucc98\ub9ac\uac00 \uc800\uc218\uc900\uc5d0\uc11c \uc5b4\ub5bb\uac8c \uc9c4\ud589\ub418\ub294\uc9c0\ub97c \uc54c \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\ubb38\uc790-\ub2e8\uc704 RNN\uc740 \ub2e8\uc5b4\ub97c \ubb38\uc790\uc758 \uc5f0\uc18d\uc73c\ub85c \uc77d\uc5b4 \ub4e4\uc5ec\uc11c \uac01 \ub2e8\uacc4\uc758 \uc608\uce21\uacfc \n\"\uc740\ub2c9 \uc0c1\ud0dc(Hidden State)\" \ucd9c\ub825\ud558\uace0, \ub2e4\uc74c \ub2e8\uacc4\uc5d0 \uc774\uc804 \uc740\ub2c9 \uc0c1\ud0dc\ub97c \uc804\ub2ec\ud569\ub2c8\ub2e4. \n\ub2e8\uc5b4\uac00 \uc18d\ud55c \ud074\ub798\uc2a4\ub85c \ucd9c\ub825\uc774 \ub418\ub3c4\ub85d \ucd5c\uc885 \uc608\uce21\uc73c\ub85c \uc120\ud0dd\ud569\ub2c8\ub2e4.\n\n\uad6c\uccb4\uc801\uc73c\ub85c, 18\uac1c \uc5b8\uc5b4\ub85c \ub41c \uc218\ucc9c \uac1c\uc758 \uc131(\u59d3)\uc744 \ud6c8\ub828\uc2dc\ud0a4\uace0, \n\ucca0\uc790\uc5d0 \ub530\ub77c \uc774\ub984\uc774 \uc5b4\ub5a4 \uc5b8\uc5b4\uc778\uc9c0 \uc608\uce21\ud569\ub2c8\ub2e4:\n\n::\n\n    $ python predict.py Hinton\n    (-0.47) Scottish\n    (-1.52) English\n    (-3.57) Irish\n\n    $ python predict.py Schmidhuber\n    (-0.19) German\n    (-2.48) Czech\n    (-2.68) Dutch\n\n\n**\ucd94\ucc9c \uc790\ub8cc:**\n\nPytorch\ub97c \uc124\uce58\ud588\uace0, Python\uc744 \uc54c\uace0, Tensor\ub97c \uc774\ud574\ud55c\ub2e4\uace0 \uac00\uc815\ud569\ub2c8\ub2e4:\n\n- https://pytorch.org/ \uc124\uce58 \uc548\ub0b4\n- :doc:`/beginner/deep_learning_60min_blitz` PyTorch \uc2dc\uc791\ud558\uae30\n- :doc:`/beginner/pytorch_with_examples` \ub113\uace0 \uae4a\uc740 \ud1b5\ucc30\uc744 \uc704\ud55c \uc790\ub8cc\n- :doc:`/beginner/former_torchies_tutorial` \uc774\uc804 Lua Torch \uc0ac\uc6a9\uc790\ub97c \uc704\ud55c \uc790\ub8cc\n\nRNN\uacfc \uc791\ub3d9 \ubc29\uc2dd\uc744 \uc544\ub294 \uac83 \ub610\ud55c \uc720\uc6a9\ud569\ub2c8\ub2e4:\n\n- `The Unreasonable Effectiveness of Recurrent Neural\n   Networks <https://karpathy.github.io/2015/05/21/rnn-effectiveness/>`__\n   \uc2e4\uc0dd\ud65c \uc608\uc81c\ub97c \ubcf4\uc5ec \uc90d\ub2c8\ub2e4.\n- `Understanding LSTM\n   Networks <https://colah.github.io/posts/2015-08-Understanding-LSTMs/>`__\n   LSTM\uc5d0 \uad00\ud55c \uac83\uc774\uc9c0\ub9cc RNN\uc5d0 \uad00\ud574\uc11c\ub3c4 \uc720\uc775\ud569\ub2c8\ub2e4.\n\n\ub370\uc774\ud130 \uc900\ube44\n==================\n\n.. NOTE::\n   `\uc5ec\uae30 <https://download.pytorch.org/tutorial/data.zip>`__ \uc5d0\uc11c \ub370\uc774\ud130\ub97c \ub2e4\uc6b4 \ubc1b\uace0,\n   \ud604\uc7ac \ub514\ub809\ud1a0\ub9ac\uc5d0 \uc555\ucd95\uc744 \ud478\uc2ed\uc2dc\uc624.\n\n``data/names`` \ub514\ub809\ud1a0\ub9ac\uc5d0\ub294 \"[Language].txt\" \ub77c\ub294 18 \uac1c\uc758 \ud14d\uc2a4\ud2b8 \ud30c\uc77c\uc774 \uc788\uc2b5\ub2c8\ub2e4.\n\uac01 \ud30c\uc77c\uc5d0\ub294 \ud55c \uc904\uc5d0 \ud558\ub098\uc758 \uc774\ub984\uc774 \ud3ec\ud568\ub418\uc5b4 \uc788\uc73c\uba70 \ub300\ubd80\ubd84 \ub85c\ub9c8\uc790\ub85c \ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4\n(\uadf8\ub7ec\ub098, \uc720\ub2c8\ucf54\ub4dc\uc5d0\uc11c ASCII\ub85c \ubcc0\ud658\ud574\uc57c \ud568).\n\n\uac01 \uc5b8\uc5b4 \ubcc4\ub85c \uc774\ub984 \ubaa9\ub85d \uc0ac\uc804 ``{language: [names ...]}`` \uc744 \ub9cc\ub4ed\ub2c8\ub2e4. \n\uc77c\ubc18 \ubcc0\uc218 \"category\" \uc640 \"line\" (\uc6b0\ub9ac\uc758 \uacbd\uc6b0 \uc5b8\uc5b4\uc640 \uc774\ub984)\uc740 \uc774\ud6c4\uc758 \ud655\uc7a5\uc131\uc744 \uc704\ud574 \uc0ac\uc6a9\ub429\ub2c8\ub2e4.\n\n.. NOTE::\n\uc5ed\uc790 \uc8fc: \"line\" \uc5d0 \uc785\ub825\uc744 \"category\"\uc5d0 \ud074\ub798\uc2a4\ub97c \uc801\uc6a9\ud558\uc5ec \ub2e4\ub978 \ubb38\uc81c\uc5d0\ub3c4 \ud65c\uc6a9 \ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\uc5ec\uae30\uc11c\ub294 \"line\"\uc5d0 \uc774\ub984(ex. Robert )\ub97c \uc785\ub825\uc73c\ub85c \"category\"\uc5d0 \ud074\ub798\uc2a4(ex. english)\ub85c \uc0ac\uc6a9\ud569\ub2c8\ub2e4.\n"
 ]
 },
 {
@@ -308,7 +308,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.7"
+"version": "3.7.4"
 }
 },
 "nbformat": 4,

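The tutorial intro in this hunk walks through the data preparation: 18 "[Language].txt" files under ``data/names``, Unicode-to-ASCII conversion, and a ``{language: [names ...]}`` dictionary keyed by category. A minimal sketch of those steps, assuming the linked data.zip has been unpacked into the current directory (the helper names are illustrative, not from this commit):

    import glob
    import os
    import string
    import unicodedata

    ALL_LETTERS = string.ascii_letters + " .,;'"

    def unicode_to_ascii(s):
        # Decompose accented characters and keep only plain ASCII letters.
        return ''.join(c for c in unicodedata.normalize('NFD', s)
                       if unicodedata.category(c) != 'Mn' and c in ALL_LETTERS)

    # Build the {language: [names ...]} dictionary described above,
    # one entry per data/names/[Language].txt file.
    category_lines = {}
    for filename in glob.glob('data/names/*.txt'):
        category = os.path.splitext(os.path.basename(filename))[0]
        with open(filename, encoding='utf-8') as f:
            category_lines[category] = [unicode_to_ascii(line.strip()) for line in f]

    print(len(category_lines))  # 18 language categories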
docs/_downloads/1b58d206e701317cf46c92dcf2a8978d/parallelism_tutorial.ipynb

2 additions, 2 deletions

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n\uba40\ud2f0-GPU \uc608\uc81c\n==================\n\n\ub370\uc774\ud130 \ubcd1\ub82c \ucc98\ub9ac(Data Parallelism)\ub294 \ubbf8\ub2c8-\ubc30\uce58\ub97c \uc5ec\ub7ec \uac1c\uc758 \ub354 \uc791\uc740 \ubbf8\ub2c8-\ubc30\uce58\ub85c\n\uc790\ub974\uace0 \uac01\uac01\uc758 \uc791\uc740 \ubbf8\ub2c8\ubc30\uce58\ub97c \ubcd1\ub82c\uc801\uc73c\ub85c \uc5f0\uc0b0\ud558\ub294 \uac83\uc785\ub2c8\ub2e4.\n\n\ub370\uc774\ud130 \ubcd1\ub82c \ucc98\ub9ac\ub294 ``torch.nn.DataParallel`` \uc744 \uc0ac\uc6a9\ud558\uc5ec \uad6c\ud604\ud569\ub2c8\ub2e4.\n``DataParallel`` \ub85c \uac10\uc300 \uc218 \uc788\ub294 \ubaa8\ub4c8\uc740 \ubc30\uce58 \ucc28\uc6d0(batch dimension)\uc5d0\uc11c\n\uc5ec\ub7ec GPU\ub85c \ubcd1\ub82c \ucc98\ub9ac\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\nDataParallel\n-------------\n\n"
+"\n\uba40\ud2f0-GPU \uc608\uc81c\n==================\n\n\ub370\uc774\ud130 \ubcd1\ub82c \ucc98\ub9ac(Data Parallelism)\ub294 \ubbf8\ub2c8-\ubc30\uce58\ub97c \uc5ec\ub7ec \uac1c\uc758 \ub354 \uc791\uc740 \ubbf8\ub2c8-\ubc30\uce58\ub85c\n\uc790\ub974\uace0 \uac01\uac01\uc758 \uc791\uc740 \ubbf8\ub2c8\ubc30\uce58\ub97c \ubcd1\ub82c\uc801\uc73c\ub85c \uc5f0\uc0b0\ud558\ub294 \uac83\uc785\ub2c8\ub2e4.\n\n\ub370\uc774\ud130 \ubcd1\ub82c \ucc98\ub9ac\ub294 ``torch.nn.DataParallel`` \uc744 \uc0ac\uc6a9\ud558\uc5ec \uad6c\ud604\ud569\ub2c8\ub2e4.\n``DataParallel`` \ub85c \uac10\uc300 \uc218 \uc788\ub294 \ubaa8\ub4c8\uc740 \ubc30\uce58 \ucc28\uc6d0(batch dimension)\uc5d0\uc11c\n\uc5ec\ub7ec GPU\ub85c \ubcd1\ub82c \ucc98\ub9ac\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\nDataParallel\n-------------\n"
 ]
 },
 {
@@ -107,7 +107,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.7"
+"version": "3.7.4"
 }
 },
 "nbformat": 4,

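This hunk touches the multi-GPU tutorial's intro, which says a module wrapped in ``torch.nn.DataParallel`` is replicated and each mini-batch is split along the batch dimension across GPUs. A minimal sketch of that usage, assuming at least two CUDA devices (the layer and batch sizes are illustrative):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 5)
    if torch.cuda.device_count() > 1:
        # Replicates the module to every visible GPU; each replica
        # consumes a slice of the mini-batch along dimension 0.
        model = nn.DataParallel(model)
    model.to('cuda:0')

    inp = torch.randn(32, 10).to('cuda:0')
    out = model(inp)  # outputs are gathered back on cuda:0, shape (32, 5)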