Commit 06e3615

Rebuild
1 parent a04ca8d

File tree

12 files changed, +324 -337 lines changed

docs/_downloads/9b89023ea3fb5bf2511a9c08a4311cce/saving_multiple_models_in_one_file.ipynb

Lines changed: 20 additions & 9 deletions
@@ -15,14 +15,14 @@
1515
"cell_type": "markdown",
1616
"metadata": {},
1717
"source": [
18-
"\nSaving and loading multiple models in one file using PyTorch\n============================================================\nSaving and loading multiple models can be helpful for reusing models\nthat you have previously trained.\n\nIntroduction\n------------\nWhen saving a model comprised of multiple ``torch.nn.Modules``, such as\na GAN, a sequence-to-sequence model, or an ensemble of models, you must\nsave a dictionary of each model\u2019s state_dict and corresponding\noptimizer. You can also save any other items that may aid you in\nresuming training by simply appending them to the dictionary.\nTo load the models, first initialize the models and optimizers, then\nload the dictionary locally using ``torch.load()``. From here, you can\neasily access the saved items by simply querying the dictionary as you\nwould expect.\nIn this recipe, we will demonstrate how to save multiple models to one\nfile using PyTorch.\n\nSetup\n-----\nBefore we begin, we need to install ``torch`` if it isn\u2019t already\navailable.\n\n::\n\n pip install torch\n \n\n"
18+
"\nSaving and loading multiple models in one file using PyTorch\n============================================================\nSaving and loading multiple models can be helpful for reusing models\nthat you have previously trained.\n\nIntroduction\n------------\nWhen saving a model comprised of multiple ``torch.nn.Modules``, such as\na GAN, a sequence-to-sequence model, or an ensemble of models, you must\nsave a dictionary of each model's state_dict and the corresponding\noptimizer. You can also append any other items needed to resume training\nto the same dictionary.\nTo load the models, first initialize the models and optimizers, then\nload the dictionary locally using ``torch.load()``. From here, you can\naccess the saved items by querying the dictionary as you would expect.\nIn this recipe, we will demonstrate how to save and load multiple models\nin one file using PyTorch.\n\nSetup\n-----\nBefore we begin, we need to install ``torch`` if it isn't already\navailable.\n\n::\n\n   pip install torch\n\n\n"
1919
]
2020
},
2121
{
2222
"cell_type": "markdown",
2323
"metadata": {},
2424
"source": [
25-
"Steps\n-----\n\n1. Import all necessary libraries for loading our data\n2. Define and intialize the neural network\n3. Initialize the optimizer\n4. Save multiple models\n5. Load multiple models\n\n1. Import necessary libraries for loading our data\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``\nand ``torch.optim``.\n\n\n"
25+
"Steps\n-----\n\n1. Import all necessary libraries for loading our data\n2. Define and initialize the neural network\n3. Initialize the optimizer\n4. Save multiple models\n5. Load multiple models\n\n1. Import necessary libraries for loading our data\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``\nand ``torch.optim``.\n\n\n"
2626
]
2727
},
2828
{
@@ -40,7 +40,7 @@
4040
"cell_type": "markdown",
4141
"metadata": {},
4242
"source": [
43-
"2. Define and intialize the neural network\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor sake of example, we will create a neural network for training\nimages. To learn more see the Defining a Neural Network recipe. Build\ntwo variables for the models to eventually save.\n\n\n"
43+
"2. Define and initialize the neural network\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor the sake of example, we will create a neural network for training\nimages. To learn more, see the Defining a Neural Network recipe. Build\ntwo variables for the models to eventually save.\n\n\n"
4444
]
4545
},
4646
{
@@ -58,7 +58,7 @@
5858
"cell_type": "markdown",
5959
"metadata": {},
6060
"source": [
61-
"3. Initialize the optimizer\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWe will use SGD with momentum to build an optimizer for each model we\ncreated.\n\n\n"
61+
"3. Initialize the optimizer\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWe will use SGD with momentum to build an optimizer for each model we\ncreated.\n\n\n"
6262
]
6363
},
6464
{
@@ -76,7 +76,7 @@
7676
"cell_type": "markdown",
7777
"metadata": {},
7878
"source": [
79-
"4. Save multiple models\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nCollect all relevant information and build your dictionary.\n\n\n"
79+
"4. Save multiple models\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nCollect all relevant information and build your dictionary.\n\n\n"
8080
]
8181
},
8282
{
@@ -87,14 +87,14 @@
8787
},
8888
"outputs": [],
8989
"source": [
90-
"# Specify a path to save to\nPATH = \"model.pt\"\n\ntorch.save({\n 'modelA_state_dict': netA.state_dict(),\n 'modelB_state_dict': netB.state_dict(),\n 'optimizerA_state_dict': optimizerA.state_dict(),\n 'optimizerB_state_dict': optimizerB.state_dict(),\n }, PATH)"
90+
"# Specify a path to save to\nPATH = \"model.pt\"\n\ntorch.save({\n              'modelA_state_dict': netA.state_dict(),\n              'modelB_state_dict': netB.state_dict(),\n              'optimizerA_state_dict': optimizerA.state_dict(),\n              'optimizerB_state_dict': optimizerB.state_dict(),\n              }, PATH)"
9191
]
9292
},
9393
{
9494
"cell_type": "markdown",
9595
"metadata": {},
9696
"source": [
97-
"4. Load multiple models\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRemember to first initialize the models and optimizers, then load the\ndictionary locally.\n\n\n"
97+
"5. Load multiple models\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRemember to first initialize the models and optimizers, then load the\ndictionary locally.\n\n\n"
9898
]
9999
},
100100
{
@@ -105,14 +105,25 @@
105105
},
106106
"outputs": [],
107107
"source": [
108-
"modelA = Net()\nmodelB = Net()\noptimModelA = optim.SGD(modelA.parameters(), lr=0.001, momentum=0.9)\noptimModelB = optim.SGD(modelB.parameters(), lr=0.001, momentum=0.9)\n\ncheckpoint = torch.load(PATH)\nmodelA.load_state_dict(checkpoint['modelA_state_dict'])\nmodelB.load_state_dict(checkpoint['modelB_state_dict'])\noptimizerA.load_state_dict(checkpoint['optimizerA_state_dict'])\noptimizerB.load_state_dict(checkpoint['optimizerB_state_dict'])\n\nmodelA.eval()\nmodelB.eval()\n# - or -\nmodelA.train()\nmodelB.train()"
108+
"modelA = Net()\nmodelB = Net()\noptimizerA = optim.SGD(modelA.parameters(), lr=0.001, momentum=0.9)\noptimizerB = optim.SGD(modelB.parameters(), lr=0.001, momentum=0.9)\n\ncheckpoint = torch.load(PATH)\nmodelA.load_state_dict(checkpoint['modelA_state_dict'])\nmodelB.load_state_dict(checkpoint['modelB_state_dict'])\noptimizerA.load_state_dict(checkpoint['optimizerA_state_dict'])\noptimizerB.load_state_dict(checkpoint['optimizerB_state_dict'])\n\nmodelA.eval()\nmodelB.eval()\n# - or -\nmodelA.train()\nmodelB.train()"
109109
]
110110
},
111111
{
112112
"cell_type": "markdown",
113113
"metadata": {},
114114
"source": [
115-
"You must call ``model.eval()`` to set dropout and batch normalization\nlayers to evaluation mode before running inference. Failing to do this\nwill yield inconsistent inference results.\n\nIf you wish to resuming training, call ``model.train()`` to ensure these\nlayers are in training mode.\n\nCongratulations! You have successfully saved and loaded multiple models\nin PyTorch.\n\nLearn More\n----------\n\nTake a look at these other recipes to continue your learning:\n\n- TBD\n- TBD\n\n\n"
115+
"You must call ``model.eval()`` to set dropout and batch normalization\nlayers to evaluation mode before running inference. Failing to do this\nwill yield inconsistent inference results.\n\nIf you wish to resume training, call ``model.train()`` to ensure these\nlayers are in training mode.\n\nCongratulations! You have successfully saved and loaded multiple models\nin PyTorch.\n\nLearn More\n----------\n\nTake a look at these other recipes to continue your learning:\n\n- :doc:`/recipes/recipes/saving_and_loading_a_general_checkpoint`\n- :doc:`/recipes/recipes/saving_multiple_models_in_one_file`\n\n"
116+
]
117+
},
118+
{
119+
"cell_type": "code",
120+
"execution_count": null,
121+
"metadata": {
122+
"collapsed": false
123+
},
124+
"outputs": [],
125+
"source": [
126+
"#"
116127
]
117128
}
118129
],
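The checkpoint this notebook builds is just one Python dictionary with a key per state_dict, all written to a single file. A minimal torch-free sketch of that pattern (plain dicts stand in for real state_dicts, and pickle stands in for ``torch.save``/``torch.load``, which serialize via pickle under the hood; the file name mirrors the recipe but is otherwise arbitrary):

```python
import os
import pickle
import tempfile

# Plain dicts stand in for the state_dicts of two models and their optimizers.
checkpoint = {
    'modelA_state_dict': {'fc.weight': [0.1, 0.2]},
    'modelB_state_dict': {'fc.weight': [0.3, 0.4]},
    'optimizerA_state_dict': {'lr': 0.001, 'momentum': 0.9},
    'optimizerB_state_dict': {'lr': 0.001, 'momentum': 0.9},
}

# Save everything to one file, as torch.save(checkpoint, PATH) would.
path = os.path.join(tempfile.mkdtemp(), 'model.pt')
with open(path, 'wb') as f:
    pickle.dump(checkpoint, f)

# Load the single file back and query the dictionary by key,
# as the recipe does with torch.load(PATH).
with open(path, 'rb') as f:
    restored = pickle.load(f)
```

Every saved item comes back under the key it was stored with, which is why the load step can hand `checkpoint['modelA_state_dict']` straight to `load_state_dict`.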
Lines changed: 66 additions & 75 deletions
@@ -1,66 +1,60 @@
11
"""
2-
Saving and loading multiple models in one file using PyTorch
2+
Saving and loading multiple models in one file using PyTorch
33
============================================================
4-
Saving and loading multiple models can be helpful for reusing models
5-
that you have previously trained.
4+
Saving and loading multiple models can be helpful for reusing models that you have previously trained.
65
7-
Introduction
6+
Introduction
87
------------
9-
When saving a model comprised of multiple ``torch.nn.Modules``, such as
10-
a GAN, a sequence-to-sequence model, or an ensemble of models, you must
11-
save a dictionary of each modelโ€™s state_dict and corresponding
12-
optimizer. You can also save any other items that may aid you in
13-
resuming training by simply appending them to the dictionary.
14-
To load the models, first initialize the models and optimizers, then
15-
load the dictionary locally using ``torch.load()``. From here, you can
16-
easily access the saved items by simply querying the dictionary as you
17-
would expect.
18-
In this recipe, we will demonstrate how to save multiple models to one
19-
file using PyTorch.
20-
21-
Setup
22-
-----
23-
Before we begin, we need to install ``torch`` if it isnโ€™t already
24-
available.
8+
When saving a model comprised of multiple ``torch.nn.Modules``, such as
9+
a GAN, a sequence-to-sequence model, or an ensemble of models, you must
10+
save a dictionary of each model's state_dict and the corresponding
11+
optimizer. You can also append any other items needed to resume training
12+
to the same dictionary. To load the models, first initialize the models
13+
and optimizers, then load the dictionary locally with ``torch.load()``;
14+
from there, query the dictionary to access the saved items. In this recipe,
15+
we will demonstrate how to save and load multiple models in one file using PyTorch.
16+
17+
Setup
18+
-----
19+
Before we begin, we need to install ``torch`` if it isn't already available.
2520
2621
::
2722
2823
pip install torch
29-
24+
3025
"""
3126

3227

3328

3429
######################################################################
35-
# Steps
36-
# -----
37-
#
38-
# 1. Import all necessary libraries for loading our data
39-
# 2. Define and intialize the neural network
40-
# 3. Initialize the optimizer
41-
# 4. Save multiple models
42-
# 5. Load multiple models
43-
#
44-
# 1. Import necessary libraries for loading our data
30+
# Steps
31+
# -----
32+
#
33+
# 1. Import all necessary libraries for loading our data
34+
# 2. Define and initialize the neural network
35+
# 3. Initialize the optimizer
36+
# 4. Save multiple models
37+
# 5. Load multiple models
38+
#
39+
# 1. Import necessary libraries for loading our data
4540
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
46-
#
47-
# For this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``
48-
# and ``torch.optim``.
49-
#
41+
#
42+
# For this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``
43+
# and ``torch.optim``.
44+
#
5045

5146
import torch
5247
import torch.nn as nn
5348
import torch.optim as optim
5449

5550

5651
######################################################################
57-
# 2. Define and intialize the neural network
52+
# 2. Define and initialize the neural network
5853
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
59-
#
60-
# For sake of example, we will create a neural network for training
61-
# images. To learn more see the Defining a Neural Network recipe. Build
62-
# two variables for the models to eventually save.
63-
#
54+
#
55+
# For the sake of example, we will create a neural network for training images.
56+
# To learn more, see the Defining a Neural Network recipe. Build two variables for the models to eventually save.
57+
#
6458

6559
class Net(nn.Module):
6660
def __init__(self):
@@ -86,25 +80,24 @@ def forward(self, x):
8680

8781

8882
######################################################################
89-
# 3. Initialize the optimizer
83+
# 3. Initialize the optimizer
9084
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
91-
#
92-
# We will use SGD with momentum to build an optimizer for each model we
93-
# created.
94-
#
85+
#
86+
# We will use SGD with momentum to build an optimizer for each model we created.
87+
#
9588

9689
optimizerA = optim.SGD(netA.parameters(), lr=0.001, momentum=0.9)
9790
optimizerB = optim.SGD(netB.parameters(), lr=0.001, momentum=0.9)
9891

9992

10093
######################################################################
101-
# 4. Save multiple models
94+
# 4. Save multiple models
10295
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
103-
#
104-
# Collect all relevant information and build your dictionary.
105-
#
96+
#
97+
# Collect all relevant information and build your dictionary.
98+
#
10699

107-
# Specify a path to save to
100+
# Specify a path to save to
108101
PATH = "model.pt"
109102

110103
torch.save({
@@ -116,12 +109,11 @@ def forward(self, x):
116109

117110

118111
######################################################################
119-
# 4. Load multiple models
112+
# 5. Load multiple models
120113
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
121-
#
122-
# Remember to first initialize the models and optimizers, then load the
123-
# dictionary locally.
124-
#
114+
#
115+
# Remember to first initialize the models and optimizers, then load the dictionary locally.
116+
#
125117

126118
modelA = Net()
127119
modelB = Net()
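The introduction notes that any other items that help you resume training can simply be appended to the same dictionary. A small torch-free sketch of that idea; the `build_checkpoint` helper and the `'epoch'`/`'loss'` keys are illustrative, not part of the recipe, which builds the dictionary literal inline:

```python
def build_checkpoint(states, epoch, loss):
    """Bundle several state_dicts plus training progress into one dictionary.

    `states` maps a label such as 'modelA' or 'optimizerB' to a state_dict;
    each entry is stored under '<label>_state_dict', matching the recipe's
    key naming convention.
    """
    checkpoint = {'epoch': epoch, 'loss': loss}
    for label, state in states.items():
        checkpoint[label + '_state_dict'] = state
    return checkpoint

# Two models, two optimizers, plus bookkeeping, all in one dictionary.
ckpt = build_checkpoint(
    {'modelA': {}, 'modelB': {}, 'optimizerA': {}, 'optimizerB': {}},
    epoch=5, loss=0.42)
```

Passing such a dictionary to ``torch.save`` yields exactly the one-file checkpoint the recipe describes, with the extra items available by key after loading.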
@@ -136,27 +128,26 @@ def forward(self, x):
136128

137129
modelA.eval()
138130
modelB.eval()
139-
# - or -
131+
# - or -
140132
modelA.train()
141133
modelB.train()
142134

143135

144136
######################################################################
145-
# You must call ``model.eval()`` to set dropout and batch normalization
146-
# layers to evaluation mode before running inference. Failing to do this
147-
# will yield inconsistent inference results.
148-
#
149-
# If you wish to resuming training, call ``model.train()`` to ensure these
150-
# layers are in training mode.
151-
#
152-
# Congratulations! You have successfully saved and loaded multiple models
153-
# in PyTorch.
154-
#
155-
# Learn More
156-
# ----------
157-
#
158-
# Take a look at these other recipes to continue your learning:
159-
#
160-
# - TBD
161-
# - TBD
162-
#
137+
# You must call ``model.eval()`` to set dropout and batch normalization
138+
# layers to evaluation mode before running inference. Failing to do this
139+
# will yield inconsistent inference results.
140+
#
141+
# If you wish to resume training, call ``model.train()`` to ensure these
142+
# layers are in training mode.
143+
#
144+
# Congratulations! You have successfully saved and loaded multiple models in PyTorch.
145+
#
146+
# ๋” ์•Œ์•„๋ณด๊ธฐ
147+
# ------------
148+
#
149+
# ๋‹ค๋ฅธ ๋ ˆ์‹œํ”ผ๋ฅผ ๋‘˜๋Ÿฌ๋ณด๊ณ  ๊ณ„์† ๋ฐฐ์›Œ๋ณด์„ธ์š”:
150+
#
151+
# - :doc:`/recipes/recipes/saving_and_loading_a_general_checkpoint`
152+
# - :doc:`/recipes/recipes/saving_multiple_models_in_one_file`
153+
#
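The closing note about ``model.eval()`` can be made concrete without PyTorch. The toy class below is not torch code; it only mimics the train/eval flag semantics of an ``nn.Module`` containing a dropout layer, to show why skipping ``model.eval()`` makes inference nondeterministic:

```python
import random

class TinyDropout:
    """A stand-in mimicking nn.Module train()/eval() semantics for dropout.

    Real dropout randomly zeroes activations in training mode (scaling the
    survivors by 1/(1-p)) and is a pass-through in evaluation mode. That is
    why inference without model.eval() gives inconsistent results.
    """
    def __init__(self, p=0.5):
        self.p = p
        self.training = True  # modules start in training mode

    def train(self):
        self.training = True
        return self

    def eval(self):
        self.training = False
        return self

    def forward(self, xs):
        if self.training:
            # Randomly drop activations; output changes from call to call.
            return [0.0 if random.random() < self.p else x / (1 - self.p)
                    for x in xs]
        # Evaluation mode: deterministic pass-through.
        return list(xs)
```

In eval mode the output always equals the input, while in training mode repeated calls on the same input generally differ.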
