Translate recipes_index #3

Merged: 2 commits, Jul 4, 2024
2 changes: 1 addition & 1 deletion docs/.buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: ff98f6fae0a75c232c4a4aa789f50b6c
config: 35f9c8212976e13237c532396976720c
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file modified docs/.doctrees/environment.pickle
Binary file not shown.
Binary file modified docs/.doctrees/recipes/recipes_index.doctree
Binary file not shown.
Binary file modified docs/.doctrees/recipes/torchscript_inference.doctree
Binary file not shown.
134 changes: 67 additions & 67 deletions docs/_sources/recipes/recipes_index.rst.txt

Large diffs are not rendered by default.

62 changes: 29 additions & 33 deletions docs/_sources/recipes/torchscript_inference.rst.txt
@@ -48,20 +48,20 @@ TorchScript is the recommended model format for doing scaled inference with PyTorch models
r18_scripted = torch.jit.script(r18)         # *** This is the TorchScript export
dummy_input = torch.rand(1, 3, 224, 224)     # Give it a quick test
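The export above uses ``torch.jit.script``, which compiles the model's Python code, control flow included. As a hedged aside (not part of this diff), a minimal sketch of why that matters for data-dependent branches, using a hypothetical ``Gate`` module:

```python
import torch

class Gate(torch.nn.Module):
    def forward(self, x):
        # Data-dependent control flow: torch.jit.script preserves both branches,
        # whereas tracing would bake in whichever branch the example input took.
        if x.sum() > 0:
            return x * 2
        return x - 1

scripted = torch.jit.script(Gate())
print(scripted(torch.ones(2)))    # tensor([2., 2.])
print(scripted(-torch.ones(2)))   # tensor([-2., -2.])
```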

Let’s do a sanity check on the equivalence of the two models:

::

unscripted_output = r18(dummy_input)         # Get the unscripted model's prediction...
scripted_output = r18_scripted(dummy_input)  # ...and do the same for the scripted version

unscripted_top5 = F.softmax(unscripted_output, dim=1).topk(5).indices
scripted_top5 = F.softmax(scripted_output, dim=1).topk(5).indices

print('Python model top 5 results:\n {}'.format(unscripted_top5))
print('TorchScript model top 5 results:\n {}'.format(scripted_top5))

You should see that both versions of the model give the same results:

::

@@ -70,16 +70,17 @@
TorchScript model top 5 results:
tensor([[463, 600, 731, 899, 898]])

With that check confirmed, go ahead and save the model:

::

r18_scripted.save('r18_scripted.pt')
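The save/load round trip can be verified in Python before moving to C++; this is a minimal sketch using a small stand-in module (``Tiny`` is hypothetical, not the tutorial's ResNet-18):

```python
import torch

class Tiny(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

scripted = torch.jit.script(Tiny())
scripted.save('tiny_scripted.pt')            # same API as r18_scripted.save(...)

loaded = torch.jit.load('tiny_scripted.pt')  # the C++ side does this via torch::jit::load
x = torch.rand(2, 3)
assert torch.equal(scripted(x), loaded(x))   # behavior survives the round trip
```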

Loading TorchScript Models in C++
---------------------------------

Create the following C++ file and name it ``ts-infer.cpp``:

.. code:: cpp

@@ -95,7 +96,7 @@ Create the following C++ file and name it ``ts-infer.cpp``:

std::cout << "Loading model...\n";

// deserialize ScriptModule
torch::jit::script::Module module;
try {
module = torch::jit::load(argv[1]);
@@ -107,14 +108,14 @@ Create the following C++ file and name it ``ts-infer.cpp``:

std::cout << "Model loaded successfully\n";

torch::NoGradGuard no_grad; // ensures that autograd is off
module.eval(); // turn off dropout and other training-time layers/functions

// create an input "image"
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::rand({1, 3, 224, 224}));

// execute model and package output as tensor
at::Tensor output = module.forward(inputs).toTensor();

namespace F = torch::nn::functional;
@@ -128,20 +129,20 @@ Create the following C++ file and name it ``ts-infer.cpp``:
return 0;
}

This program:

- Loads the model you specify on the command line
- Creates a dummy “image” input tensor
- Performs inference on the input
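For orientation, the same flow the C++ program follows (load or script, eval, no-grad forward, softmax plus topk) can be sketched in Python with a hypothetical stand-in module:

```python
import torch
import torch.nn.functional as F

class Stub(torch.nn.Module):
    def forward(self, x):
        return x.flatten(1)          # stand-in for a real classifier head

module = torch.jit.script(Stub())
module.eval()                        # mirrors module.eval() in ts-infer.cpp

with torch.no_grad():                # mirrors torch::NoGradGuard
    inputs = torch.rand(1, 3, 224, 224)
    output = module(inputs)
    top5 = F.softmax(output, dim=1).topk(5)

print(top5.indices.shape)            # torch.Size([1, 5])
```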

Also, notice that there is no dependency on TorchVision in this code.
The saved version of your TorchScript model has your learning weights
*and* your computation graph - nothing else is needed.
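One way to see that the archive is self-contained is that a scripted module can report its own recovered source; a small sketch (the ``AddOne`` module is hypothetical):

```python
import torch

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1.0

m = torch.jit.script(AddOne())
# The TorchScript compiler stores the graph; .code shows source recovered
# from it, independent of the original Python class definition.
print(m.code)
```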

Building and Running Your C++ Inference Engine
----------------------------------------------

Create the following ``CMakeLists.txt`` file:


::

@@ -154,14 +155,14 @@ Create the following ``CMakeLists.txt`` file:
target_link_libraries(ts-infer "${TORCH_LIBRARIES}")
set_property(TARGET ts-infer PROPERTY CXX_STANDARD 11)

Make the program:

::

cmake -DCMAKE_PREFIX_PATH=<path to your libtorch installation>
make

Now, we can run inference in C++, and verify that we get a result:

::

@@ -177,18 +178,13 @@ Now, we can run inference in C++, and verify that we get a result:

DONE

Important Resources
-------------------

- `pytorch.org`_ for installation instructions, and more documentation
  and tutorials.
- `Introduction to TorchScript tutorial`_ for a deeper initial
  exposition of TorchScript
- `Full TorchScript documentation`_ for complete TorchScript language
  and API reference

.. _pytorch.org: https://pytorch.org/
.. _Introduction to TorchScript tutorial: https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html
.. _Full TorchScript documentation: https://pytorch.org/docs/stable/jit.html