
Commit 734281c

albanD authored and pytorchmergebot committed
Cleanup all module references in doc (pytorch#73983)
Summary: Working towards https://docs.google.com/document/d/10yx2-4gs0gTMOimVS403MnoAWkqitS8TUHX73PN8EjE/edit?pli=1#

This PR:
- Ensures that all the submodules are listed in an .rst file (so that they are considered by the coverage tool)
- Removes some long-deprecated code that just errored out on import
- Removes the allow list altogether to ensure nothing gets added back to it

Pull Request resolved: pytorch#73983
Reviewed By: anjali411
Differential Revision: D34787908
Pulled By: albanD
fbshipit-source-id: 163ce61e133b12b2f2e1cbe374f979e3d6858db7
(cherry picked from commit c9edfea)
1 parent 6656c71 commit 734281c

21 files changed: +205, -135 lines changed
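The invariant this commit enforces is easiest to picture as a set comparison: every public torch.* submodule must be registered in some docs/source/*.rst file through an `.. automodule::` or `.. py:module::` directive, and the Sphinx coverage post-processing fails the build on any gap. The sketch below only illustrates that idea; it is not the conf.py code (module discovery via pkgutil and the directive regex are assumptions made for this example):

import pkgutil
import re
from pathlib import Path

import torch


def public_torch_modules():
    # Walk the installed torch package; skip anything with a '_'-prefixed
    # component, since private modules are exempt from doc coverage.
    names = {"torch"}
    for mod in pkgutil.walk_packages(torch.__path__, prefix="torch.",
                                     onerror=lambda name: None):
        if not any(part.startswith("_") for part in mod.name.split(".")):
            names.add(mod.name)
    return names


def documented_modules(doc_root="docs/source"):
    # Collect every module registered in an .rst file through either directive.
    directive = re.compile(r"^\.\.\s+(?:automodule|py:module)::\s+([\w.]+)", re.M)
    found = set()
    for rst in Path(doc_root).glob("*.rst"):
        found.update(directive.findall(rst.read_text()))
    return found


if __name__ == "__main__":
    missing = sorted(public_torch_modules() - documented_modules())
    if missing:
        print("Submodules with no automodule/py:module entry:")
        print("\n".join(f"  {name}" for name in missing))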

docs/source/amp.rst

Lines changed: 5 additions & 0 deletions
@@ -4,6 +4,11 @@
 Automatic Mixed Precision package - torch.cuda.amp
 ==================================================
 
+.. Both modules below are missing doc entry. Adding them here for now.
+.. This does not add anything to the rendered page
+.. py:module:: torch.cpu
+.. py:module:: torch.cpu.amp
+
 .. automodule:: torch.cuda.amp
 .. currentmodule:: torch.cuda.amp

docs/source/backends.rst

Lines changed: 12 additions & 0 deletions
@@ -3,6 +3,7 @@
 
 torch.backends
 ==============
+.. automodule:: torch.backends
 
 `torch.backends` controls the behavior of various backends that PyTorch supports.
 
@@ -17,6 +18,7 @@ These backends include:
 
 torch.backends.cuda
 ^^^^^^^^^^^^^^^^^^^
+.. automodule:: torch.backends.cuda
 
 .. autofunction:: torch.backends.cuda.is_built
 
@@ -50,6 +52,7 @@ torch.backends.cuda
 
 torch.backends.cudnn
 ^^^^^^^^^^^^^^^^^^^^
+.. automodule:: torch.backends.cudnn
 
 .. autofunction:: torch.backends.cudnn.version
 
@@ -78,17 +81,26 @@ torch.backends.cudnn
 
 torch.backends.mkl
 ^^^^^^^^^^^^^^^^^^
+.. automodule:: torch.backends.mkl
 
 .. autofunction:: torch.backends.mkl.is_available
 
 
 torch.backends.mkldnn
 ^^^^^^^^^^^^^^^^^^^^^
+.. automodule:: torch.backends.mkldnn
 
 .. autofunction:: torch.backends.mkldnn.is_available
 
 
 torch.backends.openmp
 ^^^^^^^^^^^^^^^^^^^^^
+.. automodule:: torch.backends.openmp
 
 .. autofunction:: torch.backends.openmp.is_available
+
+.. Docs for other backends need to be added here.
+.. Automodules are just here to ensure checks run but they don't actually
+.. add anything to the rendered page for now.
+.. py:module:: torch.backends.quantized
+.. py:module:: torch.backends.xnnpack

docs/source/benchmark_utils.rst

Lines changed: 7 additions & 0 deletions
@@ -18,3 +18,10 @@ Benchmark Utils - torch.utils.benchmark
 
 .. autoclass:: FunctionCounts
     :members:
+
+.. These are missing documentation. Adding them here until a better place
+.. is made in this file.
+.. py:module:: torch.utils.benchmark.examples
+.. py:module:: torch.utils.benchmark.op_fuzzers
+.. py:module:: torch.utils.benchmark.utils
+.. py:module:: torch.utils.benchmark.utils.valgrind_wrapper

docs/source/bottleneck.rst

Lines changed: 1 addition & 0 deletions
@@ -1,6 +1,7 @@
 torch.utils.bottleneck
 ======================
 
+.. automodule:: torch.utils.bottleneck
 .. currentmodule:: torch.utils.bottleneck
 
 `torch.utils.bottleneck` is a tool that can be used as an initial step for

docs/source/conf.py

Lines changed: 60 additions & 115 deletions
@@ -86,6 +86,8 @@
 # TODO: document these and remove them from here.
 
 coverage_ignore_functions = [
+    # torch
+    "typename",
     # torch.autograd
     "register_py_tensor_class_for_device",
     "variable",
@@ -129,9 +131,41 @@
     "execWrapper",
     # torch.onnx
     "unregister_custom_op_symbolic",
+    # torch.ao.quantization
+    "default_eval_fn",
+    # torch.ao.quantization.fx.backend_config
+    "validate_backend_config_dict",
+    # torch.backends
+    "disable_global_flags",
+    "flags_frozen",
+    # torch.distributed.algorithms.ddp_comm_hooks
+    "register_ddp_comm_hook",
+    # torch.nn
+    "factory_kwargs",
+    # torch.nn.parallel
+    "DistributedDataParallelCPU",
+    # torch.utils
+    "set_module",
+    # torch.utils.model_dump
+    "burn_in_info",
+    "get_info_and_burn_skeleton",
+    "get_inline_skeleton",
+    "get_model_info",
+    "get_storage_info",
+    "hierarchical_pickle",
 ]
 
 coverage_ignore_classes = [
+    # torch
+    "FatalError",
+    "QUInt2x4Storage",
+    "Size",
+    "Storage",
+    "Stream",
+    "Tensor",
+    "finfo",
+    "iinfo",
+    "qscheme",
     # torch.cuda
     "BFloat16Storage",
     "BFloat16Tensor",
@@ -197,109 +231,25 @@
     # torch.onnx
     "CheckerError",
     "ExportTypes",
+    # torch.backends
+    "ContextProp",
+    "PropModule",
+    # torch.backends.cuda
+    "cuBLASModule",
+    "cuFFTPlanCache",
+    "cuFFTPlanCacheAttrContextProp",
+    "cuFFTPlanCacheManager",
+    # torch.distributed.algorithms.ddp_comm_hooks
+    "DDPCommHookType",
+    # torch.jit.mobile
+    "LiteScriptModule",
+    # torch.nn.quantized.modules
+    "DeQuantize",
+    "Quantize",
+    # torch.utils.backcompat
+    "Warning",
 ]
 
-# List of modules that do not have automodule/py:module in the doc yet
-# We should NOT add anything to this list, see the CI failure message
-# on how to solve missing automodule issues
-coverage_missing_automodule = [
-    "torch",
-    "torch.ao",
-    "torch.ao.nn",
-    "torch.ao.nn.sparse",
-    "torch.ao.nn.sparse.quantized",
-    "torch.ao.nn.sparse.quantized.dynamic",
-    "torch.ao.ns",
-    "torch.ao.ns.fx",
-    "torch.ao.quantization",
-    "torch.ao.quantization.fx",
-    "torch.ao.quantization.fx.backend_config",
-    "torch.ao.sparsity",
-    "torch.ao.sparsity.experimental",
-    "torch.ao.sparsity.experimental.pruner",
-    "torch.ao.sparsity.scheduler",
-    "torch.ao.sparsity.sparsifier",
-    "torch.backends",
-    "torch.backends.cuda",
-    "torch.backends.cudnn",
-    "torch.backends.mkl",
-    "torch.backends.mkldnn",
-    "torch.backends.openmp",
-    "torch.backends.quantized",
-    "torch.backends.xnnpack",
-    "torch.contrib",
-    "torch.cpu",
-    "torch.cpu.amp",
-    "torch.distributed.algorithms",
-    "torch.distributed.algorithms.ddp_comm_hooks",
-    "torch.distributed.algorithms.model_averaging",
-    "torch.distributed.elastic",
-    "torch.distributed.elastic.utils",
-    "torch.distributed.elastic.utils.data",
-    "torch.distributed.launcher",
-    "torch.distributed.nn",
-    "torch.distributed.nn.api",
-    "torch.distributed.nn.jit",
-    "torch.distributed.nn.jit.templates",
-    "torch.distributed.pipeline",
-    "torch.distributed.pipeline.sync",
-    "torch.distributed.pipeline.sync.skip",
-    "torch.fft",
-    "torch.for_onnx",
-    "torch.fx.experimental",
-    "torch.fx.experimental.unification",
-    "torch.fx.experimental.unification.multipledispatch",
-    "torch.fx.passes",
-    "torch.jit.mobile",
-    "torch.nn",
-    "torch.nn.backends",
-    "torch.nn.intrinsic",
-    "torch.nn.intrinsic.modules",
-    "torch.nn.intrinsic.qat",
-    "torch.nn.intrinsic.qat.modules",
-    "torch.nn.intrinsic.quantized",
-    "torch.nn.intrinsic.quantized.dynamic",
-    "torch.nn.intrinsic.quantized.dynamic.modules",
-    "torch.nn.intrinsic.quantized.modules",
-    "torch.nn.modules",
-    "torch.nn.parallel",
-    "torch.nn.qat",
-    "torch.nn.qat.modules",
-    "torch.nn.qat.dynamic",
-    "torch.nn.qat.dynamic.modules",
-    "torch.nn.quantizable",
-    "torch.nn.quantizable.modules",
-    "torch.nn.quantized",
-    "torch.nn.quantized.dynamic",
-    "torch.nn.quantized.dynamic.modules",
-    "torch.nn.quantized.modules",
-    "torch.nn.utils",
-    "torch.package",
-    "torch.package.analyze",
-    "torch.quantization",
-    "torch.quantization.fx",
-    "torch.sparse",
-    "torch.special",
-    "torch.utils",
-    "torch.utils.backcompat",
-    "torch.utils.benchmark.examples",
-    "torch.utils.benchmark.op_fuzzers",
-    "torch.utils.benchmark.utils",
-    "torch.utils.benchmark.utils.valgrind_wrapper",
-    "torch.utils.bottleneck",
-    "torch.utils.data.communication",
-    "torch.utils.data.datapipes",
-    "torch.utils.data.datapipes.dataframe",
-    "torch.utils.data.datapipes.iter",
-    "torch.utils.data.datapipes.map",
-    "torch.utils.data.datapipes.utils",
-    "torch.utils.ffi",
-    "torch.utils.hipify",
-    "torch.utils.model_dump",
-    "torch.utils.tensorboard",
-]
-
-
 # The suffix(es) of source filenames.
 # You can specify multiple suffix as a list of string:
 #
@@ -417,6 +367,11 @@ def coverage_post_process(app, exception):
     if not isinstance(app.builder, CoverageBuilder):
         return
 
+    if not torch.distributed.is_available():
+        raise RuntimeError("The coverage tool cannot run with a version "
+                           "of PyTorch that was built with USE_DISTRIBUTED=0 "
+                           "as this module's API changes.")
+
     # These are all the modules that have "automodule" in an rst file
     # These modules are the ones for which coverage is checked
     # Here, we make sure that no module is missing from that list
@@ -443,26 +398,16 @@ def is_not_internal(modname):
         if modname not in modules:
             missing.add(modname)
 
-    expected = set(coverage_missing_automodule)
-
     output = []
 
-    unexpected_missing = missing - expected
-    if unexpected_missing:
-        mods = ", ".join(unexpected_missing)
+    if missing:
+        mods = ", ".join(missing)
         output.append(f"\nYou added the following module(s) to the PyTorch namespace '{mods}' "
                       "but they have no corresponding entry in a doc .rst file. You should "
                       "either make sure that the .rst file that contains the module's documentation "
                       "properly contains either '.. automodule:: mod_name' (if you do not want "
-                      "the paragraph added by the automodule, you can simply use py:module) or "
-                      "make the module private (by appending an '_' at the beginning of its name.")
-
-    unexpected_not_missing = expected - missing
-    if unexpected_not_missing:
-        mods = ", ".join(unexpected_not_missing)
-        output.append(f"\nThank you for adding the missing .rst entries for '{mods}', please update "
-                      "the 'coverage_missing_automodule' in 'torch/docs/source/conf.py' to remove "
-                      "the module(s) you fixed and make sure we do not regress on this in the future.")
+                      "the paragraph added by the automodule, you can simply use '.. py:module:: mod_name') "
+                      " or make the module private (by appending an '_' at the beginning of its name).")
 
     # The output file is hard-coded by the coverage tool
    # Our CI is setup to fail if any line is added to this file

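Condensing the conf.py hunks above: the post-processing hook now has a single failure path instead of diffing against the old allow list. The following is a paraphrased sketch, not the literal code; the real hook collects the documented modules from the .rst sources and appends its message to the Sphinx coverage output file rather than raising:

import torch


def check_doc_coverage(public_modules, documented_modules):
    # Paraphrase of the simplified post-processing: no allow list remains,
    # so any undocumented public module is an error.
    if not torch.distributed.is_available():
        raise RuntimeError(
            "The coverage tool cannot run with a version of PyTorch that was "
            "built with USE_DISTRIBUTED=0 as this module's API changes."
        )

    missing = set(public_modules) - set(documented_modules)
    if missing:
        mods = ", ".join(sorted(missing))
        raise RuntimeError(
            f"You added the following module(s) to the PyTorch namespace '{mods}' "
            "but they have no corresponding entry in a doc .rst file. Add an "
            "'.. automodule:: mod_name' (or '.. py:module:: mod_name') entry, or "
            "make the module private by prefixing its name with '_'."
        )
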
docs/source/data.rst

Lines changed: 12 additions & 0 deletions
@@ -432,3 +432,15 @@ Example::
 .. autoclass:: torch.utils.data.WeightedRandomSampler
 .. autoclass:: torch.utils.data.BatchSampler
 .. autoclass:: torch.utils.data.distributed.DistributedSampler
+
+
+.. This module is experimental and should be private, adding it here for now
+.. py:module:: torch.utils.data.communication
+
+.. These modules are documented as part of torch/data listing them here for
+.. now until we have a clearer fix
+.. py:module:: torch.utils.data.datapipes
+.. py:module:: torch.utils.data.datapipes.dataframe
+.. py:module:: torch.utils.data.datapipes.iter
+.. py:module:: torch.utils.data.datapipes.map
+.. py:module:: torch.utils.data.datapipes.utils

docs/source/distributed.rst

Lines changed: 18 additions & 0 deletions
@@ -808,3 +808,21 @@ following matrix shows how the log level can be adjusted via the combination of
 +-------------------------+-----------------------------+------------------------+
 | ``INFO``                | ``DETAIL``                  | Trace (a.k.a. All)     |
 +-------------------------+-----------------------------+------------------------+
+
+
+.. Distributed modules that are missing specific entries.
+.. Adding them here for tracking purposes until they are more permanently fixed.
+.. py:module:: torch.distributed.algorithms
+.. py:module:: torch.distributed.algorithms.ddp_comm_hooks
+.. py:module:: torch.distributed.algorithms.model_averaging
+.. py:module:: torch.distributed.elastic
+.. py:module:: torch.distributed.elastic.utils
+.. py:module:: torch.distributed.elastic.utils.data
+.. py:module:: torch.distributed.launcher
+.. py:module:: torch.distributed.nn
+.. py:module:: torch.distributed.nn.api
+.. py:module:: torch.distributed.nn.jit
+.. py:module:: torch.distributed.nn.jit.templates
+.. py:module:: torch.distributed.pipeline
+.. py:module:: torch.distributed.pipeline.sync
+.. py:module:: torch.distributed.pipeline.sync.skip

docs/source/fft.rst

Lines changed: 0 additions & 2 deletions
@@ -7,8 +7,6 @@ torch.fft
 Discrete Fourier transforms and related functions.
 
 .. automodule:: torch.fft
-    :noindex:
-
 .. currentmodule:: torch.fft
 
 Fast Fourier Transforms

docs/source/fx.rst

Lines changed: 9 additions & 0 deletions
@@ -1109,3 +1109,12 @@ API Reference
     :members:
 
 .. autofunction:: torch.fx.replace_pattern
+
+
+.. The experimental and passes submodules are missing docs.
+.. Adding it here for coverage but this doesn't add anything to the
+.. rendered doc.
+.. py:module:: torch.fx.passes
+.. py:module:: torch.fx.experimental
+.. py:module:: torch.fx.experimental.unification
+.. py:module:: torch.fx.experimental.unification.multipledispatch

docs/source/jit.rst

Lines changed: 4 additions & 0 deletions
@@ -878,3 +878,7 @@ References
 
     jit_python_reference
    jit_unsupported
+
+.. This package is missing doc. Adding it here for coverage
+.. This does not add anything to the rendered page.
+.. py:module:: torch.jit.mobile
