Commit

Update at 2024-06-05 00:00:11.094250
mlc-bot committed Jun 5, 2024
1 parent cf4a625 commit aa9aa4b
Showing 32 changed files with 2,152 additions and 1,312 deletions.
Binary file modified _images/output_index_1f4d27_59_0.png
Binary file modified _images/output_index_4f28a7_60_0.png
Binary file modified _images/output_index_e26dde_40_0.png
Binary file modified _images/output_index_e758e2_5_0.png
1,080 changes: 913 additions & 167 deletions _sources/chapter_auto_program_optimization/index.rst.txt

Large diffs are not rendered by default.

28 changes: 15 additions & 13 deletions _sources/chapter_end_to_end/index.rst.txt
@@ -124,7 +124,7 @@ We can plot out the image instance that we want to be able to predict.
.. parsed-literal::
:class: output
- Class: Sandal
+ Class: T-shirt/top
Download Model Parameters
@@ -191,10 +191,10 @@ Let us begin by reviewing a Numpy implementation of the model.
.. parsed-literal::
:class: output
- [[-26.194868 -34.8678 -24.12358 -20.510454 -18.109476 12.921284
-  -17.477413 -4.9815836 -7.2966995 -6.3170166]]
- [5]
- NumPy-MLP Prediction: Sandal
+ [[ 16.492514 -9.319946 -1.3977761 -7.6737294 -19.890661 -19.210732
+    7.6732693 -26.277058 -10.610296 -29.9112 ]]
+ [0]
+ NumPy-MLP Prediction: T-shirt/top
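The NumPy implementation itself is not rendered in this diff; a minimal sketch of such a high-level NumPy MLP, assuming parameter names ``w0``, ``b0``, ``w1``, ``b1`` and Fashion MNIST shapes (these are illustrative, not taken from the diff), might look like:

.. code:: python

    import numpy as np

    def numpy_mlp(data, w0, b0, w1, b1):
        """Two-layer MLP: linear0 -> relu -> linear1."""
        lv0 = data @ w0.T + b0      # (1, 784) -> (1, 128)
        lv1 = np.maximum(lv0, 0)    # relu
        lv2 = lv1 @ w1.T + b1       # (1, 128) -> (1, 10) class logits
        return lv2

Taking ``np.argmax`` over the ten logits gives the predicted class printed above.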
The above example code shows the high-level array operations to perform
@@ -282,7 +282,7 @@ model.
.. parsed-literal::
:class: output
- Low-level Numpy MLP Prediction: Sandal
+ Low-level Numpy MLP Prediction: T-shirt/top
Constructing an End to End IRModule in TVMScript
@@ -528,7 +528,7 @@ the low-level numpy execution code as:
.. parsed-literal::
:class: output
- Low-level Numpy with CallTIR Prediction: Sandal
+ Low-level Numpy with CallTIR Prediction: T-shirt/top
In practice, the lowest-level implementation will have explicit memory
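The ``call_tir`` convention can be mimicked in plain NumPy; a rough sketch (the helper name and signature are illustrative, not a TVM API) allocates the output and invokes a destination-passing-style function that writes into it:

.. code:: python

    import numpy as np

    def lnumpy_call_tir(prim_func, inputs, shape, dtype):
        # Allocate the result buffer, then let the low-level function
        # fill it in destination-passing style.
        res = np.empty(shape, dtype=dtype)
        prim_func(*inputs, res)
        return res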
@@ -646,6 +646,7 @@ have.
.output_html .cs { color: #3D7B7B; font-style: italic } /* Comment.Special */
.output_html .gd { color: #A00000 } /* Generic.Deleted */
.output_html .ge { font-style: italic } /* Generic.Emph */
+ .output_html .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */
.output_html .gr { color: #E40000 } /* Generic.Error */
.output_html .gh { color: #000080; font-weight: bold } /* Generic.Heading */
.output_html .gi { color: #008400 } /* Generic.Inserted */
@@ -837,8 +838,8 @@ weights.
.. parsed-literal::
:class: output
- [[-26.19487 -34.867805 -24.123579 -20.510454 -18.109478 12.921284
-  -17.47741 -4.981583 -7.2966967 -6.3170156]]
+ [[ 16.49251 -9.319946 -1.3977766 -7.673729 -19.890663 -19.210732
+    7.67327 -26.277052 -10.610297 -29.911194 ]]
The main function returns the prediction result, and we can then call
@@ -862,7 +863,7 @@ the class label.
.. parsed-literal::
:class: output
- MyModule Prediction: Sandal
+ MyModule Prediction: T-shirt/top
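For context, a hedged sketch of how the built module's result is mapped to the label printed above; ``MyModule``, ``data_nd``, and ``nd_params`` are assumed from earlier steps in the chapter, and the build/VM calls follow the chapter's usage of the relax API:

.. code:: python

    import numpy as np
    import tvm
    from tvm import relax

    class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
                   "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

    # Build and run the end-to-end IRModule.
    ex = relax.build(MyModule, target="llvm")
    vm = relax.VirtualMachine(ex, tvm.cpu())
    nd_res = vm["main"](data_nd,
                        nd_params["w0"], nd_params["b0"],
                        nd_params["w1"], nd_params["b1"])

    # Map the returned logits to a Fashion MNIST class label.
    pred_kind = int(np.argmax(nd_res.numpy(), axis=1)[0])
    print("MyModule Prediction:", class_names[pred_kind])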
Integrate Existing Libraries in the Environment
@@ -993,7 +994,7 @@ that we get the same result.
.. parsed-literal::
:class: output
- MyModuleWithExternCall Prediction: Sandal
+ MyModuleWithExternCall Prediction: T-shirt/top
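The extern calls in ``MyModuleWithExternCall`` resolve to functions registered in the environment at runtime; a minimal sketch of such registrations with ``tvm.register_func`` (the names ``env.linear`` and ``env.relu`` are assumed to match the chapter's convention):

.. code:: python

    import numpy as np
    import tvm

    @tvm.register_func("env.linear", override=True)
    def lnumpy_linear(x, w, b, out):
        # Destination-passing style: compute with NumPy, copy into `out`.
        out.copyfrom(x.numpy() @ w.numpy().T + b.numpy())

    @tvm.register_func("env.relu", override=True)
    def lnumpy_relu(x, out):
        out.copyfrom(np.maximum(x.numpy(), 0))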
Mixing TensorIR Code and Libraries
@@ -1077,7 +1078,7 @@ functions. We can build and run to validate the result.
.. parsed-literal::
:class: output
- MyModuleMixture Prediction: Sandal
+ MyModuleMixture Prediction: T-shirt/top
Bind Parameters to IRModule
@@ -1122,6 +1123,7 @@ nd_params.
.output_html .cs { color: #3D7B7B; font-style: italic } /* Comment.Special */
.output_html .gd { color: #A00000 } /* Generic.Deleted */
.output_html .ge { font-style: italic } /* Generic.Emph */
+ .output_html .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */
.output_html .gr { color: #E40000 } /* Generic.Error */
.output_html .gh { color: #000080; font-weight: bold } /* Generic.Heading */
.output_html .gi { color: #008400 } /* Generic.Inserted */
@@ -1250,7 +1252,7 @@ the input data.
.. parsed-literal::
:class: output
- MyModuleWithParams Prediction: Sandal
+ MyModuleWithParams Prediction: T-shirt/top
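A short sketch of the parameter-binding step whose prediction is shown above, assuming ``MyModuleMixture`` and ``nd_params`` from the preceding sections; ``relax.transform.BindParams`` rewrites ``main`` so the weights become embedded constants and only the input data remains an argument:

.. code:: python

    import tvm
    from tvm import relax

    MyModuleWithParams = relax.transform.BindParams("main", nd_params)(MyModuleMixture)

    ex = relax.build(MyModuleWithParams, target="llvm")
    vm = relax.VirtualMachine(ex, tvm.cpu())
    nd_res = vm["main"](data_nd)   # weights are now baked into the module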
Discussions
106 changes: 7 additions & 99 deletions _sources/chapter_gpu_acceleration/part1.rst.txt

Large diffs are not rendered by default.

105 changes: 7 additions & 98 deletions _sources/chapter_gpu_acceleration/part2.rst.txt

Large diffs are not rendered by default.

229 changes: 75 additions & 154 deletions _sources/chapter_graph_optimization/index.rst.txt

Large diffs are not rendered by default.

225 changes: 54 additions & 171 deletions _sources/chapter_integration/index.rst.txt

Large diffs are not rendered by default.

101 changes: 55 additions & 46 deletions _sources/chapter_tensor_program/case_study.rst.txt

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions _sources/chapter_tensor_program/tensorir_exercises.rst.txt
@@ -69,9 +69,9 @@ computation abstraction (e.g., ``ndarray + ndarray``) to low-level
python implementation (standard for loops with element access and
operation)

- Notably, the initial value of the o utput array (or buffer) is not
- always ``0``. We need to write or initialize it in our implementation,
- which is important for reduction operator (e.g. matmul and conv)
+ Notably, the initial value of the output array (or buffer) is not always
+ ``0``. We need to write or initialize it in our implementation, which is
+ important for reduction operator (e.g. matmul and conv)
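For instance, a minimal low-level NumPy matmul sketch (sizes and names are illustrative) that makes the reduction initialization explicit:

.. code:: python

    import numpy as np

    def lnumpy_matmul(A: np.ndarray, B: np.ndarray, C: np.ndarray):
        n, k, m = A.shape[0], A.shape[1], B.shape[1]
        for i in range(n):
            for j in range(m):
                C[i, j] = 0  # explicit init: the output buffer may hold garbage
                for p in range(k):
                    C[i, j] += A[i, p] * B[p, j]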

.. raw:: latex

@@ -315,8 +315,8 @@ Parallel, Vectorize and Unroll
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

First, we introduce some new primitives, ``parallel``, ``vectorize`` and
- ``unroll``. These three primitives operates on loops to indicate how
- this loop execute. Here is the example:
+ ``unroll``. These three primitives operate on loops to indicate how this
+ loop executes. Here is the example:

.. raw:: latex

@@ -370,6 +370,7 @@ this loop execute. Here is the example:
.output_html .cs { color: #3D7B7B; font-style: italic } /* Comment.Special */
.output_html .gd { color: #A00000 } /* Generic.Deleted */
.output_html .ge { font-style: italic } /* Generic.Emph */
+ .output_html .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */
.output_html .gr { color: #E40000 } /* Generic.Error */
.output_html .gh { color: #000080; font-weight: bold } /* Generic.Heading */
.output_html .gi { color: #008400 } /* Generic.Inserted */
@@ -431,7 +432,6 @@ this loop execute. Here is the example:
<span class="k">class</span> <span class="nc">Module</span><span class="p">:</span>
<span class="nd">@T</span><span class="o">.</span><span class="n">prim_func</span>
<span class="k">def</span> <span class="nf">add</span><span class="p">(</span><span class="n">A</span><span class="p">:</span> <span class="n">T</span><span class="o">.</span><span class="n">Buffer</span><span class="p">((</span><span class="mi">4</span><span class="p">,</span> <span class="mi">4</span><span class="p">),</span> <span class="s2">&quot;int64&quot;</span><span class="p">),</span> <span class="n">B</span><span class="p">:</span> <span class="n">T</span><span class="o">.</span><span class="n">Buffer</span><span class="p">((</span><span class="mi">4</span><span class="p">,</span> <span class="mi">4</span><span class="p">),</span> <span class="s2">&quot;int64&quot;</span><span class="p">),</span> <span class="n">C</span><span class="p">:</span> <span class="n">T</span><span class="o">.</span><span class="n">Buffer</span><span class="p">((</span><span class="mi">4</span><span class="p">,</span> <span class="mi">4</span><span class="p">),</span> <span class="s2">&quot;int64&quot;</span><span class="p">)):</span>
<span class="n">T</span><span class="o">.</span><span class="n">func_attr</span><span class="p">({</span><span class="s2">&quot;global_symbol&quot;</span><span class="p">:</span> <span class="s2">&quot;add&quot;</span><span class="p">})</span>
<span class="c1"># with T.block(&quot;root&quot;):</span>
<span class="k">for</span> <span class="n">i_0</span> <span class="ow">in</span> <span class="n">T</span><span class="o">.</span><span class="n">parallel</span><span class="p">(</span><span class="mi">2</span><span class="p">):</span>
<span class="k">for</span> <span class="n">i_1</span> <span class="ow">in</span> <span class="n">T</span><span class="o">.</span><span class="n">unroll</span><span class="p">(</span><span class="mi">2</span><span class="p">):</span>
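The printed module above is the result of applying these loop primitives through a schedule; a minimal sketch of how that could be done (the block name ``"C"``, the split factors, and the 4x4 ``add`` kernel are assumptions for illustration):

.. code:: python

    import tvm
    from tvm.script import tir as T

    @tvm.script.ir_module
    class MyAdd:
        @T.prim_func
        def add(A: T.Buffer((4, 4), "int64"),
                B: T.Buffer((4, 4), "int64"),
                C: T.Buffer((4, 4), "int64")):
            T.func_attr({"global_symbol": "add"})
            for i, j in T.grid(4, 4):
                with T.block("C"):
                    vi, vj = T.axis.remap("SS", [i, j])
                    C[vi, vj] = A[vi, vj] + B[vi, vj]

    sch = tvm.tir.Schedule(MyAdd)
    block = sch.get_block("C", func_name="add")
    i, j = sch.get_loops(block)
    i0, i1 = sch.split(i, factors=[2, 2])
    sch.parallel(i0)
    sch.unroll(i1)
    sch.vectorize(j)
    print(sch.mod.script())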
Binary file added _static/mlc-favicon.ico
Binary file removed _static/mlc-favicon.png
1 change: 1 addition & 0 deletions _static/pygments.css
@@ -17,6 +17,7 @@ span.linenos.special { color: #000000; background-color: #ffffc0; padding-left:
.highlight .cs { color: #60a0b0; background-color: #fff0f0 } /* Comment.Special */
.highlight .gd { color: #A00000 } /* Generic.Deleted */
.highlight .ge { font-style: italic } /* Generic.Emph */
+ .highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */
.highlight .gr { color: #FF0000 } /* Generic.Error */
.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */
.highlight .gi { color: #00A000 } /* Generic.Inserted */
