Commit
[gh-pages] Restructure pages
[gh-pages] Restructure pages
auphelia committed Jan 31, 2024
1 parent 6234223 commit 9453683
Showing 5 changed files with 60 additions and 40 deletions.
1 change: 1 addition & 0 deletions docs/_layouts/default.html
Original file line number Diff line number Diff line change
@@ -27,6 +27,7 @@
<li class="download"><a class="buttons" href="{{ site.github.tar_url }}">Download TAR</a></li>
{% endif %}
<li class="download"><a class="buttons" style="background: none" href="https://xilinx.github.io/finn/about">About</a></li>
<li class="download"><a class="buttons" style="background: none" href="https://xilinx.github.io/finn/quickstart">Quickstart</a></li>
<li class="download"><a class="buttons" style="background: none" href="https://github.com/Xilinx/finn/discussions/categories/announcements">Announcements</a></li>
<li class="download"><a class="buttons" style="background: none" href="https://xilinx.github.io/finn/publications">Publications</a></li>
<li class="download"><a class="buttons" style="background: none" href="https://xilinx.github.io/finn/events">Events</a></li>
29 changes: 0 additions & 29 deletions docs/about.md
@@ -1,29 +1,3 @@
## What is FINN?

<img src="img/finn-example.png" alt="drawing" width="400"/>

FINN is an ML framework by the Integrated Communications and AI Lab of AMD Research & Advanced Development.
It provides an end-to-end flow for the exploration and implementation of quantized neural network inference solutions on FPGAs.
FINN generates dataflow architectures as a physical representation of the implemented custom network in space.
It is not a generic DNN acceleration solution but relies on co-design and design space exploration, tuning quantization and parallelization to optimize a solution with respect to resource and performance requirements.
<br>
## Features

* **Templated Vitis HLS library of streaming components:** FINN comes with an
HLS hardware library that implements convolutional, fully-connected, pooling and
LSTM layer types as streaming components. The library uses C++ templates to
support a wide range of precisions.
* **Ultra low-latency and high performance
with dataflow:** By composing streaming components for each layer, FINN can
generate accelerators that can classify images at sub-microsecond latency.
* **Many end-to-end example designs:** We provide examples that start from training a
quantized neural network, all the way down to an accelerated design running on
hardware. The examples span a range of datasets and network topologies.
* **Toolflow for rapid design generation:** The FINN toolflow supports allocating
separate compute resources per layer, either automatically or manually, and
generating the full design for synthesis. This enables rapid exploration of the
design space.

## Who are we?

The FINN team consists of members of AMD Research under Ralph Wittig (AMD Research & Advanced Development) and members of Custom & Strategic Engineering under Allen Chen, working very closely with the Pynq team.
@@ -37,6 +11,3 @@ Thomas Preusser, Jakoba Petri-Koenig, Ken O’Brien

From top left to bottom right: Eamonn Dunbar, Kasper Feurer, Aziz Bahri, John Monks, Mirza Mrahorovic




Binary file modified docs/img/finn-example.png
40 changes: 29 additions & 11 deletions docs/index.md
@@ -1,5 +1,5 @@
# FINN
<img align="left" src="img/finn-stack.PNG" alt="drawing" style="margin-right: 20px" width="300"/>
<img src="img/finn-example.png" alt="drawing" width="400"/>

FINN is a machine learning framework by the Integrated Communications and AI Lab of AMD Research & Advanced Development.
It provides an end-to-end flow for the exploration and implementation of quantized neural network inference solutions on FPGAs.
@@ -9,15 +9,33 @@ It is not a generic DNN acceleration solution but relies on co-design and design
<br><br>
The FINN compiler is under active development <a href="https://github.com/Xilinx/finn">on GitHub</a>, and we welcome contributions from the community!

## Quickstart
<br>
## Features

Depending on what you would like to do, we have different suggestions on where to get started:
* **Templated Vitis HLS library of streaming components:** FINN comes with an
HLS hardware library that implements convolutional, fully-connected, pooling and
LSTM layer types as streaming components. The library uses C++ templates to
support a wide range of precisions.
* **Ultra low-latency and high performance
with dataflow:** By composing streaming components for each layer, FINN can
generate accelerators that can classify images at sub-microsecond latency.
* **Many end-to-end example designs:** We provide examples that start from training a
quantized neural network, all the way down to an accelerated design running on
hardware. The examples span a range of datasets and network topologies.
* **Toolflow for rapid design generation:** The FINN toolflow supports allocating
separate compute resources per layer, either automatically or manually, and
generating the full design for synthesis. This enables rapid exploration of the
design space.
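The dataflow idea behind these features can be sketched in plain Python: each layer is a streaming stage that consumes one stream and yields another, and a network is just the composition of its stages. This is a hypothetical illustration of the concept only, not the FINN compiler's actual API; the layer functions, weights, and thresholds below are made up for the example.

```python
# Hypothetical sketch of dataflow-style streaming composition (not FINN code):
# each "layer" is a generator stage; the network is the composed pipeline.

def fc_layer(stream, weights):
    """Fully-connected stage: multiply each incoming vector by a weight matrix."""
    for x in stream:
        yield [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def threshold_layer(stream, thresholds):
    """Thresholding stage: map each value to the number of thresholds it crosses,
    producing a low-precision activation."""
    for x in stream:
        yield [sum(v >= t for t in thresholds) for v in x]

inputs = iter([[1, -1], [2, 0]])        # stream of input vectors
W = [[1, 1], [1, -1]]                   # toy weight matrix
pipeline = threshold_layer(fc_layer(inputs, W), thresholds=[0, 2])
print(list(pipeline))                   # → [[1, 2], [2, 2]]
```

Because every stage only holds the element it is currently processing, stages can run concurrently once mapped to hardware, which is what enables the sub-microsecond latencies mentioned above.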

* **I want to try out prebuilt QNN accelerators on my FPGA board.** Head over to [finn-examples](https://github.com/Xilinx/finn-examples)
to try out some FPGA accelerators built with the FINN compiler. We have more examples in the [BNN-PYNQ](https://github.com/Xilinx/BNN-PYNQ)
and the [LSTM-PYNQ](https://github.com/Xilinx/LSTM-PYNQ) repos, although these are not built with the FINN compiler.
* **I want to train new quantized networks for FINN.** Check out <a href="https://github.com/Xilinx/brevitas">Brevitas</a>,
our PyTorch library for quantization-aware training.
* **I want to understand the computations involved in quantized inference.** Check out these Jupyter notebooks on <a href="https://github.com/maltanar/qnn-inference-examples">QNN inference</a>. This repo contains simple Numpy/Python layer implementations and a few pretrained QNNs for instructive purposes.
* **I want to understand how it all fits together.** Check out our [publications](publications.md),
particularly the <a href="https://arxiv.org/abs/1612.07119" target="_blank">FINN paper at FPGA'17</a> and the <a href="https://arxiv.org/abs/1809.04570" target="_blank">FINN-R paper in ACM TRETS</a>.
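The quantized-inference arithmetic referred to above can be shown in a few lines of plain Python. This is a hypothetical sketch in the spirit of the qnn-inference-examples notebooks, not their actual code; the function and values are invented for illustration.

```python
# Hypothetical sketch of the arithmetic behind quantized inference:
# restrict weights to a few integer levels, compute in integers, rescale once.

def quantize_weights(w, bits=2):
    """Uniform symmetric quantization to the signed levels a given bit width allows."""
    levels = 2 ** (bits - 1) - 1            # bits=2 -> levels in {-1, 0, 1}
    scale = max(abs(v) for v in w) / levels
    return [round(v / scale) for v in w], scale

w = [0.9, -0.4, 0.05, -1.0]                 # float weights
q, scale = quantize_weights(w, bits=2)      # q == [1, 0, 0, -1], scale == 1.0
x = [1, 2, 3, 4]                            # integer input activations
acc = sum(qi * xi for qi, xi in zip(q, x))  # pure integer dot product: -3
y = acc * scale                             # rescale once at the end: -3.0
```

The key point is that the inner loop needs only narrow integer multiplies and adds, which is exactly what maps cheaply onto FPGA fabric.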
## Who are we?

The FINN team consists of members of AMD Research under Ralph Wittig (AMD Research & Advanced Development) and members of Custom & Strategic Engineering under Allen Chen, working very closely with the Pynq team.

<img src="img/finn-team.png" alt="The FINN Team (AMD Research and Advanced Development)" width="400"/>

From top left to bottom right: Yaman Umuroglu, Michaela Blott, Alessandro Pappalardo, Lucian Petrica, Nicholas Fraser,
Thomas Preusser, Jakoba Petri-Koenig, Ken O’Brien

<img src="img/finn-team1.png" alt="The FINN Team (Custom & Strategic Engineering)" width="400"/>

From top left to bottom right: Eamonn Dunbar, Kasper Feurer, Aziz Bahri, John Monks, Mirza Mrahorovic
30 changes: 30 additions & 0 deletions docs/quickstart.md
@@ -0,0 +1,30 @@
## Quickstart

<img align="left" src="img/finn-stack.PNG" alt="drawing" style="margin-right: 20px" width="300"/>


### Repo links
Depending on what you would like to do, we have different suggestions on where to get started:

* **I want to try out prebuilt QNN accelerators on my FPGA board.** Head over to [finn-examples](https://github.com/Xilinx/finn-examples)
to try out some FPGA accelerators built with the FINN compiler.
* **I want to train new quantized networks for FINN.** Check out <a href="https://github.com/Xilinx/brevitas">Brevitas</a>,
our PyTorch library for quantization-aware training.
* **I want to work with the FPGA building blocks for neural networks directly.** Check out the [HLS](https://github.com/Xilinx/finn-hlslib) and [RTL](https://github.com/Xilinx/finn/tree/main/finn-rtllib) libraries of FINN.
* **I want to understand how it all fits together.** Check out the <a href="https://github.com/Xilinx/finn">FINN compiler</a>.


### Introduction videos & articles
* [Video tutorial @FPGA 2021](https://www.youtube.com/watch?v=zw2aG4PhzmA)
* [FINN paper](https://arxiv.org/pdf/1612.07119.pdf)
* [FINN-R paper](https://arxiv.org/pdf/1809.04570.pdf) - the current toolflow is largely based on this work
* Brevitas tutorial @TVMCon2021: [Video](https://www.youtube.com/watch?v=wsXx3Hr5kZs) and [Jupyter notebook](https://github.com/Xilinx/brevitas/blob/master/notebooks/Brevitas_TVMCon2021.ipynb)

### More in-depth material
* [FINN documentation](https://finn.readthedocs.io/en/latest/)
* [Brevitas documentation](https://xilinx.github.io/brevitas/)
* [FINN installation instructions](https://finn.readthedocs.io/en/latest/getting_started.html)
* [Full stack tutorial for training an MLP with Brevitas + deploying with FINN](https://github.com/Xilinx/finn/tree/main/notebooks/end2end_example/cybersecurity)
* [Tutorial showing detailed compilation steps for a trained fully-connected network](https://github.com/Xilinx/finn/blob/main/notebooks/end2end_example/bnn-pynq/tfc_end2end_example.ipynb)
* [Building on the previous tutorial, a tutorial showing detailed verification steps for a trained fully-connected network](https://github.com/Xilinx/finn/blob/main/notebooks/end2end_example/bnn-pynq/tfc_end2end_verification.ipynb)
* 2-part demonstrator on YouTube: [Part 1 - Toolflow](https://www.youtube.com/watch?v=z49tzp3CBoM) & [Part 2 - Hardware Performance Demo](https://www.youtube.com/watch?v=W35c5XmnlhA)
