
Commit

Merge branch 'main' of https://github.com/adtzlr/tensortrax
adtzlr committed Jun 10, 2024
2 parents d884ac2 + 2325382 commit 408089e
Showing 31 changed files with 151 additions and 1,027 deletions.
@@ -23,7 +23,7 @@ jobs:
       - name: Test with tox
         run: |
           pip install tox
-          tox -- --cov tensortrax --cov-report xml --cov-report term
+          tox
       - name: Upload coverage to Codecov
         uses: codecov/codecov-action@v4
         if: ${{ matrix.python-version == '3.12' }}
4 changes: 2 additions & 2 deletions LICENSE
@@ -632,7 +632,7 @@ state the exclusion of warranty; and each file should have at least
 the "copyright" line and a pointer to where the full notice is found.

     <one line to give the program's name and a brief idea of what it does.>
-    Copyright (C) 2022-2024 Andreas Dutzler
+    Copyright (C) 2023 Andreas Dutzler

     This program is free software: you can redistribute it and/or modify
     it under the terms of the GNU General Public License as published by
@@ -652,7 +652,7 @@ Also add information on how to contact you by electronic and paper mail.
 If the program does terminal interaction, make it output a short
 notice like this when it starts in an interactive mode:

-    tensortrax Copyright (C) 2022-2024 Andreas Dutzler
+    tensortrax Copyright (C) 2023 Andreas Dutzler
     This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
     This is free software, and you are welcome to redistribute it
     under certain conditions; type `show c' for details.
7 changes: 3 additions & 4 deletions README.md
@@ -1,13 +1,12 @@
 <p align="center">
-  <img src="https://github.com/adtzlr/tensortrax/assets/5793153/445eedc1-295a-4c1e-b3f9-6f037887dd86" height="65px"/>
-  <p align="center">Differentiable Tensors based on NumPy Arrays.</p>
+  <img src="https://github.com/adtzlr/tensortrax/assets/5793153/7dd2f76d-aa3c-494d-935c-bdd8e945c692" height="80px"/>
+  <p align="center">Math on (Hyper-Dual) Tensors with Trailing Axes.</p>
 </p>

 [![PyPI version shields.io](https://img.shields.io/pypi/v/tensortrax.svg)](https://pypi.python.org/pypi/tensortrax/) [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) [![Documentation Status](https://readthedocs.org/projects/tensortrax/badge/?version=latest)](https://tensortrax.readthedocs.io/en/latest/?badge=latest) ![PyPI - Downloads](https://img.shields.io/pypi/dm/tensortrax) ![Codestyle black](https://img.shields.io/badge/code%20style-black-black) [![DOI](https://zenodo.org/badge/570708066.svg)](https://zenodo.org/badge/latestdoi/570708066) [![codecov](https://codecov.io/github/adtzlr/tensortrax/branch/main/graph/badge.svg?token=7DTH0HKYO9)](https://codecov.io/github/adtzlr/tensortrax)

 # Highlights
-- Write differentiable code with Tensors based on NumPy arrays
-- Efficient evaluation of batches by elementwise-operating trailing axes
+- Designed to operate on input arrays with (elementwise-operating) trailing axes
 - Essential vector/tensor Hyper-Dual number math, including limited support for `einsum` (restricted to max. three operands)
 - Math is limited but similar to NumPy, try to use `import tensortrax.math as tm` instead of `import numpy as np` inside functions to be differentiated
 - Forward Mode Automatic Differentiation (AD) using Hyper-Dual Tensors, up to second order derivatives
Binary file removed docs/_static/logo.png
195 changes: 0 additions & 195 deletions docs/_static/logo_inkscape.svg

This file was deleted.

Binary file removed docs/_static/logo_without_text.png
Binary file removed docs/_static/logo_without_text_hires.png
6 changes: 1 addition & 5 deletions docs/conf.py
@@ -117,12 +117,8 @@
         },
     ],
     "use_edit_page_button": True,
-    "logo": {
-        "text": "tensortrax",
-        "image_light": "logo_without_text.png",
-        "image_dark": "logo_without_text.png",
-    },
 }

 html_context = {
     "github_user": "adtzlr",
     "github_repo": "tensortrax",
43 changes: 11 additions & 32 deletions docs/index.rst
@@ -1,29 +1,16 @@
-.. figure:: _static/logo.png
-   :align: center
+tensortrax documentation
+========================

-Differentiable Tensors based on NumPy Arrays.
-
-Documentation
-=============
-
-.. admonition:: Highlights
-
-   - Write differentiable code with Tensors based on NumPy arrays
-   - Efficient evaluation of batches by elementwise-operating trailing axes
-   - Essential vector/tensor Hyper-Dual number math, including limited support for ``einsum`` (restricted to max. three operands)
-   - Math is limited but similar to NumPy, try to use ``import tensortrax.math as tm`` instead of ``import numpy as np`` inside functions to be differentiated
-   - Forward Mode Automatic Differentiation (AD) using Hyper-Dual Tensors, up to second order derivatives
-   - Create functions in terms of Hyper-Dual Tensors
-   - Evaluate the function, the gradient (jacobian) and the hessian of scalar-valued functions or functionals on given input arrays
-   - Straight-forward definition of custom functions in variational-calculus notation
-   - Stable gradient and hessian of eigenvalues obtained from ``eigvalsh`` in case of repeated equal eigenvalues
-
-Motivation
+Highlights
 ----------
-Gradient and hessian evaluations of functions or functionals based on tensor-valued input arguments are a fundamental repetitive and error-prone task in constitutive hyperelastic material formulations used in continuum mechanics of solid bodies. In the worst case, conceptual ideas are impossible to pursue because the required tensorial derivatives are not readily achievable. The Hyper-Dual number approach enables a generalized and systematic way to overcome this deficiency [2]_. Compared to existing Hyper-Dual Number libraries ([3]_, [4]_) which introduce a new (hyper-) dual ``dtype`` (treated as ``dtype=object`` in NumPy), ``tensortrax`` relies on its own ``Tensor`` class. This approach involves a re-definition of all essential math operations (and NumPy-functions), whereas the ``dtype``-approach supports most basic math operations out of the box. However, in ``tensortrax``, NumPy and all its underlying linear algebra functions operate on default data types (e.g. ``dtype=float``). This allows support of functions like ``np.einsum()``. Beside the differences concerning the underlying ``dtype``, ``tensortrax`` is formulated on easy-to-understand (tensorial) calculus of variation. Hence, gradient- and hessian-vector products are evaluated with very little overhead compared to analytic formulations.
-
-.. important::
-   Please keep in mind that ``tensortrax`` is not imitating a 100% full-featured NumPy, e.g. like https://github.com/HIPS/autograd [1]_. No arbitrary-order gradients or gradients-of-gradients are supported. The capability is limited to first- and second order gradients of a given function. Also, ``tensortrax`` provides no support for ``dtype=complex`` and ``out``-keywords are not supported.
+- Designed to operate on input arrays with (elementwise-operating) trailing axes
+- Essential vector/tensor Hyper-Dual number math, including limited support for ``einsum`` (restricted to max. three operands)
+- Math is limited but similar to NumPy, try to use ``import tensortrax.math as tm`` instead of ``import numpy as np`` inside functions to be differentiated
+- Forward Mode Automatic Differentiation (AD) using Hyper-Dual Tensors, up to second order derivatives
+- Create functions in terms of Hyper-Dual Tensors
+- Evaluate the function, the gradient (jacobian) and the hessian of scalar-valued functions or functionals on given input arrays
+- Straight-forward definition of custom functions in variational-calculus notation
+- Stable gradient and hessian of eigenvalues obtained from ``eigvalsh`` in case of repeated equal eigenvalues

 Installation
 ------------
@@ -62,7 +49,6 @@ To install optional dependencies as well, add ``[all]`` to the install command:
    :caption: Contents:

    examples/index
-   knowledge
    tensortrax

 License
@@ -76,13 +62,6 @@ This program is distributed in the hope that it will be useful, but WITHOUT ANY

 You should have received a copy of the GNU General Public License along with this program. If not, see `<https://www.gnu.org/licenses/>`_.

-References
-----------
-.. [1] D. Maclaurin, D. Duvenaud, M. Johnson and J. Townsend, *Autograd*. Online. Available: https://github.com/HIPS/autograd.
-.. [2] J. Fike and J. Alonso, *The Development of Hyper-Dual Numbers for Exact Second-Derivative Calculations*, 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition. American Institute of Aeronautics and Astronautics, Jan. 04, 2011, doi: `10.2514/6.2011-886 <https://doi.org/10.2514/6.2011-886>`_.
-.. [3] P. Rehner and G. Bauer, *Application of Generalized (Hyper-) Dual Numbers in Equation of State Modeling*, Frontiers in Chemical Engineering, vol. 3, 2021. Available: https://github.com/itt-ustutt/num-dual.
-.. [4] T. Oberbichler, *HyperJet*. Online. Available: http://github.com/oberbichler/HyperJet.
 Indices and tables
 ==================
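The Hyper-Dual number approach referenced in the Motivation text of docs/index.rst can be illustrated with a tiny self-contained scalar sketch. This is plain Python for illustration only, not tensortrax code, and the `HyperDual` class is a hypothetical name: a single evaluation carries the function value, two first-derivative parts and one second-derivative part.

```python
class HyperDual:
    """Scalar hyper-dual number f + f1*e1 + f2*e2 + f12*e1*e2 with e1**2 = e2**2 = 0."""

    def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
        self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12

    def __mul__(self, other):
        # product rule, applied once per dual part and twice for the mixed part
        return HyperDual(
            self.f * other.f,
            self.f1 * other.f + self.f * other.f1,
            self.f2 * other.f + self.f * other.f2,
            self.f12 * other.f + self.f1 * other.f2 + self.f2 * other.f1 + self.f * other.f12,
        )


# f(x) = x**3; seeding both dual parts with 1 yields f'(x) and f''(x) exactly
x = HyperDual(2.0, 1.0, 1.0, 0.0)
y = x * x * x
# y.f = 8.0, y.f1 = y.f2 = 12.0 (= 3 x**2), y.f12 = 12.0 (= 6 x)
```

Generalizing the four scalar parts to NumPy arrays with trailing batch axes is, conceptually, what the library's `Tensor` class does.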
11 changes: 0 additions & 11 deletions docs/knowledge.rst

This file was deleted.
