
Commit

Merge pull request #359 from rsokl/develop

MyGrad 2.0 pre release 🥳🥳🥳

rsokl authored Mar 3, 2021
2 parents 9a9b6f6 + e8ab7a2 commit 82b31e2
Showing 195 changed files with 15,827 additions and 4,610 deletions.
43 changes: 42 additions & 1 deletion .github/workflows/tox_run.yml
@@ -19,7 +19,7 @@ jobs:
strategy:
max-parallel: 3
matrix:
python-version: [3.6, 3.7, 3.8]
python-version: [3.7, 3.8]
fail-fast: false

steps:
@@ -55,3 +55,44 @@ jobs:
pip install tox
- name: Measure coverage
run: tox -e coverage


py39:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9"]
fail-fast: false

steps:
- uses: actions/checkout@v1
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install tox
- name: Python 3.9
run: tox -e py39

minimum_numpy:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.7"]
fail-fast: false

steps:
- uses: actions/checkout@v1
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install tox
- name: Test vs Minimum Python/Numpy
run: tox -e min_numpy
8 changes: 0 additions & 8 deletions .isort.cfg

This file was deleted.

4 changes: 2 additions & 2 deletions README.md
@@ -1,9 +1,9 @@
[![Tested with Hypothesis](https://img.shields.io/badge/hypothesis-tested-brightgreen.svg)](https://hypothesis.readthedocs.io/)
[![codecov](https://codecov.io/gh/rsokl/MyGrad/branch/master/graph/badge.svg)](https://codecov.io/gh/rsokl/MyGrad)
[![Documentation Status](https://readthedocs.org/projects/mygrad/badge/?version=latest)](https://mygrad.readthedocs.io/en/latest/?badge=latest)
[![Build Status](https://travis-ci.com/rsokl/MyGrad.svg?branch=master)](https://travis-ci.com/rsokl/MyGrad)
[![Automated tests status](https://github.com/rsokl/MyGrad/workflows/Tests/badge.svg)](https://github.com/rsokl/MyGrad/actions?query=workflow%3ATests+branch%3Amaster)
[![PyPi version](https://img.shields.io/pypi/v/mygrad.svg)](https://pypi.python.org/pypi/mygrad)
![Python version support](https://img.shields.io/badge/python-3.6%20‐%203.8-blue.svg)
![Python version support](https://img.shields.io/badge/python-3.6%20‐%203.9-blue.svg)

# [MyGrad's Documentation](https://mygrad.readthedocs.io/en/latest/)

3 changes: 2 additions & 1 deletion docs/requirements.txt
@@ -3,4 +3,5 @@ numba==0.51.2
llvmlite==0.34.0
sphinx==3.0.4
numpydoc>=1.0.0
sphinx-rtd-theme==0.5.0
sphinx-rtd-theme==0.5.0
matplotlib>=3.0.0
Binary file added docs/source/_static/meerkat.png
441 changes: 441 additions & 0 deletions docs/source/changes.rst

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -1,5 +1,4 @@
import mygrad
from mygrad import linalg

# -*- coding: utf-8 -*-
#
@@ -23,7 +22,7 @@
# -- Project information -----------------------------------------------------

project = "MyGrad"
copyright = "2018, Ryan Soklaski"
copyright = "2021, Ryan Soklaski"
author = "Ryan Soklaski"

# The short X.Y version
@@ -49,6 +48,7 @@
"sphinx.ext.viewcode",
"sphinx.ext.githubpages",
"sphinx.ext.autosummary",
"matplotlib.sphinxext.plot_directive",
"numpydoc",
]

@@ -5,14 +5,14 @@ mygrad.Tensor

.. autoclass:: Tensor


.. automethod:: __init__


.. rubric:: Methods

.. autosummary::

~Tensor.__init__
~Tensor.argmax
~Tensor.argmin
@@ -35,15 +35,15 @@ mygrad.Tensor
~Tensor.swapaxes
~Tensor.transpose
~Tensor.var









.. rubric:: Attributes

.. autosummary::

~Tensor.T
~Tensor.constant
~Tensor.creator
@@ -52,5 +52,3 @@ mygrad.Tensor
~Tensor.scalar_only
~Tensor.shape
~Tensor.size


6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.Tensor.base.rst
@@ -0,0 +1,6 @@
mygrad.Tensor.base
==================

.. currentmodule:: mygrad

.. automethod:: Tensor.base
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.Tensor.grad.rst
@@ -0,0 +1,6 @@
mygrad.Tensor.grad
==================

.. currentmodule:: mygrad

.. autoattribute:: Tensor.grad
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.Tensor.null_grad.rst
@@ -0,0 +1,6 @@
mygrad.Tensor.null\_grad
========================

.. currentmodule:: mygrad

.. automethod:: Tensor.null_grad
6 changes: 0 additions & 6 deletions docs/source/generated/mygrad.Tensor.scalar_only.rst

This file was deleted.

6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.mem_guard_off.rst
@@ -0,0 +1,6 @@
mygrad.mem_guard_off
====================

.. currentmodule:: mygrad

.. autofunction:: mem_guard_off
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.mem_guard_on.rst
@@ -0,0 +1,6 @@
mygrad.mem_guard_on
===================

.. currentmodule:: mygrad

.. autofunction:: mem_guard_on
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.no_autodiff.rst
@@ -0,0 +1,6 @@
mygrad.no_autodiff
==================

.. currentmodule:: mygrad

.. autofunction:: no_autodiff
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.tensor.rst
@@ -0,0 +1,6 @@
mygrad.tensor
=============

.. currentmodule:: mygrad

.. autofunction:: tensor
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.turn_memory_guarding_off.rst
@@ -0,0 +1,6 @@
mygrad.turn_memory_guarding_off
===============================

.. currentmodule:: mygrad

.. autofunction:: turn_memory_guarding_off
6 changes: 6 additions & 0 deletions docs/source/generated/mygrad.turn_memory_guarding_on.rst
@@ -0,0 +1,6 @@
mygrad.turn_memory_guarding_on
==============================

.. currentmodule:: mygrad

.. autofunction:: turn_memory_guarding_on
32 changes: 30 additions & 2 deletions docs/source/index.rst
@@ -9,8 +9,34 @@ MyGrad is a simple, NumPy-centric math library that is capable of performing *au
mathematical functions provided by MyGrad are capable of computing their own derivatives. If you know `how to use NumPy
<https://www.pythonlikeyoumeanit.com/module_3.html>`_ then you can learn how to use MyGrad in a matter of minutes!

While fantastic auto-differentiation libraries like TensorFlow, PyTorch, and MXNet are available to the same end as
MyGrad (and far beyond, ultimately), they are industrial-grade tools in both function and form. MyGrad's primary purpose
Let's use ``mygrad`` to compute the derivative of
:math:`f(x) = x^2` evaluated at :math:`x = 3` (which is :math:`\frac{df}{dx}\rvert_{x=3} = 2\times 3`).

:class:`~mygrad.Tensor` behaves nearly identically to NumPy's ndarray, in addition to having the machinery needed to
compute the analytic derivatives of functions. Suppose we want to compute this derivative at ``x = 3``. We can create a
0-dimensional tensor (a scalar) for x and compute ``f(x)``:

.. code:: pycon

    >>> import mygrad as mg
    >>> x = mg.Tensor(3.0)
    >>> f = x ** 2
    >>> f
    Tensor(9.0)

Invoking :meth:`~mygrad.Tensor.backward` on ``f`` instructs ``mygrad`` to trace through the computational graph that produced ``f`` and compute the
derivatives of ``f`` with respect to all of its independent variables. Thus, executing ``f.backward()`` will compute :math:`\frac{df}{dx} = 2x` at :math:`x=3`, and will store the resulting value in ``x.grad``:

.. code:: pycon

    >>> f.backward()  # triggers computation of ``df/dx``
    >>> x.grad        # df/dx = 2x = 6.0
    array(6.0)

While fantastic auto-differentiation libraries like TensorFlow, PyTorch, and JAX are available to the same end as
MyGrad (and far, far beyond, ultimately), they are industrial-grade tools in both function and form. MyGrad's primary purpose
is to serve as an educational tool. It is simple to install (its only core dependency is NumPy), it is trivial to use
if you are comfortable with NumPy, and its code base is well-documented and easy to understand. This makes it simple for
students and teachers alike to use, hack, prototype with, and enhance MyGrad!
@@ -63,6 +89,8 @@ than is suggested here.
install
intro
tensor
views
performance_tips
operation
tensor_creation
tensor_manipulation
7 changes: 7 additions & 0 deletions docs/source/install.rst
@@ -20,3 +20,10 @@ navigate to the MyGrad directory, then run:
pip install .

Support for Python and NumPy
----------------------------
MyGrad abides by the `NEP 29 <https://numpy.org/neps/nep-0029-deprecation_policy.html>`_ recommendation, and adopts
a common “time window-based” policy for support of Python and NumPy versions.

Accordingly, MyGrad's drop schedule for Python and NumPy can be found `here <https://numpy.org/neps/nep-0029-deprecation_policy.html#drop-schedule>`_.
10 changes: 5 additions & 5 deletions docs/source/intro.rst
@@ -25,7 +25,7 @@ compute the analytic derivatives of functions. Suppose we want to compute this d
.. code:: pycon
>>> import mygrad as mg
>>> x = mg.Tensor(3.0)
>>> x = mg.tensor(3.0)
>>> f = x ** 2
>>> f
Tensor(9.0)
@@ -62,7 +62,7 @@ Some Bells and Whistles
.. code:: pycon
>>> import numpy as np
>>> x = mg.Tensor([2.0, 2.0, 2.0])
>>> x = mg.tensor([2.0, 2.0, 2.0])
>>> y = np.array([1.0, 2.0, 3.0])
>>> f = x ** y # (2 ** 1, 2 ** 2, 2 ** 3)
>>> f.backward()
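
The rest of this example is collapsed between hunks, so the following is a worked check rather than a quote from the documentation: since :math:`\frac{\partial}{\partial x} x^y = y\,x^{y-1}`, back-propagation fills ``x.grad`` with ``[1., 4., 12.]``, while the plain NumPy array ``y`` receives no gradient.

.. code:: pycon

    >>> x.grad  # y * x**(y-1) evaluated at x=[2., 2., 2.], y=[1., 2., 3.]
    array([ 1.,  4., 12.])
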
@@ -88,7 +88,7 @@ Advanced Example

The following is an example of using `mygrad` to compute the `hinge loss <https://en.wikipedia.org/wiki/Hinge_loss>`_ of classification scores and to "back-propagate" through (compute the gradient of) this loss. This example demonstrates some of mygrad's ability to perform back-propagation through broadcasted operations, basic indexing, advanced indexing, and in-place assignments.

.. code::
.. code:: pycon
>>> from mygrad import Tensor
>>> import numpy as np
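
The body of this example is collapsed between hunks; the sketch below only illustrates the kind of computation the paragraph describes. It is not the documentation's own code, and the shapes, names, and exact hinge-loss formulation are assumptions.

.. code:: pycon

    >>> import numpy as np
    >>> import mygrad as mg
    >>> scores = mg.tensor(np.random.rand(10, 4))        # 10 samples, 4 classes (illustrative sizes)
    >>> labels = np.random.randint(0, 4, size=10)        # index of the correct class per sample
    >>> rows = np.arange(10)
    >>> correct = scores[rows, labels]                   # advanced indexing into the Tensor
    >>> margins = scores - correct[:, np.newaxis] + 1.0  # broadcast the per-sample margin
    >>> margins[margins < 0] = 0                         # in-place assignment through a boolean mask
    >>> margins[rows, labels] = 0                        # the correct class incurs no loss
    >>> loss = margins.sum() / 10
    >>> loss.backward()                                  # populates scores.grad
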
@@ -115,8 +115,8 @@ MyGrad provides the capability to visually render diagrams of your computational
import mygrad as mg
from mygrad.computational_graph import build_graph
x = mg.Tensor(2)
y = mg.Tensor(3)
x = mg.tensor(2)
y = mg.tensor(3)
f = x * y
g = f + x - 2
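
The call that actually renders the diagram is collapsed below this hunk. A hedged sketch of how the imported ``build_graph`` might be invoked (the ``names=locals()`` keyword is an assumption about its signature, used here only to suggest how nodes could be labeled):

.. code:: python

    build_graph(g, names=locals())  # assumed invocation: render the graph that produced ``g``
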
33 changes: 0 additions & 33 deletions docs/source/operation.rst
@@ -18,39 +18,6 @@ node then back-propagates to any Operation-instance that is recorded
as its creator, and so on.


Explaining Scalar-Only Operations
----------------------------------
MyGrad only supports gradients whose elements have a one-to-one correspondence
with the elements of their associated tensors. That is, if ``x`` is a shape-(4,)
tensor:

.. math::

    x = [x_0, x_1, x_2, x_3]

then the gradient, with respect to ``x``, of the terminal node of our computational graph (:math:`l`) must
be representable as a shape-(4,) tensor whose elements correspond to those of ``x``:

.. math::

    \nabla_{x}{l} = [\frac{dl}{dx_0}, \frac{dl}{dx_1}, \frac{dl}{dx_2}, \frac{dl}{dx_3}]

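As a concrete instance of this requirement (an illustrative case, not part of the original text): if the terminal node were :math:`l = \sum_i x_i^2`, its gradient with respect to ``x`` is again a shape-(4,) object,

.. math::

    \nabla_{x}{l} = [2x_0, 2x_1, 2x_2, 2x_3]
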
If an operation class has ``scalar_only=True``, then the terminal node of a
computational graph involving that operation can only trigger back-propagation
from a 0-dimensional tensor (i.e. a scalar). This is ``False`` for operations that
manifest as trivial element-wise operations over tensors. In such cases, the
gradient of the operation can also be treated element-wise, and thus be computed
unambiguously.

The matrix-multiplication operation, for example, is a scalar-only operation because
computing the derivative of :math:`F_{ik} = \sum_{j}{A_{ij} B_{jk}}` with respect
to each element of :math:`A` produces a 3-tensor: :math:`\frac{d F_{ik}}{d A_{iq}}`, since each element
of :math:`F` depends on *every* element in the corresponding row of :math:`A`.
This is the case unless the terminal node of this graph is eventually reduced (via summation, for instance) to a
scalar, :math:`l`, in which case the elements of the 2-tensor :math:`\frac{dl}{dA_{pq}}` have a trivial one-to-one
correspondence to the elements of :math:`A_{pq}`.
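
To make this derivation concrete (a worked check using only the definitions above, not text from the original file): differentiating :math:`F_{ik} = \sum_{j} A_{ij} B_{jk}` gives

.. math::

    \frac{\partial F_{ik}}{\partial A_{pq}} = \delta_{ip} B_{qk},
    \qquad
    \frac{dl}{dA_{pq}} = \sum_{i,k} \frac{dl}{dF_{ik}} \frac{\partial F_{ik}}{\partial A_{pq}} = \sum_{k} \frac{dl}{dF_{pk}} B_{qk},

so once the graph terminates in a scalar :math:`l`, the gradient :math:`\nabla_{A} l = (\nabla_{F} l)\,B^\top` is an ordinary 2-tensor with the same shape as :math:`A`.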


Documentation for mygrad.Operation
----------------------------------
