A neural network library for learning boolean-valued, discrete functions on GPUs by gradient descent.
The library is implemented in Python using the Flax and JAX frameworks.
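To give a flavour of the approach, here is a minimal, self-contained sketch in plain JAX (not this library's API): soft relaxations of logic gates are trained by gradient descent, and the learned weights are then hardened to crisp booleans. The gate definitions, the tiny clause/OR architecture, and the hyperparameters are illustrative assumptions only.

```python
# Illustrative sketch in plain JAX (NOT this library's API): learn XOR with
# soft logic gates by gradient descent, then harden the weights to booleans.
# The gate relaxations and hyperparameters below are generic assumptions.
import jax
import jax.numpy as jnp

def soft_not(w, x):
    # w ~ 1 negates x, w ~ 0 passes it through; differentiable in w.
    return w * (1.0 - x) + (1.0 - w) * x

def soft_and(x):
    return jnp.prod(x)

def soft_or(x):
    return 1.0 - jnp.prod(1.0 - x)

def net(w, x):
    # OR over clauses, each an AND of (possibly negated) inputs.
    clauses = jnp.array([soft_and(soft_not(w[i], x)) for i in range(w.shape[0])])
    return soft_or(clauses)

def loss(params, xs, ys):
    w = jax.nn.sigmoid(params)  # keep negation weights in [0, 1]
    preds = jax.vmap(lambda x: net(w, x))(xs)
    return jnp.mean((preds - ys) ** 2)

xs = jnp.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
ys = jnp.array([0., 1., 1., 0.])  # XOR truth table

params = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (4, 2))  # 4 clauses
grad_fn = jax.jit(jax.grad(loss))
for _ in range(3000):
    params -= 0.5 * grad_fn(params, xs, ys)

# Harden: threshold the soft weights to crisp booleans. With weights in
# {0, 1} the soft gates compute exact NOT/AND/OR, so the hardened net is a
# discrete boolean function.
hard_w = (jax.nn.sigmoid(params) > 0.5).astype(jnp.float32)
print(jax.vmap(lambda x: net(hard_w, x))(xs))  # typically [0., 1., 1., 0.]
```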
Questions? Ask @Z80coder
I. Wright, "Lossless hardening with ∂𝔹 nets", in Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators, ICML 2023 Workshop, Honolulu, 2023.
Draft paper: "∂B nets: learning discrete functions by gradient descent" (April 2023).
Neural network research with the Wolfram language (30 mins).
∂B nets quick overview (30 mins).
∂B nets overview (1 hour).
The working prototype was implemented in Wolfram; the demos below are snapshots of that work in progress. (A hedged sketch of some of the operators they cover appears after the list.)
- Neural logic nets (15m)
- The Soft-NOT operator (10m)
- The Soft-AND operator (10m)
- The differentiable Hard-AND operator (17m)
- The differentiable Hard-OR operator (5m)
- The differentiable Hard-MAJORITY operator (13m)
- The hardening layer (11m)
- The hardening operation (19m)
- A classifier architecture (20m)
- Learning XOR (parity) (10m)
- Numerical regression (23m)
- If-Then-Else neuron (23m)
- Neural conditions and actions (24m)
- Neural decision lists (15m)
- Boolean logic nets and MNIST (18m)
- Neural logic nets for differentiable QL (30m)
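As a rough guide to the operators named above, the following sketch shows common differentiable relaxations in plain JAX. The helper names (`soft_and`, `hard_and`, `majority`, `ite`, `harden`) and their definitions (product AND, min AND, median MAJORITY, multiplexer If-Then-Else, 0.5-threshold hardening) are standard textbook choices assumed for illustration; the paper's operators may be defined differently.

```python
# Illustrative relaxations only; the operators in the videos above follow
# the paper's definitions, which may differ from these common choices.
import jax
import jax.numpy as jnp

def soft_and(x):   # product t-norm relaxation of AND
    return jnp.prod(x)

def hard_and(x):   # min relaxation: piecewise-linear, differentiable a.e.
    return jnp.min(x)

def soft_not(x):   # exact NOT on {0, 1}, smooth in between
    return 1.0 - x

def majority(x):   # a differentiable-a.e. MAJORITY via the median
    return jnp.median(x)

def ite(c, a, b):  # soft multiplexer: exact if-then-else for c in {0, 1}
    return c * a + (1.0 - c) * b

def harden(x):     # hardening: map soft bits in [0, 1] to crisp booleans
    return x > 0.5

x = jnp.array([0.9, 0.8, 0.2])
print(soft_and(x), hard_and(x))               # ~0.144  0.2
print(majority(x))                            # 0.8
print(harden(jnp.array([0.9, 0.2])))          # [ True False]
print(jax.grad(ite, argnums=0)(0.7, 1.0, 0.0))  # 1.0, i.e. a - b
```

Note that the condition of `ite` receives a gradient (`a - b`) as well as both branches, which is what makes a learned if-then-else trainable by gradient descent.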