This repository has been archived by the owner on Aug 18, 2023. It is now read-only.

QHBM Library 0.3.0

@zaqqwerty released this 25 Feb 21:25 · 19 commits to main since this release · commit 9028458

Overall, this release is a rewrite of most of the library. These changes realize significant increases in code simplicity, modularity, and generality.

Stages in QHBM Library development

The changes can be understood by reviewing the history of the structure of the library:

Version 0.1.0

  • This initial version of the library had all EBM and QNN functionality embedded in one large QHBM class. This kitchen-sink class separately managed each component of a QHBM: the EBM and its parameters thetas, and the QNN and its parameters phis. Only a single 1D tf.Variable could be used for each of thetas and phis. The class was a direct lifting of the mathematical definition of a QHBM into code.
  • There was no way to use tf.GradientTape() to take derivatives of the VQT and QMHL losses; instead, there were separate functions for the forward pass and its derivative, which the user had to call manually to implement gradient descent.

Versions 0.2.0 and 0.2.1

  • Unbundled EBMs and QNNs from the QHBM class to be viable on their own. This simplified the construction of QHBMs, since EBMs and QNNs could now be built individually, then composed to form a QHBM. However, building new EBMs was still a heavy process; a whole new class had to be implemented for each energy function the user might wish to define.
  • Here we upgraded the VQT and QMHL losses into functions which could be differentiated using standard tf.GradientTape() methods. However, their implementations were not very general; for example, VQT could only accept Hamiltonians specified as cirq.PauliSum. There are many cases where it is desirable to instead express observables directly in terms of a quantum circuit plus a function defining the spectrum of the observable.

Version 0.3.0

  • This release takes inspiration from the distinction between model and inference to further modularize and simplify the structure of the code. Given some problem to solve in machine learning, here is how we think about the distinction:
    • model: a representation of some structure assumed to be present in the problem. For example, when trying to learn a probability distribution, an energy function is a model: the structure it represents is the relative probabilities of samples from the probability distribution you are trying to learn. Models are typically parameterized so that they can be updated to better represent the actual structure present in the problem.
    • inference: a process for obtaining results from a model. For example, MCMC is a process which obtains samples from the probability distribution corresponding to an energy function.
  • To support this distinction, we now provide a models subpackage for defining energy functions and quantum circuits. Separately, we now provide an inference subpackage for defining EBMs (classes for sampling according to an energy function) and QNNs (classes for measuring observables at the output of a given quantum circuit).
  • Further theoretical analysis revealed to us that the derivatives of VQT and QMHL losses could be composed from more general derivatives of the underlying computations. This means the VQT and QMHL functions do not need to define their own custom gradients anymore. Instead, the more general derivatives of log partition functions and EBM expectation values are implemented.
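The model/inference split above can be sketched in plain Python. This is illustrative only; the names and signatures below are ours, not the library's API. A Bernoulli-style energy function plays the role of the model, while exact probability calculation and categorical sampling are two separate inference routines over it.

```python
import itertools
import math
import random

def bernoulli_energy(thetas, bits):
    """Model: energy of a bitstring under per-bit biases (spin convention s = 2b - 1)."""
    return -sum(t * (2 * b - 1) for t, b in zip(thetas, bits))

def exact_probabilities(energy, num_bits):
    """Inference: Boltzmann probabilities p(x) ~ exp(-E(x)) over all bitstrings."""
    xs = list(itertools.product((0, 1), repeat=num_bits))
    weights = [math.exp(-energy(x)) for x in xs]
    z = sum(weights)  # partition function
    return xs, [w / z for w in weights]

def sample(energy, num_bits, n, seed=0):
    """Inference: draw n samples from the same model via a categorical distribution."""
    xs, probs = exact_probabilities(energy, num_bits)
    return random.Random(seed).choices(xs, weights=probs, k=n)

thetas = [0.5, -1.0]
xs, probs = exact_probabilities(lambda x: bernoulli_energy(thetas, x), 2)
```

Swapping in a different energy function changes the model without touching either inference routine, which is the point of the separation.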

New features

Below we list in finer granularity some of the improvements made in this release.

  • models subpackage
    • Classical models
      • BitstringEnergy class, for defining an energy function as a stack of tf.keras.layers.Layer instances.
      • Two hardcoded subclasses, BernoulliEnergy for representing a tensor product of Bernoullis and KOBE for representing Kth order binary energy functions.
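As a rough illustration of what a Kth order binary energy function computes, here is a plain-Python sketch with hypothetical names (not the BitstringEnergy or KOBE API itself): a weighted sum of spin products over all index subsets up to size K.

```python
import itertools

def kobe_energy(num_bits, order, thetas, bits):
    """E(x) = sum over subsets S with 1 <= |S| <= order of theta_S * prod_{i in S} s_i,
    where s_i = 2 * x_i - 1 are spins."""
    spins = [2 * b - 1 for b in bits]
    terms = []
    for k in range(1, order + 1):
        for subset in itertools.combinations(range(num_bits), k):
            prod = 1
            for i in subset:
                prod *= spins[i]
            terms.append(prod)
    assert len(thetas) == len(terms), "one theta per interaction term"
    return sum(t * p for t, p in zip(thetas, terms))

# 2 bits at order 2: the terms are s0, s1, and s0 * s1.
value = kobe_energy(2, 2, [0.1, 0.2, 0.3], [1, 0])
```

In this picture, a tensor product of Bernoullis corresponds to order 1, where only the single-spin terms survive.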
    • Quantum models
      • QuantumCircuit class, for defining unitary transformations. This is an abstraction layer on top of TensorFlow Quantum. As a benefit of this abstraction, users can add and invert circuits while the underlying class tracks all associated variables, symbols, and circuit tensors. It can be viewed as a generalization of the PQC layer in TFQ.
      • DirectQuantumCircuit class, for automatically upgrading a parameterized cirq.Circuit into a QuantumCircuit.
    • Hybrid models
      • Hamiltonian class for representing general observables. This class accepts a BitstringEnergy to represent eigenvalues, and a QuantumCircuit to represent eigenvectors.
  • inference subpackage
    • Classical inference
      • EnergyInference is a base class for working with probability distributions corresponding to BitstringEnergy models. In other words, this class is an EBM. A critical feature is its ability to take derivatives: given a function f acting on samples x from the EBM, this class can take the derivative of f(x) with respect to both the variables of f and the variables of the BitstringEnergy defining this EBM.
      • AnalyticEnergyInference class for exact sampling from any BitstringEnergy (subject to memory constraints on the number of bits). It works by calculating the probability of every bitstring, then sampling from the resulting categorical distribution.
      • BernoulliEnergyInference class for exact sampling from BernoulliEnergy models.
      • probabilities function for calculating the vector of probabilities corresponding to a given BitstringEnergy model.
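The general EBM derivative described above can be checked numerically. For p(x) proportional to exp(-E_theta(x)), a standard identity gives d/dtheta E_p[f] = E[f] E[dE/dtheta] - E[f dE/dtheta], i.e. minus the covariance of f with dE/dtheta. The sketch below uses our own names and a hardcoded two-bit energy (not the library's EnergyInference API) and verifies the identity against finite differences.

```python
import itertools
import math

def energy(theta, x):
    """A small hardcoded two-bit energy with one trainable parameter theta."""
    s0, s1 = 2 * x[0] - 1, 2 * x[1] - 1
    return -theta * s0 + 0.7 * s0 * s1

def probs(theta):
    """Exact Boltzmann distribution over the four bitstrings."""
    xs = list(itertools.product((0, 1), repeat=2))
    w = [math.exp(-energy(theta, x)) for x in xs]
    z = sum(w)
    return xs, [wi / z for wi in w]

def grad_expectation(theta, f):
    """Analytic d/dtheta E_p[f] = E[f] * E[dE/dtheta] - E[f * dE/dtheta]."""
    xs, p = probs(theta)
    de = [-(2 * x[0] - 1) for x in xs]  # dE/dtheta for each bitstring
    ef = sum(pi * f(x) for pi, x in zip(p, xs))
    ede = sum(pi * d for pi, d in zip(p, de))
    efde = sum(pi * f(x) * d for pi, x, d in zip(p, xs, de))
    return ef * ede - efde

def fd_expectation(theta, f, eps=1e-5):
    """Finite-difference reference for the same derivative."""
    def ef(t):
        xs, p = probs(t)
        return sum(pi * f(x) for pi, x in zip(p, xs))
    return (ef(theta + eps) - ef(theta - eps)) / (2 * eps)

f = lambda x: x[0] + x[1]
analytic = grad_expectation(0.3, f)
numeric = fd_expectation(0.3, f)
```

Because the identity holds for any f acting on samples, derivatives of downstream losses can be composed from it, which is what removes the need for custom gradients in the loss functions themselves.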
    • Quantum inference
      • QuantumInference class for encapsulating the circuit, symbol name, symbol value, and observable management required for expectation calculation with TFQ. Enables measuring some kinds of Hamiltonians beyond simple cirq.PauliSums.
      • unitary function for calculating the unitary matrix represented by a given QuantumCircuit.
    • Hybrid inference
      • QHBM class. Here is where all the pieces come together. It is initialized with an EnergyInference instance and a QuantumCircuit instance, and it enables measuring the expectation values of observables against QHBMs. The modular Hamiltonian corresponding to the QHBM is a property of the class, returned as a Hamiltonian instance.
      • density_matrix function for calculating the matrix corresponding to a given QHBM, useful for numerics when the number of qubits is small enough.
      • fidelity function for calculating the fidelity between a QHBM and a density matrix. Again, useful for numerics when the number of qubits is small enough.
    • Losses
      • vqt function for calculating the VQT loss given a QHBM, a Hamiltonian, and an inverse temperature. Fully differentiable in tf.GradientTape() contexts.
      • qmhl function for calculating the QMHL loss between a QuantumData and a QHBM. Fully differentiable in tf.GradientTape() contexts.
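For intuition about what the vqt loss minimizes, here is a classical analog in plain Python (hypothetical names, not the library's vqt signature): the free energy beta * <H> - S, whose minimizer over distributions is the Gibbs distribution, attaining the value -log Z.

```python
import math

def free_energy(p, h, beta):
    """Classical analog of the VQT loss: beta * E_p[h] - S(p)."""
    avg_h = sum(pi * hi for pi, hi in zip(p, h))
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return beta * avg_h - entropy

def gibbs(h, beta):
    """The minimizer: p_i proportional to exp(-beta * h_i)."""
    w = [math.exp(-beta * hi) for hi in h]
    z = sum(w)
    return [wi / z for wi in w]

h = [0.0, 1.0, 2.0, 3.0]  # energies of a toy 4-outcome "Hamiltonian"
beta = 1.5
p_star = gibbs(h, beta)
```

In the quantum setting the average energy becomes Tr[rho H] and the entropy becomes the von Neumann entropy, but the free-energy structure, with the thermal state as minimizer, is the same.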
  • data subpackage
    • QuantumData class for encapsulating any sources of data to which we have quantum processing access.
    • QHBMData class for defining a source of quantum data as the output from a QHBM.
  • Other improvements
    • Enabled yapf lint checks for pull requests.
    • Added a nightly build publisher, triggered on merges into the main branch.
    • Updated the TFQ dependency to the latest version, 0.6.1.

Breaking changes

Note that almost all APIs have been changed and simplified; see the test cases for examples of usage. Also, keep an eye out for our upcoming baselines release (issue #197), where we will showcase some of our research code.

Further details

Full commit list: v0.2.1...v0.3.0