QHBMs as second argument in VQT and QMHL #114
Conversation
Thank you for getting this out quick, Sahil! VQT LGTM from a correctness perspective; I still need to check QMHL and the tests. Sharing my current slate of comments in parallel with reviewing those.
def sample(self, num_samples, mask=True, reduce=True, unique=True):
  bitstrings, counts = self.ebm.sample(num_samples)
  return self.qnn.sample(
      bitstrings, counts, mask=mask, reduce=reduce, unique=unique)
def expectation(self, operators, num_samples, reduce=True):
def expectation(self, operators, num_samples, mask=True, reduce=True):
  """TODO: add gradient function"""
For the TODO, it should be a normal comment with the format:
# TODO(#xxx): add gradient function
where xxx is the relevant GH issue number. I know this isn't necessarily done correctly in the rest of the repo.
@@ -111,10 +158,12 @@ def density_matrix(self):

def fidelity(self, sigma: tf.Tensor):
  """TODO: convert to tf.keras.metric.Metric
undo diff to l161, l166?
bitstrings, counts = self.ebm.sample(num_samples)
circuits = self.qnn.circuits(bitstrings)
return circuits, counts
def circuits(self, num_samples, unique=True, resolve=True):
Could we get a docstring which also explains what `unique` and `resolve` do?
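A hedged sketch of what such a docstring could look like; the described semantics of `unique` and `resolve` are inferred from the surrounding diff, not confirmed against the library:

```python
def circuits(self, num_samples, unique=True, resolve=True):
  """Draws bitstrings from the EBM and returns the corresponding circuits.

  Args:
    num_samples: Number of bitstrings to sample from the EBM.
    unique: If True, deduplicate the sampled bitstrings and return one
      circuit per unique bitstring along with its count. (Assumed behavior.)
    resolve: If True, resolve the QNN's symbols to their current parameter
      values in the returned circuits. (Assumed behavior.)

  Returns:
    circuits: 1-D tensor of serialized circuits.
    counts: 1-D tensor of the number of times each circuit was sampled.
  """
```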
    symbol_values=None,
    reduce=True,
    mask=True):
  """TODO: add gradient function"""
Same here re: the TODO comment format.
"""Calculate the QMHL loss of the qhbm model against the target.

This loss is differentiable with respect to the trainable variables of the
model.

Args:
  qhbm_model: Parameterized model density operator.
  target_circuits: 1-D tensor of strings which are serialized circuits.
    These circuits represent samples from the data density matrix.
  target_circuits: 1-D tensor of strings which are serialized circuits. These
Update the docstring since the args changed.
def vqt(model, num_samples, hamiltonian, beta):
def vqt(qhbm_model, hamiltonian, beta=1.0, num_samples=1000):
  """Computes the VQT loss of a given QHBM against given thermal state params.
Update the docstring to reflect the new usage.
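A hedged sketch of an updated docstring for the new signature; the `hamiltonian` and `num_samples` descriptions are assumptions for illustration, not the authors' wording:

```python
def vqt(qhbm_model, hamiltonian, beta=1.0, num_samples=1000):
  """Computes the VQT loss of a given QHBM against given thermal state params.

  This loss is differentiable with respect to the trainable variables of the
  model.

  Args:
    qhbm_model: Parameterized model density operator.
    hamiltonian: Hamiltonian whose thermal state the model should learn.
      (Assumed description.)
    beta: Inverse temperature of the target thermal state. (Assumed.)
    num_samples: Number of samples drawn from the model when estimating the
      loss. (Assumed description.)
  """
```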
  beta_expectations = beta * hamiltonian.ebm.operator_expectation(
      expectation_shards)
else:
  raise NotImplementedError()
raise NotImplementedError("explanation...")
@@ -185,21 +196,23 @@ def _expectation_function(self, circuits, counts, operators, reduce=True):
  """
  num_circuits = tf.shape(circuits)[0]
  num_operators = tf.shape(operators)[0]
  tiled_values = tf.tile(tf.expand_dims(self.values, 0), [num_circuits, 1])
  tiled_operators = tf.tile(tf.expand_dims(operators, 0), [num_circuits, 1])
  if symbol_values is not None:
Check if `symbol_names` is None and use `self.symbols` by default; similarly for `symbol_values` with `self.values`.
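The suggested fallback could look like the following sketch; `MiniQNN`, `self.symbols`, and `self.values` are toy stand-ins here, not the actual qhbm-library API:

```python
class MiniQNN:
  """Toy stand-in showing the None-check fallback suggested above."""

  def __init__(self, symbols, values):
    self.symbols = symbols  # the model's own symbol names
    self.values = values    # the model's current parameter values

  def expectation(self, symbol_names=None, symbol_values=None):
    # Fall back to the model's own symbols/values when the caller
    # passes None, per the review suggestion.
    if symbol_names is None:
      symbol_names = self.symbols
    if symbol_values is None:
      symbol_values = self.values
    return symbol_names, symbol_values
```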
expectation_shards = qhbm_model.qnn.pulled_back_expectation(
    circuits, counts, qhbm_model.operator_shards)
expectation = qhbm_model.ebm.operator_expectation(expectation_shards)
else:
I believe this branch is untested, so we should either test it or remove it (adding a TODO for once we're ready for MLP support).
      num_samples=tf.constant(int(5e6)))
  gradient = tape.gradient(loss, test_qhbm.trainable_variables)
  for grad in gradient:
    self.assertAllClose(grad, tf.zeros_like(grad), rtol=RTOL)
Can we get a non-zero test, e.g. by borrowing from one of the others and setting the model Hamiltonian to match the borrowed test Hamiltonian?
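The shape of such a non-zero check, sketched with a toy quadratic loss standing in for the VQT loss (the function names and finite-difference helper are illustrative only, not the repo's test utilities):

```python
def toy_loss(theta, target=1.0):
  # Stand-in for the VQT loss: minimized when the model parameter
  # matches the target Hamiltonian's parameter.
  return (theta - target) ** 2

def numerical_grad(f, x, eps=1e-6):
  # Central finite difference, standing in for tape.gradient.
  return (f(x + eps) - f(x - eps)) / (2 * eps)

# Existing style of test: gradient vanishes when the model already matches.
grad_at_optimum = numerical_grad(toy_loss, 1.0)

# Suggested addition: deliberately mismatch the model so the expected
# gradient is non-zero and its sign/magnitude can be asserted.
grad_off_optimum = numerical_grad(toy_loss, 0.0)  # analytically -2.0
```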
circuits = self.qnn.circuits(bitstrings)
return circuits, counts
def circuits(self, num_samples, unique=True, resolve=True):
  if unique:
If `unique=False` has no actively supported use case, then let's remove this feature addition?