fix: migrating from contrib to keras regularizer #114

Open · wants to merge 2 commits into master
31 changes: 15 additions & 16 deletions tutorials/mnist_lr_tutorial.py
@@ -39,11 +39,6 @@
from tensorflow_privacy.privacy.analysis.rdp_accountant import get_privacy_spent
from tensorflow_privacy.privacy.optimizers import dp_optimizer

if LooseVersion(tf.__version__) < LooseVersion('2.0.0'):
GradientDescentOptimizer = tf.train.GradientDescentOptimizer
else:
GradientDescentOptimizer = tf.optimizers.SGD # pylint: disable=invalid-name

FLAGS = flags.FLAGS

flags.DEFINE_boolean(
@@ -66,10 +61,10 @@ def lr_model_fn(features, labels, mode, nclasses, dim):
logits = tf.layers.dense(
inputs=input_layer,
units=nclasses,
kernel_regularizer=tf.contrib.layers.l2_regularizer(
scale=FLAGS.regularizer),
bias_regularizer=tf.contrib.layers.l2_regularizer(
scale=FLAGS.regularizer))
kernel_regularizer=tf.keras.regularizers.l2(
l=FLAGS.regularizer),
bias_regularizer=tf.keras.regularizers.l2(
l=FLAGS.regularizer))
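
One thing worth double-checking in this substitution (a hedged aside, not part of the diff): tf.contrib.layers.l2_regularizer is built on tf.nn.l2_loss, which halves the sum of squares, while tf.keras.regularizers.l2 multiplies the plain sum of squares, so the same numeric argument may yield a penalty twice as large. A minimal comparison, assuming a TF 1.x environment where both APIs still resolve (the weight tensor and the 0.01 value are arbitrary):

```python
import numpy as np
import tensorflow as tf

# Arbitrary weights, only for comparing the two penalties.
w = tf.constant(np.arange(6, dtype=np.float32).reshape(2, 3))

old_reg = tf.contrib.layers.l2_regularizer(scale=0.01)  # 0.01 * sum(w**2) / 2
new_reg = tf.keras.regularizers.l2(l=0.01)              # 0.01 * sum(w**2)

with tf.Session() as sess:
  old_val, new_val = sess.run([old_reg(w), new_reg(w)])
  print('contrib penalty:', old_val, 'keras penalty:', new_val)
```

If the intent is to keep training behavior identical, it may be worth halving the value passed to the keras regularizer, or at least verifying empirically that the difference does not matter here.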

# Calculate loss as a vector (to support microbatches in DP-SGD).
vector_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
@@ -91,7 +86,8 @@ def lr_model_fn(features, labels, mode, nclasses, dim):
learning_rate=FLAGS.learning_rate)
opt_loss = vector_loss
else:
optimizer = GradientDescentOptimizer(learning_rate=FLAGS.learning_rate)
optimizer = tf.train.GradientDescentOptimizer(
learning_rate=FLAGS.learning_rate)
opt_loss = scalar_loss
global_step = tf.train.get_global_step()
train_op = optimizer.minimize(loss=opt_loss, global_step=global_step)
@@ -169,14 +165,17 @@ def print_privacy_guarantees(epochs, batch_size, samples, noise_multiplier):
np.linspace(20, 100, num=81)])
delta = 1e-5
for p in (.5, .9, .99):
steps = math.ceil(steps_per_epoch * p) # Steps in the last epoch.
coef = 2 * (noise_multiplier * batch_size)**-2 * (
    # Accounting for privacy loss
    (epochs - 1) / steps_per_epoch +  # ... from all-but-last epochs
    1 / (steps_per_epoch - steps + 1))  # ... due to the last epoch

Author commented on the coef expression above:

Not sure if the first term in coef is correct here:

  • The Lipschitz constant (using data_l2_norm) is missing.
  • Why is batch_size here? If the RDP of the Subsampled Gaussian Mechanism result from Table 1 of this work is being used, shouldn't q = batch_size / steps_per_epoch instead of q = 1 / batch_size?
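
A possible reading of the questioned constant, sketched below as a hedged aside (the sensitivity and noise-scale assumptions are mine, not confirmed by the tutorial): if the per-step L2 sensitivity were 2 * data_l2_norm / batch_size and the Gaussian noise standard deviation were noise_multiplier * data_l2_norm, then the Gaussian-mechanism RDP bound alpha * sensitivity**2 / (2 * sigma**2) would have the data_l2_norm factor cancel, leaving exactly coef * alpha per step.

```python
# Hedged sketch: one possible derivation of coef (assumptions, not the
# tutorial's confirmed analysis). Values below are arbitrary.
noise_multiplier, batch_size, data_l2_norm, alpha = 1.1, 256, 8.0, 32.0

sensitivity = 2 * data_l2_norm / batch_size          # assumed per-step L2 sensitivity
sigma = noise_multiplier * data_l2_norm              # assumed Gaussian noise std
rdp_explicit = alpha * sensitivity**2 / (2 * sigma**2)

coef = 2 * (noise_multiplier * batch_size)**-2       # shortcut used in the tutorial
rdp_shortcut = coef * alpha

# Under these assumptions data_l2_norm cancels and the two expressions agree.
assert abs(rdp_explicit - rdp_shortcut) < 1e-12
print(rdp_explicit, rdp_shortcut)
```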
steps = math.ceil(steps_per_epoch * p) # Steps in the last epoch
# compute rdp coeff for a single differing batch
coeff = 2 * (noise_multiplier * batch_size)**-2
# amplification by iteration from all-but-last-epochs
amp_part1 = (epochs - 1) / steps_per_epoch
# min amplification by iteration for at least p items due to last epoch
amp_part2 = 1 / (steps_per_epoch - steps + 1)
# compute rdp of output model
rdp = [coeff * order * (amp_part1 + amp_part2) for order in orders]
# Using RDP accountant to compute eps. Doing computation analytically is
# an option.
rdp = [order * coef for order in orders]
eps, _, _ = get_privacy_spent(orders, rdp, target_delta=delta)
print('\t{:g}% enjoy at least ({:.2f}, {})-DP'.format(
p * 100, eps, delta))
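
For reference, a minimal standalone use of the same rdp_accountant conversion (a sketch; the RDP curve here is for a single plain Gaussian mechanism with noise multiplier 1.0, chosen only for illustration):

```python
import numpy as np
from tensorflow_privacy.privacy.analysis.rdp_accountant import get_privacy_spent

orders = np.linspace(2, 64, num=63)
noise_multiplier = 1.0
# RDP of one Gaussian mechanism invocation with unit sensitivity: alpha / (2 * sigma**2).
rdp = orders / (2 * noise_multiplier**2)

eps, _, opt_order = get_privacy_spent(orders, rdp, target_delta=1e-5)
print('eps = {:.2f} at order {:.1f}'.format(eps, opt_order))
```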