
fix: migrating from contrib to keras regularizer #114

Open · wants to merge 2 commits into master
Conversation


@Rishav1 commented on Jul 23, 2020

The example was outdated; fixed it to use the Keras L2 regularizer.
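The change being described can be sketched as follows. This is a hypothetical minimal example (variable names and the scale value are illustrative, not taken from the actual diff): the `tf.contrib` regularizer API was removed in TF 2.x, and the Keras API is the replacement.

```python
import tensorflow as tf

l2_scale = 0.01  # hypothetical regularization strength

# Old (TF 1.x contrib API, removed in TF 2.x):
#   regularizer = tf.contrib.layers.l2_regularizer(scale=l2_scale)

# New (Keras API):
regularizer = tf.keras.regularizers.l2(l2_scale)

# Applying the regularizer to a weight tensor yields the penalty
# l2_scale * sum(w ** 2).
w = tf.constant([1.0, 2.0])
penalty = regularizer(w)  # 0.01 * (1 + 4) = 0.05
```

The Keras regularizer can be passed directly to layers via the `kernel_regularizer` argument, which is the usual idiom in TF 2.x examples.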

@googlebot added the cla: yes label on Jul 23, 2020
@Rishav1 force-pushed the fix_mnist_lr_example branch from 9104a53 to f0cbcbb on Jul 23, 2020 15:42
@@ -169,14 +165,17 @@ def print_privacy_guarantees(epochs, batch_size, samples, noise_multiplier):
np.linspace(20, 100, num=81)])
delta = 1e-5
for p in (.5, .9, .99):
steps = math.ceil(steps_per_epoch * p) # Steps in the last epoch.
coef = 2 * (noise_multiplier * batch_size)**-2 * (
@Rishav1 (Author) commented on the diff:
Not sure the first term in coef is correct here:

  • The Lipschitz constant (using data_l2_norm) is missing.
  • Why is batch_size here? If the RDP result for the subsampled Gaussian mechanism from table 1 of this work is being used, shouldn't q = batch_size / steps_per_epoch rather than q = 1 / batch_size?
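For reference, in the subsampled Gaussian analysis the sampling ratio q is usually defined as the probability that a given example lands in a batch, i.e. batch_size divided by the dataset size. A minimal numeric sketch, with all values hypothetical:

```python
# Hypothetical values for illustration only (e.g. MNIST-sized dataset).
samples = 60000
batch_size = 250

# Standard sampling ratio for the subsampled Gaussian mechanism:
# the probability a given example is included in a batch.
q = batch_size / samples

# Ratio implied by the (noise_multiplier * batch_size)**-2 factor in coef:
q_in_code = 1 / batch_size

print(q, q_in_code)
```

The two values differ whenever batch_size ** 2 != samples, which is the crux of the question raised above.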

Labels: cla: yes
2 participants