Implement Cutout in Numpy, PyTorch, and TensorFlow #1850
Conversation
Codecov Report

@@             Coverage Diff              @@
##           dev_1.13.0    #1850      +/-   ##
==============================================
- Coverage       85.87%   84.55%     -1.32%
==============================================
  Files             248      251         +3
  Lines           23310    23464       +154
  Branches         4212     4244        +32
==============================================
- Hits            20017    19841       -176
- Misses           2230     2560       +330
  Partials         1063     1063
This pull request introduces 1 alert when merging 2ddd96c into 8641de9 - view on LGTM.com new alerts:
This pull request introduces 1 alert when merging ee7f856 into 8641de9 - view on LGTM.com new alerts:
This pull request introduces 1 alert when merging 0e704fc into 8641de9 - view on LGTM.com new alerts:
Force-pushed from 0e704fc to 1f12a39
This pull request introduces 1 alert when merging 1f12a39 into 89bf92f - view on LGTM.com new alerts:
This pull request introduces 1 alert when merging 7285511 into 89bf92f - view on LGTM.com new alerts:
Force-pushed from 512fc84 to 4bb08dd
This pull request introduces 2 alerts when merging 21a54c2 into 89bf92f - view on LGTM.com new alerts:
This pull request introduces 2 alerts when merging 214c2e7 into 89bf92f - view on LGTM.com new alerts:
@beat-buesser once you've taken a look, please let me know if you think there should really be a different implementation for each framework. I don't believe there is much benefit from having framework-specific implementations for PyTorch and TensorFlow, since not much of the work offers a speedup from the GPU. In fact, the bottleneck of converting between
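For context on the framework-agnostic option discussed above, here is a minimal NumPy sketch of Cutout (zeroing one random square patch per image). The function name, the `length` parameter, and the NHWC layout are illustrative assumptions, not the PR's actual code.

```python
import numpy as np

def cutout_numpy(x: np.ndarray, length: int) -> np.ndarray:
    """Apply Cutout to a batch of NHWC images by zeroing a random square patch per image.

    Illustrative sketch only; names and layout assumptions do not mirror the PR.
    """
    x_out = x.copy()
    n, height, width, _ = x.shape
    for i in range(n):
        # Sample the patch centre uniformly, then clip the patch to the image bounds.
        center_y = np.random.randint(height)
        center_x = np.random.randint(width)
        y1 = np.clip(center_y - length // 2, 0, height)
        y2 = np.clip(center_y + length // 2, 0, height)
        x1 = np.clip(center_x - length // 2, 0, width)
        x2 = np.clip(center_x + length // 2, 0, width)
        x_out[i, y1:y2, x1:x2, :] = 0.0
    return x_out
```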
This pull request introduces 2 alerts when merging 8e47a93 into 89bf92f - view on LGTM.com new alerts:
Hi @f4str That's a good question. Can we think of an application where accurate gradient back-propagation would be useful? E.g., for an adaptive poisoning attack on DP-InstaHide?
Force-pushed from 8e47a93 to 6131f95
This pull request introduces 2 alerts when merging 6131f95 into 89bf92f - view on LGTM.com new alerts:
Hi @beat-buesser thank you for the response. There may be some scenarios where accurate gradient backprop is useful. Adaptive poisoning attacks are definitely one use case, since the DP-InstaHide paper does evaluate against adaptive attacks, specifically using gradient matching (Witches' Brew). White-box evasion attacks like PGD might also be a use case, since they require full gradients with respect to the original image. However, since DP-InstaHide (and all of the data augmentation algorithms) are randomized, it is unclear how important accurate gradient backprop really is.
This pull request introduces 3 alerts when merging 99f1b53 into 89bf92f - view on LGTM.com new alerts:
This pull request introduces 5 alerts when merging 1db4e0a into 89bf92f - view on LGTM.com new alerts:
This pull request introduces 7 alerts when merging 3156f32 into 89bf92f - view on LGTM.com new alerts:
Hi @f4str The augmentations are random, but an adaptive attacker can still take advantage of accurate gradients corresponding to the respective randomly sampled augmentation.
Hi @beat-buesser that is true. In that case, it does make sense to have framework-specific implementations to ensure that gradients are accurate. Since this is already the case, I'll just continue as is.
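To illustrate the point about accurate gradients, here is a hedged PyTorch sketch of a differentiable Cutout forward pass. The function name and NCHW convention are assumptions for illustration, not the PR's implementation; the key idea is that masking by multiplication keeps the operation in the autograd graph.

```python
import torch

def cutout_pytorch(x: torch.Tensor, length: int) -> torch.Tensor:
    """Differentiable Cutout for an NCHW batch: multiply by a binary mask so that
    gradients flow back to the unmasked pixels of the original input.

    Illustrative sketch; names and conventions are assumptions, not the PR's code.
    """
    n, _, height, width = x.shape
    mask = torch.ones_like(x)
    for i in range(n):
        center_y = torch.randint(height, (1,)).item()
        center_x = torch.randint(width, (1,)).item()
        y1, y2 = max(center_y - length // 2, 0), min(center_y + length // 2, height)
        x1, x2 = max(center_x - length // 2, 0), min(center_x + length // 2, width)
        mask[i, :, y1:y2, x1:x2] = 0.0
    # The multiplication stays in the autograd graph, so an adaptive attacker
    # (e.g. PGD or gradient matching) gets exact gradients for this sampled mask.
    return x * mask
```

Calling `.backward()` on a loss computed from the output then yields zero gradients inside the cut-out region and pass-through gradients elsewhere, which is the accurate behaviour discussed above.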
Hi @f4str Thank you very much for contributing the first set of data augmentation preprocessors, and congratulations on your first contribution to ART!
Description
Implementation of the Cutout data augmentation defense in the Numpy, PyTorch, and TensorFlow frameworks.
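As a rough idea of how the new preprocessors might be used, here is a hedged usage sketch; the class name, module path, and constructor arguments are assumptions about where the PyTorch variant could live in ART, not confirmed API from this description.

```python
# Hypothetical usage; CutoutPyTorch, its module path, and the `length` argument
# are assumptions for illustration only.
import numpy as np
from art.defences.preprocessor import CutoutPyTorch

cutout = CutoutPyTorch(length=8)                      # zero out an 8x8 patch per image
x = np.random.rand(16, 32, 32, 3).astype(np.float32)  # batch of NHWC images in [0, 1]
x_aug, _ = cutout(x)                                  # ART preprocessors return (x, y)
```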
Fixes # (issue)
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
Test Configuration:
Checklist