Student Feedback - Chapter 17 #410

Open
jasonjabbour opened this issue Aug 27, 2024 · 1 comment
Labels: improvement (Improve existing content)

@jasonjabbour
Collaborator

Chapter 17 - Robust AI

  • First and foremost, this chapter was incredibly long -- nearly double the size of some of the other lengthier chapters in this book. It was so much material that it was hard for us to fully grasp what we were reading. We would recommend cutting it down immensely. Some ways to do that are limiting the number of examples in 17.2 Real-World Examples (one per section would work fine) and trimming the paragraph-length bullet-point text in 17.4.2 Data Poisoning, 17.4.3 Distribution Shifts, 17.4.4 Detection and Mitigation, 17.5 Software Faults, and 17.6.3 Software-based Fault Injection Tools. The case studies also felt like too much to add to this already lengthy chapter, so cutting those could be beneficial.
  • Again, Case Study naming/formatting should be standardized across all chapters.
  • In keeping with the other chapters' formats, 17.2 Real-World Examples should probably be renamed "Historical Precedent." Also, splitting this section into Cloud/Edge/Embedded subsections felt unnecessary and only made it longer.
  • The figure in 17.3.1 Transient Faults does not have a number/name assigned to it.
  • We felt that including a picture of a slideshow in Figure 17.13 was distracting -- perhaps cut it, or replace it with an actual image of the visualization.
  • We didn't understand Figures 17.15, 17.38, and 17.39.
  • "Adversarial Attacks" in 17.4.1 Adversarial Attacks has already been defined earlier in the book and does not warrant another definition. The same can be said for "Generative Adversarial Networks" in the same section.
  • A lot of 17.4.2 Data Poisoning is a reiteration of Chapter 12, and therefore can be cut.

Originally posted by @sgiannuzzi39 in #256 (comment)

@jasonjabbour jasonjabbour self-assigned this Aug 27, 2024
@jasonjabbour jasonjabbour added the improvement Improve existing content label Aug 27, 2024
@jasonjabbour
Collaborator Author

Additional Feedback to Address:

  • Discuss a more AI-oriented example of an SDC or an attack.

  • Above Figure 17.2, the text reads "SDCS" for silent data corruptions; shouldn't it be "SDCs"?

  • Add to 17.3 that these faults can also happen as a result of an attack (e.g., Rowhammer).

  • Above Figure 17.6 it mentions “a significant difference in the gradient norm,” and I am not sure the reader at this point would understand what that means in terms of concrete consequences (see the first sketch after this list for one concrete reading).

  • The text right above the start of 17.3.2 makes it seem like BNNs are a solution to the bit-flip problem, but that's not what you mean: “Networks [BNNs] (Courbariaux et al. 2016) have emerged as a promising solution”.

  • Question: Would different types of AI models have different levels of vulnerability to the different patterns of errors? Is a CNN more vulnerable than a DNN?

  • At times the chapter feels too general-purpose and not specific enough to AI.

  • The third paragraph above Figure 17.13 is a bit odd: it refers to TMR once and then only discusses the drawbacks of DMR.

  • Maybe include more details about Google's SDC checker? What is it? How does it know when there is an SDC?

  • Typo in the Greybox Attack bullet point in Section 17.4.1 (“black black-box box grey-boxes”).

  • Do you differentiate between AI and Machine learning early on? Here you seem to be using the two interchangeably.

  • Mention Nightshade to Vijay to see if he wants to incorporate it (he cites it as by Tome); he may be able to reference the most recent published version of the work:
    Shan, S., Ding, W., Passananti, J., Wu, S., Zheng, H., & Zhao, B. Y. (2024, May). Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models. In 2024 IEEE Symposium on Security and Privacy (SP) (pp. 212-212). IEEE Computer Society.

  • Iris Bahar wrote some work that shows the vulnerability of object-identification models to slight perturbations. Maybe you could incorporate her work somewhere. The citation is:
    X. Chen et al., "GRIP: Generative Robust Inference and Perception for Semantic Robot Manipulation in Adversarial Environments," 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 2019, pp. 3988-3995, doi: 10.1109/IROS40897.2019.8967983.

  • The Scope and Knowledge bullet points seem out of place in the list of attack examples in Section 17.4.2.

  • Some of my colleagues at CU are working on integrating control systems with ML models to defend against some of the attacks discussed. I believe this is an emerging topic in the controls community. I can get you more information on this if you are interested.

  • I don’t think that Figure 17.26 is what it is described as.

  • The Distribution Shifts section seems out of place. It is unclear how the shift characteristics (first bulleted list) and the manifestation forms (second bulleted list) are different; they sound about the same as the list above them. (For what detecting a shift can concretely look like, see the second sketch after this list.)

  • Is “uncertainty quantification techniques” a generally well-understood concept? I know about it because I work closely with a colleague in CS who does this, but outside of that I would not have known anything about it; that may just be a gap on my part. (See the third sketch after this list for a small concrete example.)
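
To make the gradient-norm bullet concrete, here is a minimal sketch (not from the chapter; the tiny NumPy linear-regression model, the flip_bit helper, and the choice of bit 30 are all illustrative assumptions): a single flipped exponent bit in one float32 weight inflates that weight, and with it the loss-gradient norm, by many orders of magnitude, which is the kind of “significant difference in the gradient norm” a fault detector could look for.

```python
# Illustrative sketch only: how one exponent-bit flip in a float32 weight
# shows up as a huge change in the gradient norm of a tiny linear model.
import numpy as np

def flip_bit(value, bit):
    """Flip one bit of a float32 value by viewing its raw 32-bit pattern."""
    bits = np.float32(value).view(np.uint32)
    return (bits ^ np.uint32(1 << bit)).view(np.float32)

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 8))                           # small input batch
y = rng.normal(size=32)                                # regression targets
w = rng.normal(scale=0.1, size=8).astype(np.float32)   # float32 weights, as deployed

def grad_norm(weights):
    """L2 norm of the mean-squared-error gradient with respect to the weights."""
    residual = x @ weights.astype(np.float64) - y      # float64 math keeps the blow-up finite
    grad = 2.0 * x.T @ residual / len(y)
    return float(np.linalg.norm(grad))

w_faulty = w.copy()
w_faulty[3] = flip_bit(w_faulty[3], bit=30)            # flip the top exponent bit of one weight

print("weight before -> after flip:", w[3], "->", w_faulty[3])
print("gradient norm (clean): ", grad_norm(w))
print("gradient norm (faulty):", grad_norm(w_faulty))
```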
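
For the distribution-shift bullet, a minimal sketch of what detecting a shift can concretely mean (the synthetic training/serving data are made up for illustration, and SciPy's two-sample Kolmogorov-Smirnov test is just one reasonable choice of detector): compare a feature's training-time distribution against its serving-time distribution and flag the model when they diverge.

```python
# Illustrative sketch only: flag covariate shift by comparing a feature's
# training-time distribution with its serving-time distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # reference (training) data
serve_feature = rng.normal(loc=0.5, scale=1.2, size=1000)   # shifted production data

stat, p_value = ks_2samp(train_feature, serve_feature)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
if p_value < 0.01:
    print("Feature distribution has shifted; flag the model for review.")
```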
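
For the uncertainty-quantification bullet, a minimal sketch of one common technique, using the disagreement of a small bootstrap ensemble as the uncertainty score (the polynomial models, the data, and the helper names fit_poly / predict_with_uncertainty are all assumptions for illustration): inputs far from the training range produce larger disagreement, which is the signal these techniques expose.

```python
# Illustrative sketch only: ensemble disagreement as an uncertainty estimate.
import numpy as np

rng = np.random.default_rng(2)
x_train = rng.uniform(-1, 1, size=200)
y_train = np.sin(3 * x_train) + rng.normal(scale=0.1, size=200)

def fit_poly(x, y, degree=5):
    """Fit one ensemble member: a low-degree polynomial regressor."""
    return np.polyfit(x, y, degree)

# Bootstrap ensemble: each member is trained on a different resample of the data.
ensemble = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), size=len(x_train))
    ensemble.append(fit_poly(x_train[idx], y_train[idx]))

def predict_with_uncertainty(x_query):
    """Return the ensemble's mean prediction and its spread (uncertainty score)."""
    preds = np.array([np.polyval(coeffs, x_query) for coeffs in ensemble])
    return preds.mean(), preds.std()

for x_query in (0.2, 2.5):  # in-distribution vs. far outside the training range
    mean, std = predict_with_uncertainty(x_query)
    print(f"x = {x_query}: prediction = {mean:+.2f}, uncertainty = {std:.2f}")
```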
