Merge pull request #57 from very-good-science/little-fixes
Small pre-workshop fixes
NatalieZelenka authored Sep 17, 2021
2 parents 624daf4 + 0805157 commit 3fa3997
Showing 2 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion site/conf.py
@@ -53,7 +53,7 @@
 
 html_theme_options = {
     'github_url': 'https://github.com/very-good-science/data-hazards',
-    'twitter_url': 'https://twitter.com/hashtag/DataEthicsClub',
+    'twitter_url': 'https://twitter.com/hashtag/DataHazards',
     'search_bar_text': 'Search this site...',
     'show_prev_next': False,
     "footer_items": ["license-footer", "sphinx-version"],
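For context, the sketch below shows where this `html_theme_options` block sits in a Sphinx `conf.py`. It assumes the site uses the pydata-sphinx-theme (the option names `github_url`, `twitter_url`, and `footer_items` follow that theme's conventions); the `project` and `extensions` values are illustrative placeholders, not the repository's actual settings.

```python
# conf.py -- minimal sketch, not the repository's full configuration.
# Assumes the pydata-sphinx-theme; project name and extensions are placeholders.

project = "Data Hazards"        # placeholder
extensions = ["myst_parser"]    # placeholder: Markdown sources via MyST

html_theme = "pydata_sphinx_theme"  # assumption based on the option names above

# Options are passed straight through to the theme. The commit above only
# changes the hashtag that the theme's Twitter icon links to.
html_theme_options = {
    "github_url": "https://github.com/very-good-science/data-hazards",
    "twitter_url": "https://twitter.com/hashtag/DataHazards",
    "search_bar_text": "Search this site...",
    "show_prev_next": False,
    "footer_items": ["license-footer", "sphinx-version"],
}
```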
8 changes: 4 additions & 4 deletions site/contents/materials/workshop/data-hazards.md
@@ -146,15 +146,15 @@ __Safety Precautions:__
 __Hazard: Difficult to understand__
 There is a danger that the technology is difficult to understand.
-This could be because of the technology itself is hard to interpret (e.g. neural nets), or it's implementation (i.e. code is hidden and we are not allowed to see exactly what it is doing).
+This could be because of the technology itself is hard to interpret (e.g. neural nets), or problems with it's implementation (i.e. code is not provided, or not documented).
 Depending on the circumstances of its use, this could mean that incorrect results are hard to identify, or that the technology is inaccessible to people (difficult to implement or use).
 ^^^
-__Example 1:__ Google does not make code available for many projects, from it's DeepMind AlphaFold [protein-folding research](https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology) to its' [Search Engine algorithms](https://www.searchenginejournal.com/google-algorithm-history/).
+__Example 1:__ Deep learning is used to perform [credit-scoring](https://www.moodysanalytics.com/risk-perspectives-magazine/managing-disruption/spotlight/machine-learning-challenges-lessons-and-opportunities-in-credit-risk-modeling) (i.e. could deny people credit), but it is difficult to understand (and therefore check) what these decisions are based on.
-__Example 2:__ Deep learning is used to perform [credit-scoring](https://www.moodysanalytics.com/risk-perspectives-magazine/managing-disruption/spotlight/machine-learning-challenges-lessons-and-opportunities-in-credit-risk-modeling) (i.e. could deny people credit), but it is difficult to understand (and therefore check) what these decisions are based on.
+__Example 2:__ Even when journals have a policy of having code and data availability, published researchers can be unaware of what they agreed to and resist sharing it, as [this](https://www.pnas.org/content/115/11/2584) paper surveying Science publications shows.
 +++
 __Safety Precautions:__
@@ -165,7 +165,7 @@ __Safety Precautions:__
 :img-top: /images/hazards/direct-harm.png
 __Hazard: May cause direct harm__
-The application area of this technology means that it is capable of causing direct physical harm to someone if it malfunctions, even if used correctly e.g. healthcare, driverless vehicles.
+The application area of this technology means that it is capable of causing direct physical or psychological harm to someone even if used correctly e.g. healthcare and driverless vehicles may be expected to directly harm someone unless they have 100% accuracy.
 ^^^
