diff --git a/03-Binary_data_to_computations.Rmd b/03-Binary_data_to_computations.Rmd
index 441598a..ad66681 100644
--- a/03-Binary_data_to_computations.Rmd
+++ b/03-Binary_data_to_computations.Rmd
@@ -16,8 +16,10 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 ### **CPU** - Central Processing Unit
 
 
+
 The CPU, the “Central Processing Unit”, is often called **the brain** of the computer. As this name suggests, it is one of the most important and prominent parts of the computer, performing and orchestrating computational tasks [@braunl_central_2008; @CPU_redhat; @Wikipedia_CPU_2021]. 
 
+
 The CPU is sometimes called a **processor** or **microprocessor** (however, technically, these terms include both the CPU and other elements). The CPU is often what people are referring to when they describe a **"computer chip"** (which, again, technically includes other elements) [@braunl_central_2008; @CPU_redhat; @Wikipedia_CPU_2021]. 
 
 The CPU is made up of several components, a few of which hold particular importance. We already discussed two of those components: 
@@ -38,9 +40,11 @@ Modern computers now have multiple cores. What does this mean?
 
 This means that there are multiple groups of the above components that can each process data within the same computer.  A dual core CPU is a chip with two cores. A quad-core CPU is a chip with 4 cores and so on. This allows modern computers to perform multiple tasks at the same time, instead of performing tasks sequentially. For example, a typical laptop with 4 cores nowadays can perform 4 tasks simultaneously. This ability to multitask makes our computers much faster than they used to be [@Wikipedia_CPU_2021]. 
 
+
 In addition to the main CPU (or CPUs, or cores, depending on your favorite name), computers may be equipped with specialized processors called [GPUs](https://www.intel.com/content/www/us/en/products/docs/processors/what-is-a-gpu.html#), which stands for graphics processing units, that are especially efficient at tasks involving images [@GPU]. Therefore, tasks that involve images are often performed using the GPU(s) and not the CPU(s). This enables more efficient processing of data by freeing up the CPU(s) to focus on tasks not involving images. Note, however, that GPU processors are also "generally programmable" (meaning they can work with different types of data) and can also be used to perform tasks that don't involve images [@GPU]. They are also very good at something called parallel processing, which means dividing up a single task into multiple pieces that run simultaneously, allowing the overall task to finish faster. People also add GPU graphics cards to their computers for more computational power [@GPU].
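+
+To make the idea of parallel processing more concrete, here is a small sketch in R using the built-in `parallel` package. This runs on the CPU's cores rather than a GPU, but the divide-the-work-and-run-simultaneously idea is the same:
+
+```{r, eval = FALSE}
+library(parallel)
+# Square 8 numbers, splitting the work across 4 cores.
+# Note: mclapply() relies on forking, so on Windows use mc.cores = 1
+# or parLapply() with a cluster instead.
+results <- mclapply(1:8, function(x) x^2, mc.cores = 4)
+unlist(results)
+```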
 
 
+
 ```{r, fig.align='center', echo = FALSE, fig.alt= "A computer chip is also sometimes called the CPU. Inside this CPU or chip  are often multiple cores.", out.width= "100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.gf6e632d05f_0_381")
 ```
@@ -55,6 +59,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 We have already talked about how data can be stored in the registers within the CPU. This data or memory is used directly by the CPU during operations or tasks. However, our CPUs need additional quick access to instructional data that tells the CPU what to do to perform the operations and what data to use. This also includes the data in a file that we are working with at a particular moment in time [@RAM_ComputerHope]. This brings us to [RAM](https://www.computerhope.com/jargon/r/ram.htm), which stands for **Random Access Memory**. It is often simply referred to as **memory**. RAM is made out of transistors and capacitors, similar to the registers within the CPU, but it is located outside of, but very near, the CPU [@RAM_ComputerHope; @RAM_HowStuff_Works]. One characteristic of this type of memory is that it is temporary. Data is stored in RAM for only a short time while your computer is running a task on it, then it disappears afterwards. Because the stored data disappears, this type of memory is also called volatile. This is why when you forget to save a file you are working on, you might lose your work [@RAM_ComputerHope; @RAM_HowStuff_Works]. 
 
+
 For more information about how RAM works, check out this [website](https://computer.howstuffworks.com/ram.htm) [@RAM_HowStuff_Works].
 
 
@@ -62,12 +67,14 @@ For more information about how RAM works, check out this [website](https://compu
 
 ### **Storage**  - long-term memory
 
+
 We can also store data that we aren't directly using when our computer is performing operations; for example, our Excel files and Word files that aren't currently in use. This type of memory is called storage memory and is sometimes referred to as long-term or non-volatile memory, because the data can be preserved without using electricity. This type of memory is stored using [hard disk drives (HDDs), also called hard drives](https://www.computerhope.com/jargon/h/harddriv.htm), or more recently, [solid-state drives (SSDs)](https://www.computerhope.com/jargon/s/ssd.htm). The reason why accessing this memory is slower than accessing data stored in RAM is that it is located further away from the CPU, and data needs to be transferred from the storage to the CPU when a user wants to perform operations on such data. In addition, the right data needs to be found among all of your files, which also takes some time. Furthermore, the way in which data is retrieved from HDDs and SSDs is slower than that of RAM. However, this type of storage allows for much larger data capacity than RAM, and it is also cheaper [@hard_drive; @hard_drive_works].
 
 Hard disk drives store memory using [magnetic methods](https://www.extremetech.com/computing/88078-how-a-hard-drive-works) [@hard_drive_works], while solid-state drives store memory using chips that have, guess what?
 
 They are made, yet again, of the important basic building block of computers, the tiny bees - oops, I mean transistors! - just like the CPU chip! See how important transistors are!?
 
+
 SSDs allow for much faster reading and writing of files, as well as increased reliability. However, they are more expensive and can eventually wear out [@SSD]. 
 
 Here's a great explanation of how HDDs work and how they differ from SSDs. It will also introduce the concept of [caching](https://en.wikipedia.org/wiki/CPU_cache), which allows for faster use of data from storage by the CPU. Cache is a special kind of memory that's even faster and closer to the CPU than RAM [@Wikipedia_cache_2021]:
@@ -102,11 +109,13 @@ Examples of commonly used operating systems on computers and phones are:
 * Linux  
 * Android
 
+
 Remember how we previously talked about computers today often being called 64-bit? Operating systems are also designed in this way. A 64-bit operating system expects the hardware of the computer to allow for processing 64 bits of data at a time (the **word size**) [@Wikipedia_word_length_2021]. If we have registers of at least this length in the CPU, then we can in fact perform operations on data that may be up to 64 bits in length. The data do not __have__ to be the full 64 bits; it just means that we can perform operations on values that take up less than 64 bits. 
 
 This can be important because if you try to use an operating system that expects a longer data size than the hardware can accommodate, for example a 64-bit operating system on a 32-bit computer, this will not work. Application programs are also designed according to different data sizes, and again you need to choose options that are equal to or less than the data size that your CPU can accommodate [@ComputerHope_64-bit]. However, you can run a 32-bit operating system on a 64-bit computer, and a 32-bit application on a 64-bit operating system, but you may experience reduced efficiency. See this [article for more information on what happens when we use a 32-bit application with a 64-bit operating system](https://medium.com/codixlab/what-happens-when-a-32-bit-program-runs-on-a-64-bit-machine-c231ac3ddb2f).
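+
+If you are curious whether your own machine and R installation are 64-bit, you can check directly from R:
+
+```{r}
+# Size of a pointer in bytes: 8 bytes x 8 bits = 64-bit,
+# while 4 bytes would indicate a 32-bit build of R.
+.Machine$sizeof.pointer
+
+# The hardware architecture, e.g. "x86_64" or "arm64":
+Sys.info()[["machine"]]
+```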
 
 
+
 ### Historical context
 
 Back when computers were so large and expensive that one whole university might have had just one computer (they didn't have those nifty small transistors of today), computers didn't have sophisticated operating systems. During that era, only one task could be performed at a time, by one person at a time. Back then, tasks were manually started, prioritized, and scheduled by humans. Tasks or programs, and sometimes data, could be printed or punched on cards (called punchcards, punch cards, or punched cards) that would be loaded into the machine. Data and code would be manually indicated by punching or creating a hole in the card in certain locations. For example, columns might indicate different numeric or alphabetical values. It could really be a pain for users if they accidentally dropped the cards for the program they wanted to run, as you can imagine [@punched_card_2021]!
@@ -125,8 +134,10 @@ The first operating system allowed different programs to be run sequentially wit
 Check out this [video](https://www.youtube.com/watch?v=KG2M4ttzBnY) if you want to learn more about how these punch cards worked. See @OS_2017 for more information about operating systems and @punched_card_2021 for really interesting information about the history of punched cards.
 Also check out @hardware_history_2021 for more interesting and extensive history about how computer hardware was developed.
 
+
 Also, here is some fascinating additional reading on the role of women as computer operators starting in the 1940s. Initially, computer science was actually thought of as a field for women; however, this changed over time to be skewed in the opposite direction. Women and gender minorities are hopefully becoming more represented in this field. See our [leadership course](https://jhudatascience.org/Informatics_Research_Leadership/promoting-diversity-equity-and-inclusion.html) for tips on how to better support more inclusive practices in our research labs.
 
+
 * [Article titled: Woman pioneered computer programming. Then men took their industry over](https://pages.memoryoftheworld.org/library/Josh%20O%27Connor/Women%20pioneered%20computer%20programming.%20Then%20men%20took%20their%20industry%20over_%20%28321%29/Women%20pioneered%20computer%20programming.%20Then%20-%20Josh%20O%27Connor.pdf) [@visions_women_2017]
 * [Article titled: Untold History of AI: Invisible Women Programmed America's First Electronic Computer The “human computers” who operated ENIAC have received little credit](https://spectrum.ieee.org/untold-history-of-ai-invisible-woman-programmed-americas-first-electronic-computer) [@untold_2019]
 
diff --git a/04-Computing_Systems.Rmd b/04-Computing_Systems.Rmd
index 27f4ca1..42b3933 100644
--- a/04-Computing_Systems.Rmd
+++ b/04-Computing_Systems.Rmd
@@ -19,9 +19,11 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 Recall that the smallest unit of data is a bit, which is either a zero (0) or a one (1). A group of 8 bits is called a byte, and most computers, phones, and software programs are constructed or designed in a way to accommodate groups of bytes at a time. For example, a 32-bit machine can work with 4 bytes at a time, and a 64-bit machine can work with 8 bytes at a time. But how big is a file that is 2 GB? When we sequence a genome, how large is that in terms of binary data? Can our local computer work with the size of data that we would like to work with?
 
 
+
 First, let's take a look at how the size of binary data is typically described and what this means in terms of bits and bytes:
 
 
+
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Table of different binary data units showing the name, abbreviation, and size in bits or bytes, for example a Byte is abbreviated as B and this represents 8 bits, while Gigabyte is abbreviated GB and represents roughly 1 billion bytes", out.width="100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.gfb2e21ecdc_0_8")
 ```
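+
+To make these units concrete, here is a quick bit of arithmetic in R. The rough 8 MB average photo size used below is just an assumption for illustration:
+
+```{r}
+# How many bits are in a 2 GB file? (using the decimal definitions above)
+bytes_per_gb <- 10^9
+2 * bytes_per_gb * 8      # 2 GB = 16 billion bits
+
+# Roughly how many ~8 MB photos fit in 250 GB of storage?
+250 * 10^9 / (8 * 10^6)   # about 31,000 photos
+```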
@@ -43,9 +45,11 @@ We have discussed a bit about CPUs and how they can help us perform more than on
 
 This means that typical laptops can multitask quite well, in some cases have 16 gigabytes of random access memory to allow the CPU to work on relatively large tasks (as the previous table shows, GB are actually pretty large when you think about it), and possibly 1TB for the hard drive (and/or SSD), meaning that you can store thousands of photos and files like PDFs, word documents, etc. It turns out that 250GB allows you to store around 30,000 average-size photos, so a 1TB laptop can store quite a large amount of data. Therefore, overall, typical laptops today are pretty powerful devices, especially compared to computers of previous generations. That being said, note that some programs require 16 or even 32 GB of memory to run.
 
+
 - **Desktops** can perform and store data similarly to laptops. However, since less work needs to be done to make a desktop small and portable, they sometimes offer slightly better performance and storage than a laptop for a similar price. Furthermore, desktops often have better graphics processing capacity and displays [@antonio_villas-boas_laptops_2019]. This might be important to consider if you are going to need to visually inspect many images. Another benefit is that you can also sometimes find desktops with larger memory and storage options right off the shelf than typical laptops. It is also generally easier to add more memory to a desktop than it is to a laptop [@antonio_villas-boas_laptops_2019]. However, of course, desktops certainly aren't super portable!
 
 
+
 * Some **phones** can compete with laptops by performing 6 CPU tasks at once, with 6 GB of memory and 250 GB of storage.  
 
 
@@ -57,10 +61,12 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 Check out this [link](https://www.apple.com/mac/compare/?modelList=iMac,MacBook-Pro-14,MacBook-Pro-16-2021) to compare the prices of different Macs and this [link](https://www.hp.com/us-en/shop/slp/weekly-deals) to compare specs for PC computers from HP. 
 
+
 If you want to get really in-depth comparisons between different PC or Windows computers, check out this [link](https://www.userbenchmark.com/PCBuilder/Custom/S0-M1487712vsS0-M?tab=RAM) [@userbenchmark].
 
 
 
+
 ### Checking your computer capacity - Mac
 
 Now, what about __your__ computer? How do you know how many cores it has or how much memory and storage it has?
@@ -74,10 +80,12 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 ```
 
 First, we see the operating system is called macOS Mojave.
+
 Next, we see that the processor (which we now know is the CPU) is a 2.6 GigaHertz (GHz) Intel Core i7 chip. This means that the processor or CPU can run through 2,600,000,000 cycles per second (each cycle is called a [clock cycle](http://www.techopedia.com/definition/5498/clock-cycle)) [@clock_cycle]. That's a lot compared to older computers in the 1980s, which had clock cycle rates or [clock rates](https://en.wikipedia.org/wiki/Clock_rate) in the MegaHertz range [@clock_rate]!
 If we look deeper into this chip, we learn that it has 4 cores and has hyper-threading, which allows it to effectively perform 8 tasks at once [@hyperthreading].
 Below, we see that there are 16 Gigabytes of memory - this is how much RAM it has - and also 2133 MegaHertz (aka 2.133 GHz) of low power double data rate random access memory (LPDDR3). This means that the RAM can process 2,133,000,000 commands every second [@RAM_speed; @mukherjee_ram_2019]. You can check out more about what this means in this blog post [@scott_thornton_RAM]. Generally, evaluating the amount of RAM is helpful in assessing performance [@RAM_speed; @mukherjee_ram_2019]. 
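+
+You can also query some of this information directly from R. The core count check below works on any operating system, while the `sysctl` call is macOS-specific:
+
+```{r, eval = FALSE}
+# Number of cores available (works on any operating system):
+parallel::detectCores()
+
+# Total installed RAM in bytes; this sysctl call is macOS-specific:
+system("sysctl hw.memsize", intern = TRUE)
+```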
 
+
 If we click on the storage button at the top, we can learn how much storage is available on the computer. If you hover over a section, it tells you which types of files account for that particular section of used storage.
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Mac storage information showing 1 TB capacity", out.width="100%"}
@@ -158,6 +166,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 Note that depending on the study requirements, several images may be needed for each sample. Therefore, data storage needs can add up quickly.
 
+
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Example table of overall file storage needs for samples in imaging studies.", out.width="100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.gfb2e21ecdc_0_25")
 ```
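+
+As a quick illustration of how these storage needs add up (the numbers below are made up for the example):
+
+```{r}
+samples <- 100             # number of samples in a hypothetical study
+images_per_sample <- 5     # several images per sample
+gb_per_image <- 2          # assumed size of each image file in GB
+samples * images_per_sample * gb_per_image   # 1,000 GB, i.e. 1 TB in total
+```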
@@ -178,8 +187,10 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 ### Checking file sizes on Mac
 
+
 If you own a Mac and want to check the size of a particular file, you can find it by locating your file within a Finder window. You can open a new Finder window by clicking on the button that looks like a square with two colors and a face (see image below), typically in the bottom left corner of your dock (the strip of icons on your Mac screen that helps you navigate to different application programs).
 
+
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Mac finder button", out.width="100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.gf9c252d058_0_120")
 ```
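+
+Alternatively, you can check a file's size directly from R with the built-in `file.size()` function, which reports the size in bytes. The file name below is just a placeholder; substitute one of your own files:
+
+```{r, eval = FALSE}
+size_bytes <- file.size("my_data.csv")  # "my_data.csv" is a placeholder
+size_bytes / 10^6                       # approximate size in megabytes
+```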
@@ -289,12 +300,15 @@ See [here](https://pediaa.com/difference-between-cluster-and-grid-computing/)  a
 
 More recently, the ["Cloud"](https://en.wikipedia.org/wiki/Cloud_computing) has become a common computing option. The term "cloud" has become a widely used buzzword [@cha_cloud_2015] that actually has a few slightly different definitions that have changed overtime, making it a bit tricky to keep track of. However,  "cloud" typically describes large computing resources that involve the connection between **multiple servers** in multiple locations [@cloud_2022] using the internet. See [here](https://www.redhat.com/en/topics/cloud-computing/cloud-vs-virtualization) for a deeper description of what the term cloud means today and how cloud computing compares to other more traditional shared computing options [@cloud_deeper].
 
+
 Many of us use cloud storage regularly, for example with Google Docs or when backing up photos using services like iPhoto and Google Photos. Cloud computing for research works in a similar way to these systems, in that you can perform computations or store data using an available server that is part of a larger network of servers. This allows for even more computational dependability than a simpler cluster or grid. Even if one or multiple servers are down, you can often still use the other servers for the computations that you might need. 
 
 Furthermore, this also allows for more opportunity to scale your work to a larger extent, as there is generally more computing capacity possible with most cloud resources [@cloudvstrad].
 
 
-Companies like Amazon, Google, Microsoft Azure, and others provide cloud computing resources. **Somewhere these companies have clusters of computers that paying customers use through the internet.**  In addition to these commercial options, there are occasionally national government funded resource options like the Texas Advanced Computing Center (TACC) and others previously funded by the former project called [XSEDE](https://portal.xsede.org/) (described in the next section).  We will compare computing options in another chapter coming up.
+
+Companies like Amazon, Google, Microsoft Azure, and others provide cloud computing resources. **Somewhere these companies have clusters of computers that paying customers use through the internet.** In addition to these commercial options, there are also national, government-funded resource options (described in the next section). We will compare computing options in an upcoming chapter.
+
 
 
 
@@ -319,6 +333,7 @@ You may have access to a [HPC (which stands for High Performance Computing) clus
 If your university or institution has an HPC [cluster](https://en.wikipedia.org/wiki/Computer_cluster), this means that they have a group of computers acting like a server that people can use to store data or assist with intensive computations. Often institutions can support the cost of many computers within an HPC cluster. This means that multiple computers will simultaneously perform different parts of the computing required for a given task, thus significantly speeding up the process compared to performing the task on just your computer! 
 
 
+
 If your institute doesn't have a shared computing resource like the HPCs we just described, you could also consider a national resource option like the [Texas Advanced Computing Center (TACC)](https://en.wikipedia.org/wiki/Texas_Advanced_Computing_Center), which was funded by the National Science Foundation (NSF) through the [XSEDE](https://www.xsede.org/) program.
 Universities and non-profit researchers in the United States can request access to their computational and data storage resources. Other resource options include:
 
@@ -330,6 +345,7 @@ Universities and non-profit researchers in the United States can request access
 
 Here you can see a photo of Stampede2, one of the supercomputers that members of TACC could utilize (it has now been replaced with Stampede3).
 
+
 ```{r, fig.align='center', echo = FALSE, fig.alt= "An image of Stampede2 one of the supercomputers that members of TACC could use.", out.width= "100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.gf9c252d058_0_63")
 ```
@@ -340,6 +356,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 > Stampede2, generously funded by the National Science Foundation (NSF) through award ACI-1134872, is one of the Texas Advanced Computing Center (TACC), University of Texas at Austin's flagship supercomputers.
 
 
+
 See [this article about Stampede2 and the transition to Stampede3](https://tacc.utexas.edu/news/latest-news/2023/07/24/taccs-new-stampede3-advances-nsf-supercomputing-ecosystem/) for more information about their resources, and see [their getting started website](https://tacc.utexas.edu/use-tacc/getting-started) to learn how you could use them.
 
 Importantly, when you use shared computers - national resources like [Stampede2](https://tacc.utexas.edu/systems/stampede2/) and [Stampede3](https://docs.tacc.utexas.edu/hpc/stampede3/), as well as institutional HPCs - you will share these resources with many other people, and so you need to learn the proper etiquette for using and sharing them. We will discuss this more in a coming chapter.
@@ -347,6 +364,7 @@ Importantly when you use shared computers like national resources like [Stampede
 There is also an option to access national computing resources through a cloud environment called [Jetstream2](https://jetstream-cloud.org/).  
 
 
+
 Here is a video about Jetstream2:
 
 ```{r, fig.align="center", fig.alt = "video", echo=FALSE, out.width="100%"}
@@ -356,7 +374,7 @@ knitr::include_url("https://www.youtube.com/embed/NQ3flxJANTw")
 
 
 
-We will also discuss how the use of these various computing options differ in the next chapters. Importantly there are also some computing platforms that have been especially designed for scientists and specific types of researchers, so it is also useful to know about these options.
+We will also discuss how the use of these various computing options differs in the next chapters. Importantly, there are also some computing platforms that have been specially designed for scientists and specific types of researchers, so it is also useful to know about these options.
 
 
 
diff --git a/05-Shared_computing_etiquette.Rmd b/05-Shared_computing_etiquette.Rmd
index 48d0257..a7eb14f 100644
--- a/05-Shared_computing_etiquette.Rmd
+++ b/05-Shared_computing_etiquette.Rmd
@@ -14,17 +14,17 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 We will use the Johns Hopkins Joint High Performance Computing Exchange (JHPCE) cluster resource as an example to motivate the need for usage rules and proper sharing etiquette for such resources.
 
-First let's learn a bit about this JHPCE. For this particular resource there are about 400 active users.It is optimized for genomic and biomedical research and has 4,000 cores! That's right, as you can imagine, this is much more powerful than the individual laptops and desktops that researchers at the university have for personal use, which would typically currently only have around 8 cores. There is also 28TB of RAM and 14 PB of storage!
+First, let's learn a bit about this JHPCE. For this particular resource, there are about 400 active users. It is optimized for genomic and biomedical research and has 4,000 cores! That's right; as you can imagine, this is much more powerful than the individual laptops and desktops that researchers at the university have for personal use, which typically have only around 8 cores. The JHPCE also has 28TB of RAM and 14 PB of storage!
 
 Now that you know more about digital sizes, you can appreciate that this server allows for much faster processing and really large amounts of storage, as, again, a researcher's computer might have something like 16 GB of RAM and 1TB of storage. 
 
-There are 68 nodes that make up the JHPCE currently. As, with most clusters some of the nodes are dedicated to managing users logging in to the cluster and some of the nodes are dedicated to data transferring. Each node has 2-4 CPUs that provide 24-128 cores! As you can see these processors or chips have a lot more cores per each CPU than a typical personal computer. 
+At the time of writing, JHPCE had 68 nodes. As with most clusters, some of the nodes are dedicated to managing user access to the cluster and some of the nodes are dedicated to transferring data. Each node has 2-4 CPUs that provide 24-128 cores! As you can see, these processors or chips have a lot more cores per CPU than a typical personal computer. 
 
-Individual users connect and perform jobs (aka computational tasks) on the cluster using a formal [common pool resource (CPR)](https://en.wikipedia.org/wiki/Common-pool_resource) hierarchy system. What does this mean? This means that it is a shared resource, where if one user overused the resource it would be to the detriment of others and to overcome this there are usage rules and regulations that are enforced by managers of the resource @common-pool_2022.  This is important because if a single or a few users used up all the computing resources one day, then the other nearly 400 users would have to delay their work that day, which would not be fair. 
+Individual users connect and perform jobs (a.k.a. computational tasks) on the cluster using a formal [common pool resource (CPR)](https://en.wikipedia.org/wiki/Common-pool_resource) hierarchy system. What does this mean? Recall that we are talking about shared resources, where if one user overuses the resource, it would be to the detriment of other users' experiences. Rules and regulations are designed to prevent this from happening and are enforced by managers of the resource [@common-pool_2022]. This is important because if a single or a few users used up all the computing resources one day, then the other nearly 400 users would not be able to perform their work that day, which would not be fair. 
 
 ## General Guidelines for shared computing resources
 
-Each cluster or other shared computing resource will have different rules and requirements, but here are a few general rules to keep in make sure that you don't accidentally abuse the privilege of sharing an amazing resource like this. Don't be too worried, most shared resources will give you guidance about their specific rules and will often also have settings that don't allow users to make major blunders.
+Each cluster or other shared computing resource will have different rules and requirements, but here are a few general rules to keep in mind to make sure that you don't accidentally abuse the privilege of sharing an amazing resource like this. Don't be too worried, as most shared resources will give you guidance about their specific rules and will often also have settings that don't allow users to make major blunders.
 
 ### Security guidelines
 
@@ -42,7 +42,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
  
  - Don't share your password and keep it safe!
  
- If you have a Mac, you could consider storing it in your [Keychain](https://support.apple.com/en-ie/guide/mac-help/mchlf375f392/mac), alternatively if you have a different type of computer or don't like the Mac Keychain, consider [Dashlane](https://www.dashlane.com/) or other password manger services. Luckily both of these options do not come at any extra cost and can be helpful for storing all the passwords we use regularly safely. These are especially good options if your password is difficult for you to remember. Make sure that you abide by any rules regarding storing passwords that might be required by the resource you intend to use. 
+ If you have a Mac, you could consider storing it in your [Keychain](https://support.apple.com/en-ie/guide/mac-help/mchlf375f392/mac). Alternatively, if you have a different type of computer or don't like the Mac Keychain, consider options like [Dashlane](https://www.dashlane.com/) or other password manager services. Luckily, the Mac Keychain does not come at any extra cost and can be helpful for safely storing all the passwords we use regularly. These are especially good options if your password is too long or difficult for you to remember. Make sure that you abide by any rules regarding storing passwords that might be required by the resource you intend to use. 
  
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Cartoon - One character says: Hey, what do you have there?. The other character says: Oh just bringing my passwords with me in case I forget. I’ve secured them carefully on paper with invisible ink, in a cypher with its own code, inside a fireproof box with a lock. The original character says: That’s very impressive. You could also just use a password manager. The other character says: Oh that might be good… because this fireproof box is quite heavy!", out.width= "100%"}
@@ -52,15 +52,15 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
  
  - Don't access a server on a computer that is not authorized to do so.
 
-Some servers will require that your computer be authorized for access for added security. It's a good idea to follow these rules. If you can, perhaps authorize a laptop in case you might need to gain access when you need to be out of town. However if you do so, make sure you also only access such servers with a secure WiFi network. One way to ensure this is is to avoid using public WiFi networks. If you must use a public WiFi network, consider using a [virtual private network (VPN)](https://en.wikipedia.org/wiki/Virtual_private_network) for added security. Here is an [article](https://www.wired.com/story/best-vpn/) about different VPN options [@gilbertson_4_2021].
+Some servers will require that your computer be authorized for access for added security. It's a good idea to follow these rules. If you can, perhaps authorize a laptop in case you might need to gain access when you are out of town. However, if you do so, make sure you also only access such servers with a secure WiFi network. One way to ensure this is to avoid using public WiFi networks. If you must use a public WiFi network, consider using a [virtual private network (VPN)](https://en.wikipedia.org/wiki/Virtual_private_network) for added security. Here is an [article](https://www.wired.com/story/best-vpn/) about different VPN options [@gilbertson_4_2021].
 
  -  Do not alter security settings without authorization.
  
-Loosening security settings could pose a risk to the data stored on the server. On the other hand, making more strict security settings could cause other users to not be able to perform their work. Contact the managers of the resource if you think changes need to be made.
+Loosening security settings could pose a risk to the data stored on the server. On the other hand, changing the security settings to become stricter could hinder other users from performing their work. Contact the managers of the resource if you think changes need to be made.
 
  - Immediately report any data security concerns.
  
-To protect the integrity of your data and your colleagues, be sure to report anything strange about the shared computing resource to those who manage it so that they can address it right away. Also report to them if you have any security breaches on the computer(s) that you use to access the shared computing resource.
+To protect the integrity of your and your colleagues' data, be sure to report anything strange about the shared computing resource to those who manage it, so that they can address it right away. Also report to them if you experience any security breaches on the computer(s) that you use to access the shared computing resource.
 
 ### Overall use guidelines
 
@@ -68,11 +68,11 @@ Now that we know how to keep the resource safe, let's next talk about general us
 
  - Don't install software unless you have permission.
  
-It is possible that the software you want to use might already be installed somewhere on the shared computing resource that you are unaware about. In addition, if you install a different version of a software program, it is possible that this version (especially if it is newer) will get automatically called by other people's scripts. This could actually break their scripts or modify their results. They may have a reason to use an older version of that software, do not assume that they necessarily want the updated version. Instead, let the managers of the resource know. They can inform other users and make sure that everyone's work will not be disrupted.
+It is possible that the software you want to use might already be installed somewhere on the shared computing resource without your knowing. In addition, if you install a different version of a software program, it is possible that the different version (especially if it is newer) will get automatically called by other people's scripts that were built on a previous version of the program. This could break their scripts or modify their results. They may have a reason to use an older version of that software; do not assume that they necessarily want the updated version. Instead, let the managers of the resource know. They can inform other users and make sure that everyone's work will not be disrupted.
  
  - Don't use the server for storage or computation that you are not authorized for.
 
-This is often a rule for shared computing resources, simply because such shared resources are intended for a specific reason and likely funded for that reason. Such resources are costly, and therefore the computational power should be used only for what it is intended for, otherwise people may view the use of the resources for other purposes as essentially theft.
+This is often a rule for shared computing resources, simply because such shared resources are intended for a specific reason and likely funded for that reason. Such resources are costly, and therefore the computational power should be used only for its intended purpose. Using these valuable resources for other purposes can sometimes be viewed as theft.
 
  - Don't alter configurations without authorization.
  
@@ -91,37 +91,44 @@ When you submit jobs, make sure you follow the following guidelines. Again consi
  
  - Think about memory allocation and efficiency.
  
-Consider how much RAM and storage is available for people on the shared computing resource. Try not to overload the resource with a really intensive job or jobs that will use most of the resources and either slow down the efficiency of the work for others or not allow them to perform their work at all.
+Consider how much RAM and storage is available for people on the shared computing resource. Try not to overload the resource with a very intensive job! Jobs that use most of the resources may either slow down the efficiency of the work for others or not allow them to perform their work at all.
 
-This involves:
+Specifically, the etiquette regarding memory allocation includes:
 
    * Not using too many nodes if you don't need to
    * Not using too much RAM on a given node or overall if you don't need to
    * Not submitting too many jobs at once
    * Communicating with others to give them advanced warning if you are going to submit large or intensive jobs
    
-If you have a really large job that you need to perform, talk with the managers of the resource so that you can work out a time when perhaps fewer users would be inconvenienced. Consult the guidelines for your particular resource about how one let's people know about large jobs before you email the administrators of the resource directly. Often their are communications systems in place for users to let each other know about large jobs.
+If you have a really large job that you need to perform, talk with the managers of the resource so that you can work out a time when perhaps fewer users would be inconvenienced. Consult the guidelines for your particular resource about how one lets people know about large jobs before you email the administrators of the resource directly. Often there are communications systems in place for users to let each other know about large jobs.
+
+The illustration below depicts how timing can affect the user experience on a shared resource. If many people are using the same resource at the same time, especially if heavy tasks are using up the resources, it might slow down other users or hinder them from performing their jobs fully. It might be a good idea to target a time frame when you know the resource will likely be less crowded.
+
+```{r, fig.align='center', echo = FALSE, fig.alt= "This diagram depicts how efficiency or job speed can differ according to how many people are using the resource, and according to which jobs are being run using the resource. If you need to run a task that needs computing power, it might be a good idea to use a time when you know fewer people will be using the resource at the same time.", out.width="100%"}
+ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.g11bbe6ab7c6_0_4")
+```
+
 
 ### Communication Guidelines
 
-Speaking of communication, let's dive into that deeper for a bit.
+Speaking of communication, let's dive a bit deeper into this subject.
 
 - Use the proper order for communication.
 
-Often shared resources have rules about how they want people to communicate. For example for some resources it is suggested that you first ask your friends and colleagues if you are confused about something, then consult any available forums, if that does not work then directly email the administrators/managers of the resource. Keep in mind that these people are very busy and get lots of communications. 
+Often shared resources have rules about how they want people to communicate. For example, for some resources, it is suggested that you first ask your friends and colleagues if you are confused about something, then consult any available forums; if that does not work, then directly email the administrators/managers of the resource. Keep in mind that these people are very busy and get lots of emails and inquiries. 
 
 - Use the ticket system
 
-If a resource has a ticket system for users to get support, use it instead of communicating by email. If such a system is in place, then the administrators running it are used to getting requests this way. If you email directly, you may not receive feedback in a timely manner or the email might get lost.
+If a resource has a ticket system for users to get support, use it instead of communicating by email. If such a system is in place, then the administrators running it are used to getting requests this way. If you email directly, you may not receive feedback in a timely manner, or the email might get lost.
 
 ### Specific Rules
 
-Ultimately it is very important to learn about the rules, practices, and etiquette for the resource that you are using and to follow them. Otherwise, you could lose access. Often other users are a great resource!
+Ultimately, it is very important to learn about the rules, practices, and etiquette for the resource that you are using and to follow them. Otherwise, you could lose access. Other users are also a great resource!
  
 
 ## Interacting with shared resources
 
-Often you will need to use the command line to interact with a server from your personal computer. To do so on a Mac or a Linux computer you can typically do so using the terminal program that is already on your computer. For PC or Windows computer users, you can use programs like [MobaXterm](http://mobaxterm.mobatek.net/).
+Often you will need to use the command line to interact with a server from your personal computer. To do so on a Mac or a Linux computer, you can typically use the terminal program that is already on your computer. For PC or Windows computer users, you can use programs like [MobaXterm](http://mobaxterm.mobatek.net/).
 
 If you wish to run a program with a graphical interface, then you might need additional software to help you do so. On Macs, you can download [XQuartz](http://xquartz.macosforge.org/landing/). If you use MobaXterm on your PC or Windows computer, then you will already be set. Linux computers also typically should already have what you need.
 
@@ -134,7 +141,7 @@ knitr::include_url(url = "https://files.fosswire.com/2007/08/fwunixref.pdf")
 
 ## Running Jobs
 
-Typically a program is used to schedule jobs. Remember that jobs are the individual computational tasks that you ask the server to run. For example, this could be something as simple as moving large files from one directory to another or as complex as running a complicated script on a file. 
+Typically a program is used to schedule jobs. Remember that jobs are the individual computational tasks that you ask the server to run. For example, this could be something as simple as moving large files from one directory to another, or as complex as running a complicated script on a file. 
 
 Such job scheduling programs assign jobs to nodes as resources become available, provided a node has the resources required for the job. These programs have their own commands for running jobs, checking resources, and checking jobs. Remember to use the management system to run your jobs on the compute nodes, not the login nodes (the nodes for users to log in). There are often nodes set up for transferring files as well. 
 
@@ -142,30 +149,32 @@ In the case of the JHPCE, a program called Sun Grid Engine (SGE) is used, but th
 
 ### Specifying memory (RAM) needs
 
-Often there is a default file size limit for jobs. For example the JHPCE has a 10GB file size limit for jobs. You may need to specify when you have a job using a file that exceeds the file size limit and set the file size for that job. As you may recall if you are using whole genome files you are likely to exceed the default file limit size. Often you are also given a default amount of RAM for your job as well. Again, you can typically run a job with more RAM if you specify. Similar to the file size limit, you will likely need to set the RAM that you will need for your job if it is above the default limit. Often this involves setting a lower and upper limit to the RAM that your job can use. If your job exceeds that amount of RAM it will be stopped. Typically people call stopping a job "killing" it. The lower and upper limit can be the same number.
+Often there is a default file size limit for jobs; for example, the JHPCE has a 10GB limit. When you have a job using a file that exceeds that limit, you may need to explicitly set the file size for that job. As you may recall, if you are using whole genome files, you are likely to exceed the default file size limit. 
+
+In addition to the file size limit, you are often also given a default amount of RAM for your job. Again, you can typically run a job with more RAM if you specify it. Similar to the file size limit, you will likely need to set the RAM that you will need for your job if it is above the default limit. This involves setting a lower and upper limit to the RAM that your job can use. If your job exceeds that amount of RAM, it will be stopped. Typically people call stopping a job "killing" it. The lower and upper limit can be the same number.
 
-How do you know how much RAM to assign to your job? Well if you are performing a job with files that are two times the size of the file size default limit, then it might make sense to double the RAM you would typically use. It's also a good idea to test on one file first if you are going to perform the same job on multiple files. You can then assess how much RAM the job used. First try to perform the job with lower limits and progressively increase until you see that the job was successful and not killed for exceeding the limit.  Keep in mind however how much RAM there is on each node. Remember, it is important to not ask for all the RAM on a single node or core on that node, as this will result in you hogging that node and other users will not be able to use RAM on that node or core on that node. Remember that you will likely have the option to use multiple cores, this can also help you to use less RAM across each core. For example, a job that needs 120GB of RAM could use 10 cores with 12 GB of RAM each.
+How do you know how much RAM to assign to your job? Well, if you are performing a job with files that are two times the default file size limit, then it might make sense to double the RAM you would typically use. **It's also a good idea to test on one file first if you are going to perform the same job on multiple files.** You can then assess how much RAM the job used. First, try to perform the job with lower limits, then progressively increase the size until you see that the job was successful and not killed for exceeding the limit. Keep in mind, however, how much RAM there is on each node. Remember, it is important not to ask for all the RAM on a single node or core, as this will result in you hogging that node, and other users will not be able to use RAM on that node or its cores. Remember that you will likely have the option to use multiple cores. This can also help you to use less RAM on each core. For example, a job that needs 120GB of RAM could use 10 cores with 12 GB of RAM each.
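+
+Here is that last example written out as quick arithmetic in R:
+
+```{r}
+total_ram_gb <- 120   # RAM the whole job needs
+cores <- 10           # cores requested
+total_ram_gb / cores  # 12 GB of RAM to request per core
+```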
 
-Often there will be a limit for the number of jobs, the amount of RAM, and the number of cores that a single user can use beyond the default limits. This is to ensure that a user doesn't use too many resources causing others to not be able to perform their jobs. Check to see what these limits are and then figure out what the appropriate way is to contact to request for more. Again communication standards and workflows may vary based on the resource.
+Often there will be a limit for the number of jobs, the amount of RAM, and the number of cores that a single user can use beyond the default limits. This is to ensure that a user doesn't use too many resources, causing others to not be able to perform their jobs. Check to see what these limits are, and then determine the appropriate way to request more. Again, communication standards and workflows may vary based on the resource.
 
 ### Checking status
 
- It's also a good idea to check the status of your jobs to see if they worked or got killed. You can check for the expected file outputs or there are commands for the server management software that can help you check currently running jobs.
+It's also a good idea to check the status of your jobs to see if they worked or got killed. You can check for the expected file outputs, or use the server management software's commands for checking currently running jobs.
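+
+For example, on an SGE-managed cluster like the JHPCE, you can list your currently running jobs with the scheduler's `qstat` command (other schedulers, such as SLURM, use different commands like `squeue`). Here it is wrapped in an R call as a sketch, though you would typically type it directly at the command line:
+
+```{r, eval = FALSE}
+# List your currently running jobs; substitute your own username.
+system("qstat -u myusername", intern = TRUE)
+```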
 
 ## Storage
 
-Often you will be given a home directory which will likely be backed up, however, other storage directories often will not be. Be careful about where you store your data, as some directories might be for temporary use and get wiped to keep space available for others.
+Often you will be given a home directory which will likely be backed up. However, other storage directories often will not be. Be careful about where you store your data, as some directories might be for temporary use and get wiped to keep space available for others.
 
 ## Conclusion
 
-We hope that this chapter has given you some more knowledge about why and how traditional shared computing resources are shared.
+We hope that this chapter has given you some more knowledge about why and how more traditional shared computing resources are shared.
 
 In conclusion, here are some of the major take-home messages:
 
 1) Shared resources like high performance computing clusters need regulations so that computing resources are shared fairly to allow everyone to get the most work done.
 2) Paying attention to security is important to keep everyone's data and work on the server safe.
-3) Although we provided general guidelines, there are likely to be specific guidelines for other resources that you need to adhere to.
+3) Although we provided general guidelines, there are likely to be specific guidelines for specific resources that you need to adhere to.
 4) Often such resources have a communication process to avoid overloading resource administrators/managers with too many requests. Be sure to follow the appropriate communication etiquette for the resources that you work with.
-3) Although there are generally default limits for jobs, users can often consult with the appropriate communication infrastructure to ask to perform larger jobs.
+5) Although there are generally default limits for jobs, users can often consult with the appropriate communication infrastructure to ask to perform larger jobs.
 
 
diff --git a/06-General_Platforms.Rmd b/06-General_Platforms.Rmd
index d5ec267..c011b28 100644
--- a/06-General_Platforms.Rmd
+++ b/06-General_Platforms.Rmd
@@ -6,11 +6,11 @@ ottrpal::set_knitr_image_path()
 
 # Research Platforms
 
-In this chapter we will provide examples of computing platforms that are designed to help researchers and that you might find useful for your work. Please note that we aim to provide a general overview of options and thus this is not a complete list. Let us know if there is a platform or system that you think we should include!
+In this chapter, we will provide examples of computing platforms designed to help researchers. You might find these platforms useful for your work. Please note that we aim to provide a general overview of options, and thus, this is not a complete list. Let us know if there is a platform or system that you think we should include!
 
 <div class = "warning">
 
- We highly suggest you also **read the next chapter**, which will point out important considerations to think about when deciding to work on a shared computing resource platform like those discussed in this chapter.
+We highly suggest you also **read the next chapter**, which will point out important considerations to think about when deciding to work on a shared computing resource platform, like those discussed in this chapter.
 
 </div>
 
@@ -21,9 +21,13 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 The major advantage of these platforms is that users can analyze data where it lives, as many platforms host public data. However, some also allow you to upload your own data. There is less need for data transfers back and forth to your personal computer, as you can analyze, store, and share your data in one place, saving time. Users can sometimes also share how they did their analysis, improving reproducibility practices. Another advantage is that some of these platforms provide educational material on how to work with data.
 
-Many offer a [graphical user interface](https://www.omnisci.com/technical-glossary/graphical-user-interface) also simply called just graphical interface or GUI, allows for users to choose functions to perform by interacting with visual representations, which can be useful for individuals how are less comfortable writing code. They have a "user-centered" design that creates a visual environment where users can for example **click on** tabs, boxes, or icons for to perform functions. This also often allows users to more directly see plots and other types of visualizations.
+Many platforms offer a [graphical user interface](https://www.omnisci.com/technical-glossary/graphical-user-interface), also simply called a graphical interface or GUI (side note: GUI is pronounced like the word "gooey", as if it's a sticky jelly stuck to the monitor!), which allows users to choose functions to perform by interacting with visual representations. This can be useful for individuals who are less comfortable writing code. GUIs have a "user-centered" design that creates a visual environment where users can, for example, **click on** tabs, boxes, or icons to perform functions. This also often allows users to more directly see plots and other types of visualizations.
 
-Some platforms also offer a [command line interface](https://searchwindowsserver.techtarget.com/definition/command-line-interface-CLI) (also known as a character interface) which allows for software functions to be performed by specifying through commands written in text. This typically offers more control than a GUI, however command line interfaces are often less user friendly as they require that the user know the correct commands to use.
+```{r, fig.align='center', echo = FALSE, fig.alt= "Think of the GUI (i.e. gooey) as the sticky goo that 'sticks' to the monitor, and helps you navigate your interactions with the computer!", out.width="100%"}
+ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.g11bbe6ab7c6_0_10")
+```
+
+Some platforms also offer a [command line interface](https://searchwindowsserver.techtarget.com/definition/command-line-interface-CLI) (also known as a character interface), which allows software functions to be performed by typing commands as text. This typically offers more control than a GUI; however, command line interfaces are often less user-friendly, as they require that the user know the correct commands to use.
 
 
 ### National Cancer Institute Cloud Resources
@@ -32,31 +36,31 @@ Funded by the [National Cancer Institute (NCI)](https://www.cancer.gov/), the [c
 
 ### Cancer Genomics Cloud
 
-The [Cancer Genomics Cloud (CGC)](https://www.cancergenomicscloud.org/) is a computing platform that researchers can used to analyze, store, and share their own data, as well as work with large public and controlled cancer data sets, including genomic and imaging data. CGC offers tutorials and guides to help research get started, as well as $300 of free credits to use the platform and test it out. Users can also access many tools and workflows to help them perform there analyses. CGC also offers regular [webinars](https://www.cancergenomicscloud.org/webinars). 
+The [Cancer Genomics Cloud (CGC)](https://www.cancergenomicscloud.org/) is a computing platform that researchers can use to analyze, store, and share their own data, as well as work with large public and controlled cancer data sets, including genomic and imaging data. CGC offers tutorials and guides to help researchers get started, as well as $300 of free credits to use the platform and test it out. Users can also access many tools and workflows to help them perform their analyses. CGC also offers regular [webinars](https://www.cancergenomicscloud.org/webinars). 
 
-The platform is based on a partnership with [Seven Bridges](https://www.sevenbridges.com/), a biomedical analytics company, and can be accessed simply by using a web browser. Users can can use a point and click system also called a graphical user interface (GUI) or can access resources using the command line. See this [link](https://www.cancergenomicscloud.org/getting-started) to learn more.
+The platform is based on a partnership with [Seven Bridges](https://www.sevenbridges.com/), a biomedical analytics company, and can be accessed simply by using a web browser. Users can use a point-and-click system (GUI) or access resources using the command line. See this [link](https://www.cancergenomicscloud.org/getting-started) to learn more.
 
 
 ### Institute for Systems Biology (ISB) Cancer Gateway in the Cloud
 
-The [ISB-CRC](https://isb-cgc.appspot.com/) platform allows users to browse and data from the [Genomic Data Commons](https://gdc.cancer.gov/) and other sources, including sequencing and imaging data both public and controlled. They provide access pipeline tools, as well as to pipelines, workflows, and Notebooks written by others in R and Python to help users perform analyses. ISB also offers $300 in [free credits](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowtoRequestCloudCredits.html) to try out the platform. See [here](https://isb-cgc.appspot.com/how_to_discover/#0) for a user guide.
+The [ISB-CRC](https://isb-cgc.appspot.com/) platform allows users to browse and access data from the [Genomic Data Commons](https://gdc.cancer.gov/) and other sources, including both public and controlled sequencing and imaging data. They provide access to pipeline tools, as well as to pipelines, workflows, and Notebooks written by others in R and Python to help users perform analyses. ISB also offers $300 in [free credits](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowtoRequestCloudCredits.html) to try out the platform. See [here](https://isb-cgc.appspot.com/how_to_discover/#0) for a user guide.
 
 
 ### Broad Institute FireCloud
 
-[FireCloud](https://portal.firecloud.org/) provides users with computing resources and access to workspaces using Broad's tools and pipelines. Users can run large scale analyses and work with collaborators. FireCloud offers access to [The Cancer Genome Atlas (TCGA)](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) controlled-access data. Other platforms like Galaxy and Terra described next, share resources with FireCloud. 
+[FireCloud](https://portal.firecloud.org/) provides users with computing resources and access to workspaces using Broad's tools and pipelines. Users can run large scale analyses and work with collaborators. FireCloud offers access to [The Cancer Genome Atlas (TCGA)](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) controlled-access data. Other platforms described next, like Galaxy and Terra, share resources with FireCloud. 
 
 ### Galaxy
 
 This section was written by [Jeremy Goecks](https://www.goeckslab.org/members/jeremy-goecks.html):
 
-Galaxy is a web-based computational workbench that connects analysis tools, biomedical datasets, computing resources, a graphical user interface, and a programmatic API. Galaxy (https://galaxyproject.org/) enables accessible, reproducible, and collaborative biomedical data science by anyone regardless of their informatics expertise. There are more than 8,000 analysis tools and 200 visualizations integrated into Galaxy that can be used to process a wide variety of biomedical datasets. This includes tools for analyzing genomic, transcriptomic (RNA-seq), proteomic, metabolomic, microbiome, and imaging datasets, tool suites for single-cell omics and machine learning, and thousands of more tools. Galaxy’s graphical user interface can be used with only a web browser, and there is a programmatic API for performing scripted and automated analyses with Galaxy.
+Galaxy is a web-based computational workbench that connects analysis tools, biomedical datasets, computing resources, a graphical user interface, and a programmatic API. Galaxy (https://galaxyproject.org/) enables accessible, reproducible, and collaborative biomedical data science regardless of a user's informatics expertise. There are more than 8,000 analysis tools and 200 visualizations integrated into Galaxy that can be used to process a wide variety of biomedical datasets. This includes tools for analyzing genomic, transcriptomic (RNA-seq), proteomic, metabolomic, microbiome, and imaging datasets, tool suites for single-cell omics and machine learning, and thousands more. Galaxy’s graphical user interface can be used simply through a web browser, and there is a programmatic API for performing scripted and automated analyses with Galaxy.
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Galaxy can be accessed through a web browser and provides users with access to tools, datasets, computing resources, a graphical user interface (GUI) for users who would like to interact with Galaxy by clicking buttons and using drop-down menus, and a programmatic API for users who would like to write code to interact with Galaxy", out.width= "100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.gfb2e21ecdc_0_131")
 ```
 
-Galaxy is used daily by thousands of scientists across the world. A vibrant Galaxy community has deployed hundreds of Galaxy servers across the world, including more than 150 public and three large national/international servers in the United States, Europe, and Australia (https://usegalaxy.org, https://usegalaxy.eu, https://usegalaxy.org.au). The three national/international servers have more than 250,000 registered users who execute >500,000 analysis jobs each month. Galaxy has been cited more than 10,000 times with >20% from papers related to cancer. The Galaxy Tool Shed (https://usegalaxy.org/toolshed) provides a central location where developers can upload tools and visualizations and users can search and install tools and visualizations into any Galaxy server. Galaxy has a large presence in the cancer research community. Galaxy serves as an integration and/or analysis platform for 7 projects in the NCI ITCR program. There is also increasing use of Galaxy in key NIH initiatives such as the NCI Cancer Moonshot Human Tumor Atlas Network (HTAN) and the NHGRI Data Commons, called the AnVIL (https://anvilproject.org/).
+Galaxy is used daily by thousands of scientists across the world. A vibrant Galaxy community has deployed hundreds of Galaxy servers, including more than 150 public servers and three large national/international servers in the United States, Europe, and Australia (https://usegalaxy.org, https://usegalaxy.eu, https://usegalaxy.org.au). The three national/international servers have more than 250,000 registered users who execute >500,000 analysis jobs each month. Galaxy has been cited more than 10,000 times, with >20% of citations coming from papers related to cancer. The Galaxy Tool Shed (https://usegalaxy.org/toolshed) provides a central location where developers can upload tools and visualizations, and users can search and install tools and visualizations into any Galaxy server. Galaxy has a large presence in the cancer research community. Galaxy serves as an integration and/or analysis platform for 7 projects in the NCI ITCR program. There is also increasing use of Galaxy in key NIH initiatives such as the NCI Cancer Moonshot Human Tumor Atlas Network (HTAN) and the NHGRI Data Commons, called the AnVIL (https://anvilproject.org/).
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Map of the 3 Galaxy servers", out.width= "100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.gfb2e21ecdc_0_135")
@@ -77,18 +81,18 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 ```
 
 
-Galaxy users can share all their work—analysis histories, workflows, and visualizations—via simple URLs that are available to specific colleagues or a link that anyone can access. Galaxy’s user interface is highly scalable. Tens, hundreds, or even thousands of datasets can be grouped into collections and run in parallel using individual tools or multi-tool workflows. In summary, Galaxy is a popular computational workbench with tools and features for a wide variety of data analyses, and it has broad usage in cancer data analysis.
+Galaxy users can share all their work—analysis histories, workflows, and visualizations—via simple URLs. These can be shared with specific colleagues or with anyone. Furthermore, Galaxy’s user interface is highly scalable. Tens, hundreds, or even thousands of datasets can be grouped into collections and run in parallel using individual tools or multi-tool workflows. In summary, Galaxy is a popular computational workbench with tools and features for a wide variety of data analyses, and it has broad usage in cancer data analysis.
 
 See [here](https://toolshed.g2.bx.psu.edu/) for the list of applications supported by Galaxy and [here](https://training.galaxyproject.org/) for more information on how to use Galaxy resources.
 
 
 ### Terra
 
-[Terra](https://terra.bio/) is a biomedical research computing platform that is based on the Google Cloud platform, that also allows users easier ways to manage the billing of their projects. It provides users with access to data, workflows, interactive analyses using Jupyter Notebooks, RStudio, and Galaxy, data access and tools from [FireCloud from the Broad Institute](https://firecloud.terra.bio/), as well as workspaces to organize projects and collaborate with others. Terra also has [many measures](https://terra.bio/resources/security/) to help ensure that data is secure and they offer clinical features for ensuring that [health data is protected](https://terra.bio/about/privacy/). Note that users who do upload protected health information must select to use  extra clinical features and enter a formal agree with [Terra/FireCloud](https://firecloud.terra.bio/) about their data. See [here](https://support.terra.bio/hc/en-us/articles/360024688731-Terms-of-Service) for more information.
+[Terra](https://terra.bio/) is a biomedical research computing platform that is based on the Google Cloud platform. Terra also makes it easier for users to manage the billing of their projects. It provides users with access to data, workflows, interactive analyses using Jupyter Notebooks, RStudio, and Galaxy, data access and tools from [FireCloud from the Broad Institute](https://firecloud.terra.bio/), as well as workspaces to organize projects and collaborate with others. Terra also has [many measures](https://terra.bio/resources/security/) to help ensure that data are secure. They also offer clinical features to make sure [health data are protected](https://terra.bio/about/privacy/). Note that users who do upload protected health information must opt to use the extra clinical features and enter a formal agreement with [Terra/FireCloud](https://firecloud.terra.bio/) about their data. See [here](https://support.terra.bio/hc/en-us/articles/360024688731-Terms-of-Service) for more information.
 
 Importantly, users can get access to [Genotype-Tissue Expression (GTEx)](https://gtexportal.org/home/), [Therapeutically Applicable Research to Generate Effective Treatments (TARGET)](https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000218.v24.p8), and [The Cancer Genome Atlas (TCGA)](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) data using the platform. See [here](https://support.terra.bio/hc/en-us/articles/4402326091675-Accessing-GTEx-TARGET-TCGA-data) for information on how to do so. 
 
-Users can pay for data storage and computing costs for Google Cloud through Terra. Users can browse data for free.
+Users can pay for data storage and computing costs for Google Cloud through Terra. That said, browsing data is free.
 
 Check out this video for more information:
 
@@ -107,12 +111,16 @@ According to their website:
 
 > By providing a unified environment for data management and compute, AnVIL eliminates the need for data movement, allows for active threat detection and monitoring, and provides elastic, shared computing resources that can be acquired by researchers as needed.
 
-It relies on Terra for the cloud based compute environment, Dockstore for  standardized tools and workflows, Gen3 for data management for querying and organizing data, Galaxy tools and environment for analyses with less code requirements, and [Bioconductor](https://www.bioconductor.org/) tools for R programming users. [Bioconductor](https://www.bioconductor.org/) is a project with the mission to catalog, support, and disseminate bioinformatics open-source R packages. Packages have to go through a review process before being included. 
+AnVIL relies on Terra for the cloud-based compute environment, Dockstore for standardized tools and workflows, Gen3 for data management (querying and organizing data), Galaxy tools and environments for analyses that require less coding, and [Bioconductor](https://www.bioconductor.org/) tools for R programming users. 
+
+[Bioconductor](https://www.bioconductor.org/) is a project with the mission to catalog, support, and disseminate bioinformatics open-source R packages. Packages have to go through a review process before being included. 
 
 
 ## CyVerse
 
-[CyVerse](https://cyverse.rocks/about) is a  similar computing platform that also offers computing resources for storing, sharing, and working with data with a graphical interface, as well as an API. Computing was previously offered using the cloud computing platform from CyVerse called [Atmosphere](https://cyverse.org/news/refocusing-atmosphere-support-cloud-native-development), which relied on users using virtual machines. Users will now use a new version of Atmosphere with partnership with [Jetstream](https://jetstream-cloud.org/). This allows users to use containers for easier collaboration and also offers US users more computing power and storage. Originally called iPlant Collaborative, it was started by a funding from the National Science Foundation (NSF) to support life sciences research, particularly to support ecology, biodiversity, sustainability, and agriculture research. It is led by the University of Arizona, the Texas Advanced Computing Center, and Cold Spring Harbor Laboratory. It offers access to an environment for performing analyses with Jupyter (for Python mostly) and RStudio (for R mostly) and a variety of tools for Genomic data analysis. See [here](https://cyverse.atlassian.net/wiki/spaces/DEapps/pages/241882146/List+of+Applications) for a list of applications that are supported by CyVerse.  Note that you can also install tools on both platforms. Both CyVerse and Galaxy offer lots of helpful documentation, to help users get started with informatics analyses.
+
+[CyVerse](https://cyverse.rocks/about) is a similar computing platform that also offers computing resources for storing, sharing, and working with data using a graphical interface, as well as an API. Computing was previously offered using CyVerse's cloud computing platform called [Atmosphere](https://cyverse.org/refocusing-atmosphere-to-support-cloud-native-development), which relied on virtual machines. Users will now use a new version of Atmosphere in partnership with [Jetstream](https://jetstream-cloud.org/). This allows users to use containers for easier collaboration and also offers US users more computing power and storage. Originally called iPlant Collaborative, it was started through funding from the National Science Foundation (NSF) to support life sciences research, particularly ecology, biodiversity, sustainability, and agriculture research. It is led by the University of Arizona, the Texas Advanced Computing Center, and Cold Spring Harbor Laboratory. It offers access to an environment for performing analyses with Jupyter (mostly for Python) and RStudio (mostly for R) and a variety of tools for genomic data analysis. See [here](https://cyverse.atlassian.net/wiki/spaces/DEapps/pages/241882146/List+of+Applications) for a list of applications that are supported by CyVerse. Note that you can also install tools on both platforms. Both CyVerse and Galaxy offer lots of helpful documentation to help users get started with informatics analyses.
+
 
 See [here](https://learning.cyverse.org/) to learn more.
 
@@ -125,9 +133,9 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 SciServer is accessible through a web browser and allows users to store, upload, download, share, and work with data and common tools on the same platform. It was originally built for the astrophysics community (and called SkyServer), but it has now been adapted for use by scientists of all fields and is indeed used by many in the genomics field. It allows users to use Python and R in environments like Jupyter notebooks and RStudio, also supports SQL (Structured Query Language) for data querying and management, and is built on the use of Docker. 
 
-The main idea of SciServer, is based on this premise: "bring the analysis to the data". It is free to use after users register. However, users can buy extra resources. Users can keep data private or share their data. 
+SciServer is based on this premise: "bring the analysis to the data". It is free to use after users register; however, users can buy extra resources. Users can keep data private or share their data. 
 
-As compared to Galaxy, this resources may be better for users with a bit more familiarity with informatics but who require more flexibility, particularly for working with collaborators such as physicists or material scientists as there are more tools supported across disciplines. In addition it also gives users access to very large data sets on Petabyte-scale (note that some of these require special permission to use) and supports developers to create their own web interfaces called SciUIs for particular use cases.
+As compared to Galaxy, these resources may be better for users with a bit more familiarity with informatics but who require more flexibility. Specifically, these resources are ideal for working with collaborators such as physicists or material scientists, as more tools are supported across disciplines. In addition, SciServer also gives users access to very large, petabyte-scale data sets (note that some of these require special permission to use) and supports developers in creating their own web interfaces, called SciUIs, for particular use cases.
 
 See @sciserver_2020 for more information.  
 
@@ -140,7 +148,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 ## Materials Cloud
 
-Another resource that might be of interest to Python users, particular those who collaborate with material scientists, is Materials Cloud. It is designed to promote reproducible work, collaboration, and sharing of resources among scientists, particularly for simulations for the materials science field. Users can share data in a citable way, download data, upload data, share workflows, and perform analyzes.
+Another resource that might be of interest to Python users, particularly those who collaborate with material scientists, is Materials Cloud. It is designed to promote reproducible work, collaboration, and sharing of resources among scientists, particularly for simulations for the materials science field. Users can share data in a citable way, download data, upload data, share workflows, and perform analyses.
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Materials Cloud resources are based on allowing users to Learn about resources, Work using the resources, Discover aspects about data that is available, Explore data with interactive graphs, and archive to store and share data.", out.width= "100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.gfd56752f25_0_6")
@@ -157,11 +165,11 @@ To learn more about Materials Cloud, check out @talirz_materials_2020.
 
 ## Overture
 
-Overture is a relatively new option for perform large-scale genomic data analyses. You can upload, download, manage, analyze and share your data with authentication and authorization methods to add security. Although designed for genomic research, the [data management system](https://www.overture.bio/documentation/dms/) can be used for other scientific domains. Currently, additional products are still being developed for analysis, visualization, and sharing. However, several collaborations have created new incredible resources using some of the existing and developing products that might be useful for your research. Alternatively, Overture has options to help you create your own platform, see [here](https://www.overture.bio/services/) for more information. It is compatible with Google, Microsoft Azure, and PostgreSQL for storage options. 
+Overture is a relatively new option for performing large-scale genomic data analyses. You can upload, download, manage, analyze, and share your data with authentication and authorization methods to add security. Although designed for genomic research, the [data management system](https://www.overture.bio/documentation/dms/) can be used for other scientific domains. Currently, additional products are still being developed for analysis, visualization, and sharing. However, several collaborations have created incredible new resources using some of the existing and developing products that might be useful for your research. Alternatively, Overture has options to help you create your own platform; see [here](https://www.overture.bio/services/) for more information. It is compatible with Google, Microsoft Azure, and PostgreSQL for storage options. 
 
 These collaborations using Overture products can be found on the [case studies](https://www.overture.bio/case-studies/) page of the [Overture website](https://www.overture.bio/).
 
-For example, the [Cancer Genome Collaboratory](https://cancercollaboratory.org/) is one such collaboration. This is A cloud-based resource that allows researchers to perform analyses using [International Cancer Genome Consortium (ICGC)](https://en.wikipedia.org/wiki/International_Cancer_Genome_Consortium) cancer genome data, which includes tumor mutation data from the [The Cancer Genome Atlas (TCGA)](https://en.wikipedia.org/wiki/The_Cancer_Genome_Atlas) and the [Pan-Cancer Analysis of Whole Genomes (PCAWG)](https://dcc.icgc.org/pcawg) mutation data. See [here](https://cancercollaboratory.org/services-cloud-resources) for information about billing, storage capacity, access, and security. 
+For example, the [Cancer Genome Collaboratory](https://cancercollaboratory.org/) is one such collaboration. This is a cloud-based resource that allows researchers to perform analyses using [International Cancer Genome Consortium (ICGC)](https://en.wikipedia.org/wiki/International_Cancer_Genome_Consortium) cancer genome data, which includes tumor mutation data from [The Cancer Genome Atlas (TCGA)](https://en.wikipedia.org/wiki/The_Cancer_Genome_Atlas) and the [Pan-Cancer Analysis of Whole Genomes (PCAWG)](https://dcc.icgc.org/pcawg). See [here](https://cancercollaboratory.org/services-cloud-resources) for information about billing, storage capacity, access, and security. 
 
 In addition, Overture products have also been used to create other data resources, such as the [Kids First Data Resource Portal](https://portal.kidsfirstdrc.org/login), which has childhood cancer and birth defect genomic data for over 76,000 samples, and the [National Cancer Institute's Genomic Data Commons Data Portal](https://portal.gdc.cancer.gov/), which also includes [The Cancer Genome Atlas (TCGA)](https://en.wikipedia.org/wiki/The_Cancer_Genome_Atlas) and [Therapeutically Applicable Research to Generate Effective Treatments (TARGET)](https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000218.v24.p8) data. The portal also supports some basic [analyses](https://portal.gdc.cancer.gov/analysis), such as clinical data statistics and survival analysis. 
 
@@ -169,7 +177,7 @@ In addition, Overture products have also been used to create other data resource
 
 This section was written by Brigitte Raumann:
 
-[Globus](www.globus.org) (www.globus.org) is a cloud-hosted service for secure, reliable research data management that allows data movement, synchronization, sharing, and discovery. Users access Globus services via a [web interface](https://app.globus.org/) or [command line interface](https://docs.globus.org/cli/).  Developers can [integrate Globus capabilities](https://www.globus.org/platform) into their research applications and [data portals](https://docs.globus.org/modern-research-data-portal/). 
+[Globus](https://www.globus.org) (www.globus.org) is a cloud-hosted service for secure, reliable research data management that allows data movement, synchronization, sharing, and discovery. Users access Globus services via a [web interface](https://app.globus.org/) or [command line interface](https://docs.globus.org/cli/). Developers can [integrate Globus capabilities](https://www.globus.org/platform) into their research applications and [data portals](https://docs.globus.org/modern-research-data-portal/). 
 
 The [Globus Transfer](https://www.globus.org/data-transfer) service provides 'fire and forget' high-performance data transfer and synchronization between storage systems such as laptops, supercomputers, tape archives, HPC clusters, and scientific instruments, as well as public cloud storage. Globus enables researchers to share their data without the need to create temporary collaborator accounts on local storage systems and without the need to copy data to an external file sharing service. This technology ensures that data movement and sharing of hundreds of terabytes of data, in some cases petabytes of data, can be done in a manner that ensures data confidentiality, minimizes demands on researchers’ time, and makes efficient use of available cyberinfrastructure. Transfer and sharing of [protected data](https://www.globus.org/protected-data), such as HIPAA-regulated data, is also supported. Globus can also [automate tasks](https://www.globus.org/platform/services/flows) as simple as replicating data across multiple storage systems or as intricate as managing multiple conditional data analysis and results distribution tasks, with optional human intervention where needed for review and confirmation. 
 
@@ -181,12 +189,12 @@ The University of Chicago develops and operates Globus and provides free file tr
 
 ## BaseSpace Sequence Hub
 
-[BaseSpace](https://basespace.illumina.com/) is a platform that allows for data analysis of Illumina sequencing data and syncs easily with any Illumina sequencing machines that you might work with. There are many [applications](https://www.illumina.com/products/by-type/informatics-products/basespace-sequence-hub/apps.html) available to help you with your genomics research. They offer a 30 day free trial.
+[BaseSpace](https://basespace.illumina.com/) is a platform that allows for data analysis of Illumina sequencing data and syncs easily with any Illumina sequencing machines that you might work with. There are many [applications](https://www.illumina.com/products/by-type/informatics-products/basespace-sequence-hub/apps.html) available to help you with your genomics research. They offer a 30-day free trial.
 
 
 ## ATLAS.ti
 
-[ATLAS.ti](https://atlasti.com/) is designed particularly for qualitative analysis. You can use a variety of data types including video, audio, images, surveys, and social media data. A variety of tools, particularly for text data analysis are provided for methods such as [sentiment analysis](https://en.wikipedia.org/wiki/Sentiment_analysis), which is the process of assigning a general tone or feeling to text and [named-entity recognition](https://en.wikipedia.org/wiki/Named-entity_recognition), which is the process of extracting certain characteristics from texts that are what is called a [named entity] or a real-world object - such as a person's name or address. Such analyses can be helpful for understanding behaviors that might be associated with cancer risk. Although this type of analysis can be performed using R or Python among other coding languages, ATLAS.ti offers a nice graphical user interface to perform these types of analyses.Furthermore ATLAS.ti offers a great deal of flexibility about such analyses using different data types easily.
+[ATLAS.ti](https://atlasti.com/) is designed particularly for qualitative analysis. You can use a variety of data types, including video, audio, images, surveys, and social media data. A variety of tools, particularly for text data analysis, are provided for methods such as [sentiment analysis](https://en.wikipedia.org/wiki/Sentiment_analysis), which is the process of assigning a general tone or feeling to text, and [named-entity recognition](https://en.wikipedia.org/wiki/Named-entity_recognition), which is the process of extracting from texts certain characteristics that correspond to what is called a named entity, or a real-world object such as a person's name or address. Such analyses can be helpful for understanding behaviors that might be associated with cancer risk. Although this type of analysis can be performed using R or Python, among other coding languages, ATLAS.ti offers a nice graphical user interface to perform these types of analyses. Furthermore, ATLAS.ti offers a great deal of flexibility for such analyses, enabling users to easily incorporate different data types.
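+
+For readers curious what such an analysis looks like in code, here is a minimal sketch of sentiment analysis in R, assuming the `tidytext` and `dplyr` packages are installed (the example sentence is made up, and none of this code is needed to use ATLAS.ti):
+
+```{r, eval = FALSE}
+library(dplyr)
+library(tidytext)
+
+# A made-up, one-line "document" to score
+text_df <- tibble::tibble(line = 1, text = "We are thrilled with these promising results")
+
+text_df %>%
+  unnest_tokens(word, text) %>%                    # split the text into individual words
+  inner_join(get_sentiments("bing"), by = "word")  # keep words found in the Bing sentiment lexicon
+```
+
+Each remaining word is labeled as positive or negative; tallying those labels gives a rough overall tone for the text.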
 
 ```{r, echo = FALSE, fig.alt= "ATLAS.ti brochure"}
 knitr::include_url(url = "https://downloads.atlasti.com/docs/branding/atlasti_brochure_v9_EN_interactive_202110.pdf")
@@ -202,7 +210,7 @@ See [here](https://www.genepattern.org/user-guide) to access their user guide an
 
 ## XNAT
 
-[XNAT](https://www.xnat.org/about/) offers computing resources and tools for performing imaging analysis and for storing and sharing imaging data in a HIPAA complaint manner (more on that in the coming). Developed by the [Bukner lab](https://cnl.rc.fas.harvard.edu/) previously at the Washington University and now at Harvard, it supports a variety of imaging data as well as other data types like clinical data.  Some tools can be used with a graphical interface and others with the command-line. See [here](https://wiki.xnat.org/documentation/case-studies) for example use cases. There is also a great deal of documentation available about how to use the tools and resources available at https://wiki.xnat.org/documentation.
+[XNAT](https://www.xnat.org/about/) offers computing resources and tools for performing imaging analysis and for storing and sharing imaging data in a HIPAA-compliant manner (more on that to come). Developed by the [Buckner lab](https://cnl.rc.fas.harvard.edu/), previously at Washington University and now at Harvard, it supports a variety of imaging data as well as other data types like clinical data. Some tools can be used with a graphical interface and others with the command line. See [here](https://wiki.xnat.org/documentation/case-studies) for example use cases. There is also a great deal of documentation about how to use the tools and resources available at https://wiki.xnat.org/documentation.
 
 ```{r, fig.align="center", fig.alt = "video", echo=FALSE, out.width="100%"}
 knitr::include_url("https://www.youtube.com/embed/ENk589mOkhI")
@@ -224,7 +232,7 @@ For those interested, Gordon Harris and others are also working on a project cal
 
 ## PRISM
 
-The Platform for Imaging in Precision Medicine called PRISM works behind the scenes in the Cancer Imaging Archive (TCIA) to allow users to work with the vast data available in TCIA, in terms of both imaging data and clinical information.  
+The Platform for Imaging in Precision Medicine, called PRISM, works behind the scenes in the Cancer Imaging Archive (TCIA) to allow users to work with the vast data available in TCIA, in terms of both imaging data and clinical information.  
 
 According to Fred Prior:
 
@@ -238,7 +246,7 @@ See this [article](https://ascopubs.org/doi/full/10.1200/CCI.20.00001) for more
 
 ## Conclusion
 
-We hope that this chapter has given you some more perspective on how the various computing options available designed for researchers like you. We also hope that you may have learned about another platform that can help you to make your research faster and more flexible.
+We hope that this chapter has given you some more perspective on the various computing options designed for researchers like you. We also hope that you may have learned about some potential platforms that can help you make your research faster and more flexible.
 
 In conclusion, here are some of the major take-home messages:
 
diff --git a/07-Computing_Decisions.Rmd b/07-Computing_Decisions.Rmd
index 97a5f2d..67463a7 100644
--- a/07-Computing_Decisions.Rmd
+++ b/07-Computing_Decisions.Rmd
@@ -7,7 +7,7 @@ ottrpal::set_knitr_image_path()
 
 # Computing Resource Decisions
 
-Now that we have discussed a bit about how computers perform computations and described a bit about computing options, lets discuss more about how you might choose the right computing resources for your work. In this chapter we will discuss aspects that you should consider when deciding between different computing resource options. 
+Now that we have discussed a bit about how computers perform computations and described a bit about computing options, let's discuss more about how you might choose the right computing resources for your work. In this chapter, we will discuss aspects that you should consider when deciding between different computing resource options. 
 
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Learning Objectives: 1. Recognize the main aspects to focus on when deciding on what computing systems to use. 2. Be aware of the benefits and drawbacks of various options. 3. Know what to watch out for in computing decisions.", out.width="100%"}
@@ -15,7 +15,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 ```
 
 
-To afford you the best opportunity to perform the informatics research that you would like, it is useful to become familiar with the benefits and drawbacks of various computing options. First we will start out with some general considerations that you should think about when beginning to determine what computing option makes sense for your work.
+To help you make an informed decision about computing resources, it is useful to become familiar with the benefits and drawbacks of various computing options. First, we will start out with some general considerations that you should think about when beginning to determine what computing option makes sense for your work.
 
 The following are the major decision points for your computing needs:
 
@@ -44,11 +44,11 @@ Let's take a bit of a deeper dive now for each of these considerations.
 
 ### Computation needs
 
-Now that you know more about determining your personal computer's computing and storage capacity, as well as how to determine or estimate the files sizes that you might use for your research, you can begin to assess if your personal computer is up to the task. When determining what your computing needs might be, remember to evaluate how many files you might use in your analyses, the file sizes, the amount of RAM and CPUs (and possibly GPUs that your computer has) and some level of understanding for how intensive the computing tasks are that you plan to perform. How do you assess this?  If the files that you intend to use in your analysis are quite large for your computer's storage capacity, then it is likely that your computer might struggle to work with such files. This might also be the case if you plan to use many smaller files (such as hundreds or thousands, but smaller files can add up quickly). Finally, if you plan to perform many steps on your files in your analysis this may also require more computing resources than you have available on your current personal computer. Shared computing options will generally have the capacity to allow you to do your work, unless you have very large data needs and you hope to use a very specialized computing platform that may not support large-scale work. Checking with the local or remote computing options that you are interested in about the computing capacity ahead of time before you start an analysis if you have large data analysis plans would be a good idea. Cloud computing options can be great if you need more efficiency, as there are no job queues to worry about like with other more traditional shared resources. 
+Now that you know more about determining your personal computer's computing and storage capacity, as well as how to determine or estimate the file sizes that you might use for your research, you can begin to assess if your personal computer is up to the task. When determining what your computing needs might be, remember to evaluate how many files you might use in your analyses, the file sizes, the amount of RAM and the number of CPUs (and possibly GPUs) that your computer has, and how intensive the computing tasks you plan to perform are. How do you assess this? If the files that you intend to use in your analysis are quite large relative to your computer's storage capacity, then it is likely that your computer might struggle to work with such files. This might also be the case if you plan to use many smaller files (hundreds or thousands of smaller files can add up quickly). Finally, if you plan to perform many steps on your files in your analysis, this may also require more computing resources than you have available on your current personal computer. Shared computing options will generally have the capacity to allow you to do your work, unless you have very large data needs. If you have plans to analyze large datasets, it would be a good idea to check the computing capacity of the local or remote computing options that you are interested in before you start an analysis. Cloud computing options can be great if you need more efficiency, as there are no job queues to worry about like with more traditional shared resources. 
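+
+As a quick, hedged sketch of this kind of assessment in base R (the file name "my_data.csv" is hypothetical), you can check how many cores your machine has and how large a given data file is:
+
+```{r, eval = FALSE}
+parallel::detectCores()          # number of CPU cores on this machine
+file.size("my_data.csv") / 1e9   # size of a hypothetical file, in gigabytes
+```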
 
 ### Data storage
 
-Again, now that you know how to assess the data storage potential of your computer, you can decide if your computer can handle storing all the files that you might wish to use in your analysis. Think about your current data analysis plans but keep in mind your future plans as well. If you hope to replicate experiments with more samples, you might run out of storage. One way around this is to by external additional storage (which is also a good idea for backing up your data!). However, if you think that you might have much larger scale research plans in the future, you might want to think about shared computing options. Cloud computing platforms and more traditional servers have different storage capacities, so it is worth checking out the options that might be helpful for your research. Also keep in mind that it will take time to transfer your data, especially if your data is very large.
+Again, now that you know how to assess the data storage potential of your computer, you can decide if your computer can handle storing all the files that you might wish to use in your analysis. Think about your current data analysis plans, but keep in mind your future plans as well. If you hope to replicate experiments with more samples, you might run out of storage. One way around this is to buy additional external storage (which is also a good idea for backing up your data!). However, if you think that you might have much larger scale research plans in the future, you might want to think about shared computing options. Cloud computing platforms and more traditional servers have different storage capacities, so it is worth checking out the options that might be helpful for your research. Also keep in mind that it will take time to transfer your data, especially if your data is very large.
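+
+One rough way to estimate your current storage needs, assuming your files live in a hypothetical folder called "data/", is to sum the file sizes in R:
+
+```{r, eval = FALSE}
+files <- list.files("data/", full.names = TRUE, recursive = TRUE)  # "data/" is an assumed folder
+sum(file.size(files)) / 1e9                                        # total size in gigabytes
+```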
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Even if your current personal computer can handle your computing and storage needs – consider if you will need more in the near future.", out.width="100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.g117b5133acc_71_57")
@@ -58,7 +58,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 ### Multi-institute collaboration
 
-If you plan to work with others outside of your institute that would not have access to the same local shared computing resources, then remote computing options would be really helpful for allowing your collaborators to work on the same data together. Cloud platforms especially make it easier for collaboration, as everyone can share the exact same computational environment including hardware, software, and datasets.
+If you plan to work with others outside of your institute who would not have access to the same local shared computing resources, then remote computing options would be really helpful for allowing your collaborators to work on the same data together. Cloud platforms make collaboration especially easy, as everyone can share the exact same computational environment, including hardware, software, and datasets.
 
 
 
@@ -90,7 +90,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 ### Costs
 
-Often local shared computing resources at your institute can be much less expensive than some of the common cloud computing options. However, this is not always the case and if you have very specific analysis goals in mind, the benefit of cloud computing resources, is that you typically only pay for the resources that you actually use. This does also involve learning how costs are calculated for the particular cloud resource, which can be a challenge, but many cloud platforms that were designed for research such as [Jetstream](https://jetstream-cloud.org/) or [Terra/AnVIL](https://support.terra.bio/hc/en-us/articles/360029772212-Controlling-Cloud-costs-sample-use-cases) can be very affordable and in some cases some small platforms offer free resources or free trials initially to start.  
+Often, local shared computing resources at your institute can be much less expensive than some of the common cloud computing options. However, this is not always the case, and if you have very specific analysis goals in mind, the benefit of cloud computing resources is that you typically only pay for the resources that you actually use. This also involves learning how costs are calculated for the particular cloud resource, which can be a challenge, but many cloud platforms that were designed for research, such as [Jetstream](https://jetstream-cloud.org/) or [Terra/AnVIL](https://support.terra.bio/hc/en-us/articles/360029772212-Controlling-Cloud-costs-sample-use-cases), can be very affordable; some small platforms also offer free resources or free trials to start.  
 
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "How much money can I spend? Different resources will have different associated costs. Often local shared resources are affordable. If your personal computer can handle your computing tasks, you can also consider using it. Many remote options have free storage up to an amount.", out.width="100%"}
@@ -100,7 +100,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 ### Extra guidance
 
-Cloud computing platforms such as [Galaxy](https://galaxyproject.org/), [AnVIL](https://support.terra.bio/hc/en-us/articles/360029772212-Controlling-Cloud-costs-sample-use-cases), and [GenePattern](https://www.genepattern.org/) offer lots of training material and resources about how to actually perform analyses, especially for genomic analyses. Galaxy also supports other types of data, as do many other platforms, as described in the last chapter. Having the extra guidance like that offered with these types of platforms can be very beneficial to investigators that are trying out new methods!
+Cloud computing platforms such as [Galaxy](https://galaxyproject.org/), [AnVIL](https://support.terra.bio/hc/en-us/articles/360029772212-Controlling-Cloud-costs-sample-use-cases), and [GenePattern](https://www.genepattern.org/) offer lots of training material and resources about how to actually perform analyses, especially genomic analyses. Galaxy also supports other types of data, as do many other platforms, as described in the last chapter. Having the extra guidance offered by these types of platforms can be very beneficial to investigators who are trying out new methods!
 
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Some remote shared resources offer extra support for your informatics work. This can be important to pay attention to when considering which remote shared option to choose. Galaxy, AnVIL, GenePattern, and XNAT are examples of platforms that provide extra guidance.", out.width="100%"}
@@ -128,7 +128,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 ### Data access
 
-Some cloud computing options already have data available that may be of interest for you and your work. For example [Galaxy](https://galaxyproject.org/), [AnVIL](https://support.terra.bio/hc/en-us/articles/360029772212-Controlling-Cloud-costs-sample-use-cases), and [GenePattern](https://www.genepattern.org/) provide access to many genomic datasets. Smaller platforms can also have access to data that may be of specific more clinical interest to you as well such as the [Cancer Genome Collaboratory](https://cancercollaboratory.org/), which provides access to data from the [International Cancer Genome Consortium (ICGC)](https://en.wikipedia.org/wiki/International_Cancer_Genome_Consortium). 
+Some cloud computing options already have data available that may be of interest to you and your work. For example, [Galaxy](https://galaxyproject.org/), [AnVIL](https://support.terra.bio/hc/en-us/articles/360029772212-Controlling-Cloud-costs-sample-use-cases), and [GenePattern](https://www.genepattern.org/) provide access to many genomic datasets. Smaller platforms can also have access to data that may be of specific clinical interest to you, such as the [Cancer Genome Collaboratory](https://cancercollaboratory.org/), which provides access to data from the [International Cancer Genome Consortium (ICGC)](https://en.wikipedia.org/wiki/International_Cancer_Genome_Consortium). 
 
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "If access to large controlled datasets would be helpful for your work, consider cloud options that allow for this. Galaxy, AnVIL, PRISM, Terra, and GenePattern are options that provide access to large controlled datasets.", out.width="100%"}
@@ -155,7 +155,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 ```
 
 
-Recall, that a [command line interface](https://searchwindowsserver.techtarget.com/definition/command-line-interface-CLI) (also known as a character interface) allows users to specify functions with code. 
+Recall that a [command line interface](https://searchwindowsserver.techtarget.com/definition/command-line-interface-CLI) (also known as a character interface) allows users to specify functions with code. 
 
 For example, one could perform functions in R using Bioconductor packages such as [Biostrings](https://bioconductor.org/packages/release/bioc/html/Biostrings.html) with a command line interface:
 
@@ -170,7 +170,7 @@ A situation where you might use **both** a command line interface and a GUI, is
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.g115e0d5ae79_0_30")
 ```
 
-Some cloud computing options will have either interface option while others will only have one. This is an important consideration when you decide what computing resources to use.
+Some cloud computing options will have both interface options, while others will only have one. This is an important consideration when you decide what computing resources to use.
 
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Make sure you consider the type of interface you would like to work with in deciding about shared remote options. Some support only a GUI, some only command line, others support both. Galaxy and CyVerse support both command line and GUI interfaces, while OHIF and GenePattern have only a GUI based interface.", out.width="100%"}
@@ -190,21 +190,21 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 1) **Are local shared computing resources sufficient?**
 
-When a local computing resource solution already works, one may rightly question the time required to learning how to use a new cloud-based platform. However, when local solutions are insufficient or unsustainable (say other users often use up most of the resources or a server is often down), then remote options may be worth considering.
+When a local computing resource solution already works, one may rightly question the time required to learn how to use a new cloud-based platform. In that case, it might be more efficient to keep using the local resource. However, when local solutions are insufficient or unsustainable (say, other users often use up most of the resources, or a server is often down), then remote options may be worth considering.
 
 2) **Do you want to work with especially big or controlled access datasets?**
 
-Increasingly large datasets like the [Genotype -Tissue Expression (GTEx)](https://gtexportal.org/home/), [Therapeutically Applicable Research to Generate Effective Treatments (TARGET)](https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000218.v24.p8) or [The Cancer Genome Atlas (TCGA)](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) are being stored on the cloud-based platforms. If your work relies on being able to access a large dataset like this, then cloud computing resources may be your only practical option. 
+Increasingly, large datasets like [Genotype-Tissue Expression (GTEx)](https://gtexportal.org/home/), [Therapeutically Applicable Research to Generate Effective Treatments (TARGET)](https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000218.v24.p8), and [The Cancer Genome Atlas (TCGA)](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) are being stored on cloud-based platforms. If your work relies on being able to access a large dataset like this, then cloud computing resources may be your only practical option. 
 
 3) **Do you need to work with collaborators, especially those outside of your institute?**
 
-Computational research increasingly involves larger and larger collaborations. While many systems exist to share work and more traditional remote shared resources can work, cloud platforms make it easier for everyone to share the exact same computational environment including hardware, software, and datasets.
+Computational research increasingly involves larger and larger collaborations. While many systems exist to share work, and more traditional remote shared resources can work in these settings, cloud platforms make it easier for everyone to share the exact same computational environment including hardware, software, and datasets.
 
 We will now discuss several opportunities and challenges that cloud computing currently presents.
 
 ### Benefits of Cloud Computing
 
-The state of Cloud computing is continually evolving.  Here, we highlight some of the main current benefits:
+The state of Cloud computing is continually evolving. Here, we highlight some of the main current benefits:
 
 
 1) **Sharing history**  
@@ -222,34 +222,34 @@ By sharing such a history, one can reproduce an analysis in its entirety (if the
 
 2) **Sharing Workflows between Platforms**
 
-While sharing complete analysis histories is for the most part constrained to a particular software platform, a second benefit that has arisen is the ability to share workflows between platforms.[Dockstore](https://dockstore.org/) is a great open-source and free option to share and find bioinformatics workflows that can be launched using different platforms.
+While sharing complete analysis histories is, for the most part, constrained to a particular software platform, a second benefit is the ability to share workflows between platforms. [Dockstore](https://dockstore.org/) is a great open-source and free option to share and find bioinformatics workflows that can be launched using different platforms.
 
-Shown here is a diagram of an analysis pipeline to create a [custom reference](https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/advanced/references) for single cell 10x data using [cell ranger](https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/what-is-cell-ranger) published by the Klarman Cell Observatory on [Dockstore](https://dockstore.org/workflows/github.com/klarman-cell-observatory/cumulus/Cellranger_create_reference:master?tab=dag). Users can launch the workflow on various supported platforms such as Terra or AnVIL.:
+Shown here is a diagram of an analysis pipeline to create a [custom reference](https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/advanced/references) for single cell 10x data using [cell ranger](https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/what-is-cell-ranger) published by the Klarman Cell Observatory on [Dockstore](https://dockstore.org/workflows/github.com/klarman-cell-observatory/cumulus/Cellranger_create_reference:master?tab=dag). Users can launch the workflow on various supported platforms such as Terra or AnVIL:
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Workflow on Dockstore showing launch buttons with multiple platforms", out.width="100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.g117032ee319_0_8")
 ```
 
-This higher level abstraction coupled with container technology allows this multistep analysis to be run with relative ease on supporting platforms like Terra, AnVIL, or [DNAnexus](https://www.dnanexus.com/), which is yet another computing platform company, although with generally more costs associated as compared to Terra or AnVIL.
+The higher-level abstraction, coupled with container technology, allows this multistep analysis to be run with relative ease on supporting platforms like Terra, AnVIL, or [DNAnexus](https://www.dnanexus.com/). DNAnexus is yet another computing platform company, although its services generally cost more compared to Terra or AnVIL.
 
 
 3) **Using Commodity Hardware**
 
-The third Benefit we highlight is the increasing ease by which one can provision commodity hardware at scale.  
+The third benefit we highlight is the increasing ease by which one can provision commodity hardware at scale.  
 
-What this means is that you can pay reasonable costs to complete your analysis in less time by renting hundreds to tens of thousands of Cloud-based computers -- importantly stopping the bill when your analysis is complete.  Specialized hardware like GPUs and large memory nodes are also available for rent allowing you to pay only for what you need. This could be difficult to do with a local server, which might require a great deal of time to increase the storage and computing capacity of the server, and it might be cost prohibitive. 
+What this means is that you can pay reasonable costs to complete your analysis in less time by renting hundreds to tens of thousands of cloud-based computers -- importantly, stopping the bill when your analysis is complete. Specialized hardware like GPUs and large memory nodes are also available for rent, allowing you to pay only for what you need. This could be difficult to do with a local server, which might require a great deal of time to increase its storage and computing capacity, and it might be cost-prohibitive. 
 
 4) **Less Etiquette**
 
-Relative to more traditional shared computing resources, you don't need to worry as much about sharing etiquette with cloud computing options because these resources typically have more than enough to go around. There is a trade off in that you will need to learn how to work with the cloud computing platform, however you can be more independent about what software you use and how many resources you use without bothering others.
+Relative to more traditional shared computing resources, you don't need to worry as much about sharing etiquette with cloud computing options because these resources typically have more than enough to go around. There is a trade-off, in that you will need to learn how to work with the cloud computing platform; however, you can be more independent about what software you use and how many resources you use without bothering others.
 
 5) **GUI Interface**
 
-Although you can work with software that has a GUI interface on a more traditional local or remote shared resource, having a GUI directly built into the computing environment is often not available, while many cloud computing options provide GUI systems, which can be helpful for users who are less familiar with writing code for the command line. 
+Although you can work with software that has a GUI on a more traditional local or remote shared resource, a GUI built directly into the computing environment is often not available. However, many cloud computing options provide GUI systems, which can be helpful for users who are less familiar with writing code for the command line. 
 
 6) **General Security**
 
-Often cloud computing options, depending on the size of the resource, will have a team of people working on maintaining the security of the resource. Thus often these resources have more manpower to commit to security than some of the smaller local shared resources. That being said, it depends on the resource, it is a good idea to look into the security measures of resources that you are considering to use. Furthermore, this does not necessarily mean that the security meets the requirements for certain data privacy protections. See the next section for more on this.  
+Often cloud computing options, depending on the size of the resource, will have a team of people maintaining the security of the resource. Thus, these resources often have more manpower to commit to security than some of the smaller local shared resources. That being said, it depends on the resource, and it is a good idea to look into the security measures of any resource you are considering. Furthermore, this does not necessarily mean that the security meets the requirements for certain data privacy protections. See the next section for more on this.  
 
 ### Challenges of Cloud Computing
 
@@ -257,13 +257,13 @@ Balancing these benefits are the following challenges:
 
 1) **Data Transfer**  
 
-Data transfer and data management remains a cumbersome task.  While storing data in the cloud has its advantages, it also has corresponding storage costs. Thus, careful planning is necessary with regards to what data will be stored where, as well as budgeting the time necessary to transfer data back and forth. However if you have yet to start work on a local shared computing resource than the work to transfer may be comparable. Additionally if you plan to use public data that is accessible through a platform designed for research than you may in fact have less data transfer needs when working with a cloud computing option. 
+Data transfer and data management remain cumbersome tasks. While storing data in the cloud has its advantages, it also has corresponding storage costs. Thus, careful planning is necessary with regards to what data will be stored where, as well as budgeting the time necessary to transfer data back and forth. However, if you have yet to start work on a local shared computing resource, then the work to transfer may be comparable. Additionally, if you plan to use public data that is accessible through a platform designed for research, then you may in fact have fewer data transfer needs when working with a cloud computing option. 
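+
+When budgeting that planning and transfer time, simple arithmetic goes a long way. The sketch below uses an assumed bandwidth and storage price; check your provider's actual rates:
+
+```{r, eval = FALSE}
+data_gb        <- 500   # size of the dataset to move
+bandwidth_mbps <- 100   # assumed sustained network speed, in megabits/second
+
+# Transfer time: gigabytes -> megabits, divided by speed, converted to hours.
+(data_gb * 8 * 1000) / bandwidth_mbps / 3600  # ~11 hours at 100 Mbps
+
+price_gb_month <- 0.02  # assumed standard-storage price, USD per GB per month
+data_gb * price_gb_month  # ~$10 per month to keep the data in cloud storage
+```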
 
 2) **Data Privacy**  
 
 Most cloud resources offer features that make it easier to access and share data, and these features often come at the **expense of data privacy**. Thus, special precautions must be implemented to securely store protected datasets such as human genome sequences and electronic health records. 
 
-Some of the specialized research platforms allow for this as described in the previous chapter. Make sure you check what is required to set up an extra privacy protection - often this will not be automatic. Also keep in mind that although local shared resources like that of your university often have good security and data privacy policies and protection mechanisms, this is not always the case. It is worth investigating the methods  the each resource uses that you are interested in. 
+Some of the specialized research platforms allow for this, as described in the previous chapter. Make sure you check what is required to set up extra privacy protections - often this will not be automatic. Also keep in mind that although local shared resources like those of your university often have good security and data privacy policies and protection mechanisms, this is not always the case. It is worth investigating the methods each resource uses specifically. 
 
 <div class = "notice">
 
@@ -277,19 +277,19 @@ Controlling costs, especially with regards to storage and computing, presents a
 
 <div class = "notice">
 
-To avoid costly accidents make sure you are aware of the billing for the resources you are using and you inform students and other lab members.
+To avoid costly accidents, make sure you are aware of the billing for the resources you are using and that you inform students and other lab members.
 
 </div>
 
 
 4) **IT**  
 
-A final challenge is that many IT support staff do not have extensive experience managing cloud resources.  Should IT choose to support analysis on the cloud, they would face the aforementioned challenges of understanding and supporting data management, security compliance, and cost management.  Fortunately, large initiatives like AnVIL, [Galaxy](https://usegalaxy.org/), and CyVerse continue to work on democratizing access to cloud computing by tackling many of these challenges.  
+A final challenge is that many IT support staff do not have extensive experience managing cloud resources. Should IT choose to support analysis on the cloud, they would face the aforementioned challenges of understanding and supporting data management, security compliance, and cost management. Fortunately, large initiatives like AnVIL, [Galaxy](https://usegalaxy.org/), and CyVerse continue to work on democratizing access to cloud computing by tackling many of these challenges.  
 
 
 ## Choosing between remote sharing options
 
-The final major decision, should you decide that you want to go with a remote sharing option is to decide which remote computing resource to go with. 
+Should you decide that you want to go with a remote sharing option, the final major decision is which remote computing resource to use. 
 
 ```{r, fig.align='center', echo = FALSE, fig.alt= "Choosing the right remote shared option", out.width="100%"}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHEAbES1Agjy7Ex2IpVAoUIoBFbsq0/edit#slide=id.g117b5133acc_71_185")
@@ -310,7 +310,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1B4LwuvgA6aUopOHE
 
 ## Overall Decision Process
 
-We suggest evaluating your computing needs based on the following decision tree. The tree reads from left to right and you can click on the image to zoom. 
+We suggest evaluating your computing needs based on the following decision tree. The tree reads from left to right, and you can click on the image to zoom. 
 
 <div id="C660C4AD4516EA9D6EEABE9A9B7980244BE_91242"><div id="C660C4AD4516EA9D6EEABE9A9B7980244BE_91242_robot"><a href="https://cloud.smartdraw.com/share.aspx/?pubDocShare=C660C4AD4516EA9D6EEABE9A9B7980244BE" target="_blank"><img src="https://cloud.smartdraw.com/cloudstorage/C660C4AD4516EA9D6EEABE9A9B7980244BE/preview2.png"></a></div></div><script src="https://cloud.smartdraw.com/plugins/html/js/sdjswidget_html.js" type="text/javascript"></script><script type="text/javascript">SDJS_Widget("C660C4AD4516EA9D6EEABE9A9B7980244BE",91242,0,"");</script><br/>
 
@@ -323,7 +323,7 @@ In conclusion, here are some of the major take-home messages:
 1) The three major computing decisions are: 
  - Personal computer vs Shared resource
  - Local shared resource vs. Remote Shared resource
- -  Which shared resource. 
+ - Which shared resource
 
 Start first with determining if your personal computer can handle your work or if you have plans to collaborate with others at different institutes.
 
@@ -332,7 +332,7 @@ Start first with determining if your personal computer can handle your work or i
 - How computationally intensive will the work be? 
 - How much data storage is needed now, and how much will be needed later?
 - Do I plan on collaborating with others outside my institute? 
--  Does my data that need extra privacy protection? 
+- Does my data need extra privacy protection? 
 - How much money can I spend on computing? 
 - Do I want extra guidance for my informatics work? 
 - Do I need flexibility? Might I work with more data modalities in the future?
 - Do I need scalability? Will I soon work with more data? 
@@ -343,7 +343,7 @@ Start first with determining if your personal computer can handle your work or i
 
 4) The main drawbacks of cloud options in general are: 
 
-- If you were already using a shared resource then migrating to the cloud will require data transfer effort and time
+- If you were already using a shared resource, then migrating to the cloud will require data transfer effort and time
 - Some cloud computing options do not provide data privacy protection that may be needed for certain types of data
 - Cost calculations can be especially confusing, and you will need to learn how billing works for the particular resource you are interested in using
 - There will be less IT support from your local IT department, as many of these resources have their own infrastructure; however, many options provide their own guidance and support
diff --git a/resources/dictionary.txt b/resources/dictionary.txt
index 5ed2c9d..13b6612 100644
--- a/resources/dictionary.txt
+++ b/resources/dictionary.txt
@@ -103,6 +103,7 @@ mypone
 NCI
 NCSA
 NHGRI
+NICS
 NIS
 NLP
 n2c2
@@ -115,6 +116,7 @@ PKCS
 PoLP
 PostgreSQL
 proteomic
+PSC
 punchcards
 QuIP
 Raumann
@@ -127,6 +129,7 @@ scalable
 scalability
 SciServer
 SciUIs
+SDSC
 SkyServer
 semiconductive
 SGE