diff --git a/.github/workflows/jekyll-gh-pages.yml b/.github/workflows/jekyll-gh-pages.yml
index bc7b4b6..622ef9c 100644
--- a/.github/workflows/jekyll-gh-pages.yml
+++ b/.github/workflows/jekyll-gh-pages.yml
@@ -1,5 +1,5 @@
# Sample workflow for building and deploying a Jekyll site to GitHub Pages
-name: Deploy Jekyll with GitHub Pages dependencies preinstalled
+name: Deploy Jekyll Docs site
on:
# Runs on pushes targeting the default branch
diff --git a/docs/00-index.md b/docs/00-index.md
index 2911dea..56fb13b 100644
--- a/docs/00-index.md
+++ b/docs/00-index.md
@@ -10,7 +10,7 @@ nav_order: 1
-
diff --git a/docs/03-reference-material/01-quickstart.md b/docs/03-reference-material/01-quickstart.md
index d60af4d..d0c2689 100644
--- a/docs/03-reference-material/01-quickstart.md
+++ b/docs/03-reference-material/01-quickstart.md
@@ -29,22 +29,22 @@ quickstart
1. First you will need to download a copy of the pipeline to a location where you can configure and execute it. Navigate to our GitHub repository and retrieve the latest tag information.
2. Next, you can use `git` to clone a copy of the pipeline to your working environment:
```bash
- git clone https://github.com/Tuks-ICMM/Pharmacogenetic-Analysis-Pipeline/releases/tag/{{TAG_VERSION_HERE}} .
+ git clone --branch {{TAG_VERSION_HERE}} https://github.com/Tuks-ICMM/Population-Structure-Workflow.git .
```
{: .normal }
- > Tags are available on our GitHub repository under the [releases](https://github.com/Tuks-ICMM/Pharmacogenetic-Analysis-Pipeline/releases) page.
+ > Tags are available on our GitHub repository under the [releases](https://github.com/Tuks-ICMM/Population-Structure-Workflow/releases) page.
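Alternatively, a minimal sketch for listing the published tags from the command line (assuming `git` is installed; the repository URL matches the releases page above):
```bash
# List the tags available on the remote repository
git ls-remote --tags https://github.com/Tuks-ICMM/Population-Structure-Workflow.git
```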
## Prepare data and Metadata
1. In order to execute the _{{ site.title }}_, you will need to configure the pipeline as well as provide information about the analysis you wish to perform. This involves the following configuration files:
- - `config/config.json` ([General configuration](https://tuks-icmm.github.io/Pharmacogenetic-Analysis-Pipeline/overview/configuration#setting-global-configuration))
- - `input/datasets.csv` ([Dataset declarations](https://tuks-icmm.github.io/Pharmacogenetic-Analysis-Pipeline/overview/data-requirements#datasets--dataset-files))
- - `input/samples.csv` ([Sample metadata](https://tuks-icmm.github.io/Pharmacogenetic-Analysis-Pipeline/overview/data-requirements#samples))
- - `input/locations.csv` ([Genomic location metadata](https://tuks-icmm.github.io/Pharmacogenetic-Analysis-Pipeline/overview/data-requirements#genomic-locations))
- - `input/transcripts.csv` ([Transcript selection](https://tuks-icmm.github.io/Pharmacogenetic-Analysis-Pipeline/overview/data-requirements#samples))
+ - `config/config.json` ([General configuration](https://tuks-icmm.github.io/Population-Structure-Workflow/overview/configuration#setting-global-configuration))
+ - `input/datasets.csv` ([Dataset declarations](https://tuks-icmm.github.io/Population-Structure-Workflow/overview/data-requirements#datasets--dataset-files))
+ - `input/samples.csv` ([Sample metadata](https://tuks-icmm.github.io/Population-Structure-Workflow/overview/data-requirements#samples))
+ - `input/locations.csv` ([Genomic location metadata](https://tuks-icmm.github.io/Population-Structure-Workflow/overview/data-requirements#genomic-locations))
+ - `input/transcripts.csv` ([Transcript selection](https://tuks-icmm.github.io/Population-Structure-Workflow/overview/data-requirements#samples))
2. Following configuration, you will need to provide the input data files themselves.
- - `.vcf.gz` files can be compressed but must be accompanied by a tabix index file ([Discussion here](https://tuks-icmm.github.io/Pharmacogenetic-Analysis-Pipeline/overview/data-requirements#compression-and-indexing))
- - `.fasta.gz` files for reference sequences must be accompanied by a sequence dictionary file (`.dict`), a fasta index file (`.fa.gz.fai` or `fasta.gz.fai`) and a BGZIP-index (`.fa.gz.gzi`) ([Discussion here](https://tuks-icmm.github.io/Pharmacogenetic-Analysis-Pipeline/overview/configuration#reference-genomes)).
+ - `.vcf` files may be compressed (`.vcf.gz`), in which case they must be accompanied by a tabix index file ([Discussion here](https://tuks-icmm.github.io/Population-Structure-Workflow/overview/data-requirements#compression-and-indexing))
+ - `.fasta.gz` files for reference sequences must be accompanied by a sequence dictionary file (`.dict`), a fasta index file (`.fa.gz.fai` or `.fasta.gz.fai`), and a BGZIP index (`.fa.gz.gzi`) ([Discussion here](https://tuks-icmm.github.io/Population-Structure-Workflow/overview/configuration#reference-genomes)). A sketch for generating these companion files follows this list.
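The exact commands depend on your toolchain; as a minimal sketch, assuming `bgzip` and `tabix` (htslib) and `samtools` are installed, and using hypothetical file names:
```bash
# Compress and index a VCF (produces example.vcf.gz and example.vcf.gz.tbi)
bgzip example.vcf
tabix -p vcf example.vcf.gz

# Compress and index a reference FASTA
# (produces reference.fasta.gz, reference.fasta.gz.fai and reference.fasta.gz.gzi)
bgzip reference.fasta
samtools faidx reference.fasta.gz

# Create the sequence dictionary (produces reference.dict)
samtools dict reference.fasta.gz -o reference.dict
```
{: .normal }
> The file names above are placeholders; substitute the datasets and reference genome declared in your configuration.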
## Execute analysis
1. To execute the analysis, we need to compile our metadata and auto-generate a script that can be queued on the batch scheduler. To do this, you can use the `run.py` script, which generates and queues a hidden script, `.run.sh`, written for your environment. For example:
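A minimal sketch of such an invocation, assuming `run.py` can be called without additional arguments (any flags it accepts are an assumption here):
```bash
# Hypothetical invocation: compiles the metadata, writes the hidden .run.sh
# script and submits it to the batch scheduler.
python run.py
```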
diff --git a/docs/03-reference-material/04-roadmap.md b/docs/03-reference-material/04-roadmap.md
index 88013b6..8165bb8 100644
--- a/docs/03-reference-material/04-roadmap.md
+++ b/docs/03-reference-material/04-roadmap.md
@@ -27,7 +27,7 @@ Changelog
---
# Roadmap
-See our [Issues tracker](https://github.com/Tuks-ICMM/Pharmacogenetic-Analysis-Pipeline/issues) on GitHub for a list of proposed features (and known issues).
+See our [Issues tracker](https://github.com/Tuks-ICMM/Population-Structure-Workflow/issues) on GitHub for a list of proposed features (and known issues).
- Q1-Q2 2023
diff --git a/docs/_includes/head_custom.html b/docs/_includes/head_custom.html
index b125952..8c2e817 100644
--- a/docs/_includes/head_custom.html
+++ b/docs/_includes/head_custom.html
@@ -3,7 +3,7 @@
content="A Snakemake powered pipeline developed to perform variant-effect-prediction, frequency analysis and Admixture analysis given multiple Variant Call Format datasets. This has been developed in partia...">
-
+
@@ -15,7 +15,7 @@
-
+