diff --git a/Administration/reverse-proxying.md b/Administration/reverse-proxying.md
index bdeefb7af..0edcf09f8 100644
--- a/Administration/reverse-proxying.md
+++ b/Administration/reverse-proxying.md
@@ -27,11 +27,11 @@ You must have prior knowledge of
 - Linux console commands
 - DNS Records
 - Public IP addresses
-- [Docker](https://docker.com)
+- [Docker](https://www.docker.com)

 !!!
-**You will have to buy a domain for yourself and configure a `CNAME` for your SillyTavern page. We suggest adding or buying the domain on [Cloudflare](https://cloudflare.com) as this guide will cover how to do this with Cloudflare itself.**
+**You will have to buy a domain for yourself and configure a `CNAME` for your SillyTavern page. We suggest adding or buying the domain on [Cloudflare](https://www.cloudflare.com) as this guide will cover how to do this with Cloudflare itself.**

 ## Installation
diff --git a/Installation/Docker.md b/Installation/Docker.md
index 90c1702f2..c35e2b710 100644
--- a/Installation/Docker.md
+++ b/Installation/Docker.md
@@ -127,7 +127,7 @@ Using Docker on Windows is **_really_** complicated. Not only do you need to act
 It is highly suggested you install SillyTavern by following our [Windows](/Installation/Windows.md) guide. This section is a _rough_ idea of how it can be done on Windows.
 !!!
-1. Install Docker Desktop by following the Docker installation guide [here](https://docs.docker.com/desktop/install/windows-install/).
+1. Install Docker Desktop by following the Docker installation guide [here](https://docs.docker.com/desktop/setup/install/windows-install/).
 2. Install [Git for Windows](https://git-scm.com/download/win).
 3. Clone the SillyTavern repository.
@@ -214,7 +214,7 @@ Even though macOS is similar to Linux, it doesn't have the Docker Engine. You wi
 You will also need to install [Homebrew](https://brew.sh/) in order to install Git on your Mac. This section is a _rough_ idea on how it can be done on macOS.
 !!!
-1. Install Docker Desktop by following the Docker installation guide [here](https://docs.docker.com/desktop/install/mac-install/).
+1. Install Docker Desktop by following the Docker installation guide [here](https://docs.docker.com/desktop/setup/install/mac-install/).
 2. Install `git` using Homebrew.

 ```sh
diff --git a/Usage/API_Connections/Connection-Profiles.md b/Usage/API_Connections/Connection-Profiles.md
index 56da683fe..ddaca7518 100644
--- a/Usage/API_Connections/Connection-Profiles.md
+++ b/Usage/API_Connections/Connection-Profiles.md
@@ -24,7 +24,7 @@ Connection Profiles store the following selections.
 ### Text Completion APIs

 * [System Prompt and its state](/Usage/Prompts/advancedformatting.md#system-prompt)
-* [Instruct Mode state and template](//Usage/Core_Concepts/instructmode.md)
+* [Instruct Mode state and template](/Usage/Prompts/instructmode.md)
 * [Context Template](/Usage/Prompts/advancedformatting.md#context-template)
 * [Tokenizer](/Usage/Prompts/advancedformatting.md#tokenizer)
diff --git a/Usage/API_Connections/OpenRouter.md b/Usage/API_Connections/OpenRouter.md
index 58172f21e..1dd749e85 100644
--- a/Usage/API_Connections/OpenRouter.md
+++ b/Usage/API_Connections/OpenRouter.md
@@ -9,7 +9,7 @@ OpenRouter works by letting you use keys they own to access models like GPT-4 an
 It has a free trial (about $1) and paid access afterward. No subscription or monthly bill - you pay for what you actually use. Some models have free access with a limited context size.

-- [OpenRouter Pricing Details](https://openrouter.ai/docs)
+- [OpenRouter Pricing Details](https://openrouter.ai/models?o=pricing-high-to-low)
 - Create an OpenRouter account: [openrouter.ai](https://openrouter.ai/)

 ![OpenRouter-ConnectionPanel](/static/openrouter-connection.png)
diff --git a/Usage/API_Connections/Scale.md b/Usage/API_Connections/Scale.md
index 2a51b8a25..c61436889 100644
--- a/Usage/API_Connections/Scale.md
+++ b/Usage/API_Connections/Scale.md
@@ -30,4 +30,4 @@ Currently, Scale doesn't support token streaming and configuring parameters like
 ## Credits

-Implementation and documentation are inspired by the work of [khanon](https://github.com/khanonners):
+Implementation and documentation are inspired by the work of khanon on TavernAIScale.
diff --git a/Usage/API_Connections/mancer.md b/Usage/API_Connections/mancer.md
index 28dba394e..fd2f1b85f 100644
--- a/Usage/API_Connections/mancer.md
+++ b/Usage/API_Connections/mancer.md
@@ -1,8 +1,8 @@
 # Mancer

-Mancer is a large language model inferencing service that lets you run whatever prompts you want and doesn't censor responses. Most of the models require a preloaded balance to start chatting, but there is a free model as of writing (2/27/2024).
+Mancer is a large language model inferencing service that lets you run whatever prompts you want and doesn't censor responses. Most of the models require a preloaded balance to start chatting, but there is a free model as of writing (2024-11-28).

-- [Models](https://mancer.tech/models.html)
-- [Pricing](https://mancer.tech/pricing.html)
+- [Models](https://mancer.tech/models)
+- [Pricing](https://mancer.tech/pricing)

 ### How to Get Started
 1. Sign up for an account at [mancer.tech](https://mancer.tech/).
diff --git a/Usage/API_Connections/openai.md b/Usage/API_Connections/openai.md
index f9e9bc9c1..907bec092 100644
--- a/Usage/API_Connections/openai.md
+++ b/Usage/API_Connections/openai.md
@@ -55,7 +55,7 @@ It is possible to configure a proxy/alternative endpoint for OpenAI's backend. T
 Examples of backends which implement this API are:

 * [LM Studio](https://lmstudio.ai/)
-* [LiteLLM](https://litellm.ai/)
+* [LiteLLM](https://www.litellm.ai/)
 * [LocalAI](https://localai.io/)

 This feature is accessed by:
diff --git a/Usage/API_Connections/tabbyapi.md b/Usage/API_Connections/tabbyapi.md
index a17a5ded6..7a55f973d 100644
--- a/Usage/API_Connections/tabbyapi.md
+++ b/Usage/API_Connections/tabbyapi.md
@@ -4,8 +4,8 @@ A FastAPI based application that allows for generating text using an LLM using t
 * [GitHub](https://github.com/theroyallab/tabbyAPI)

 ### Quickstart
-1. Follow the [installation instructions](https://github.com/theroyallab/tabbyAPI/wiki/1.-Getting-Started) on the official TabbyAPI GitHub.
-2. [Create your config.yml](https://github.com/theroyallab/tabbyAPI/wiki/2.-Configuration) to set your model path, default model, sequence length, etc. You can ignore most (if not all) of these settings if you want.
+1. Follow the [installation instructions](https://github.com/theroyallab/tabbyAPI/wiki/01.-Getting-Started) on the official TabbyAPI GitHub.
+2. [Create your config.yml](https://github.com/theroyallab/tabbyAPI/wiki/02.-Server-options) to set your model path, default model, sequence length, etc. You can ignore most (if not all) of these settings if you want.
 3. Launch TabbyAPI.

 If it worked, you should see something like this:
 ![TabbyAPI terminal](/static/tabby-terminal.png)
diff --git a/Usage/Prompts/CFG.md b/Usage/Prompts/CFG.md
index 322063031..12b70d73c 100644
--- a/Usage/Prompts/CFG.md
+++ b/Usage/Prompts/CFG.md
@@ -14,7 +14,8 @@ CFG, or classifier-free guidance is a method that's used to help make parts of a
 ### Supported Backend APIs

-Currently, the supported backends are oobabooga's textgen WebUI, NovelAI, and TabbyAPI. NovelAI has its own documentation for CFG that you can read [here](https://docs.novelai.net/text/cfg.html)
+Currently, the supported backends are oobabooga's textgen WebUI, NovelAI, and TabbyAPI.
+NovelAI had its own [documentation for CFG](https://web.archive.org/web/20240917150051/https://docs.novelai.net/text/cfg.html).

 WARNING: CFG increases vram usage due to ingesting more than 1 prompt! If your GPU memory runs out while generating a prompt with CFG on, consider reducing your context size, using a lesser parameter model, or turning off CFG entirely.
diff --git a/Usage/Prompts/advancedformatting.md b/Usage/Prompts/advancedformatting.md
index 47af7eb54..7ea804b13 100644
--- a/Usage/Prompts/advancedformatting.md
+++ b/Usage/Prompts/advancedformatting.md
@@ -1,5 +1,6 @@
 ---
 order: prompts-10
+route: /usage/core-concepts/advancedformatting/
 ---

 # Advanced Formatting
diff --git a/Usage/Prompts/instructmode.md b/Usage/Prompts/instructmode.md
index 8cf4a53e8..98e4c4d3e 100644
--- a/Usage/Prompts/instructmode.md
+++ b/Usage/Prompts/instructmode.md
@@ -1,5 +1,6 @@
 ---
 order: prompts-30
+route: /usage/core-concepts/instructmode/
 ---

 # Instruct Mode
diff --git a/Usage/User_Settings/uicustomization.md b/Usage/User_Settings/uicustomization.md
index 2287ee27b..e7a63191e 100644
--- a/Usage/User_Settings/uicustomization.md
+++ b/Usage/User_Settings/uicustomization.md
@@ -1,5 +1,6 @@
 ---
 order: user-settings-10
+route: /usage/core-concepts/uicustomization/
 ---

 # UI Customization
diff --git a/Usage/worldinfo.md b/Usage/worldinfo.md
index 3f4b54c65..24a3cc99c 100644
--- a/Usage/worldinfo.md
+++ b/Usage/worldinfo.md
@@ -1,6 +1,7 @@
 ---
 order: 130
 icon: globe
+route: /usage/core-concepts/worldinfo/
 ---

 # World Info
diff --git a/extensions/Extras/Installation.md b/extensions/Extras/Installation.md
index 8a668459e..d87ac6414 100644
--- a/extensions/Extras/Installation.md
+++ b/extensions/Extras/Installation.md
@@ -31,7 +31,7 @@ This method is recommended because Conda makes a 'virtual environment' for the E
 1. Install [Miniconda](https://docs.conda.io/en/latest/miniconda.html)

-   _(Important!) Read [how to use Conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html)_
+   _(Important!) Read [how to use Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html)_

 2. Install [git](https://git-scm.com/downloads)
diff --git a/extensions/Stable-Diffusion.md b/extensions/Stable-Diffusion.md
index 6fb3c8cdb..04c659d7c 100644
--- a/extensions/Stable-Diffusion.md
+++ b/extensions/Stable-Diffusion.md
@@ -42,7 +42,7 @@ Most common Stable Diffusion generation settings are customizable within the Sil
 | [Stability AI](https://platform.stability.ai/) | Cloud, paid |
 | [Stable Diffusion WebUI / AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) | Local, open source (AGPL3), free of charge |
 | [Stable Horde](https://stablehorde.net/) | Cloud, open source (AGPL3), free of charge |
-| [TogetherAI](https://api.together.xyz/models) | Cloud |
+| [TogetherAI](https://docs.together.ai/docs/serverless-models#image-models) | Cloud |

 ## Generation modes
diff --git a/extensions/WebSearch.md b/extensions/WebSearch.md
index b567c8c8f..d3557038a 100644
--- a/extensions/WebSearch.md
+++ b/extensions/WebSearch.md
@@ -31,8 +31,7 @@ Requires a SearXNG instance URL (either private or public). Uses HTML format for
 Learn more:

 ### Tavily AI
-Requires an API key.
-Get the key here:
+Requires an API key. Get the key here:
 :icon-lock:

 ## How to use