diff --git a/_redirects b/_redirects
index 26c7c70d..0fd61491 100644
--- a/_redirects
+++ b/_redirects
@@ -676,4 +676,10 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned/ /docs 302
 /cortex/architecture https://cortex.so/docs/architecture 301
 /cortex/cortex-cpp https://cortex.so/docs/cortex-cpp 301
 /cortex/cortex-llamacpp https://cortex.so/docs/cortex-llamacpp 301
-/api-reference https://cortex.so/api-reference 301
\ No newline at end of file
+/api-reference https://cortex.so/api-reference 301
+/docs/assistants /docs 302
+/docs/server-installation/ /docs/desktop 302
+/docs/server-installation/onprem /docs/desktop 302
+/docs/server-installation/aws /docs/desktop 302
+/docs/server-installation/gcp /docs/desktop 302
+/docs/server-installation/azure /docs/desktop 302
\ No newline at end of file
diff --git a/src/pages/cortex/installation/linux.mdx b/src/pages/cortex/installation/linux.mdx
index f0b08be6..2d49f271 100644
--- a/src/pages/cortex/installation/linux.mdx
+++ b/src/pages/cortex/installation/linux.mdx
@@ -68,7 +68,7 @@ Ensure that your system meets the following requirements to run Cortex:
 
     <Callout type="info">
     - Please check whether your Linux distribution supports desktop, server, or both environments.
-    - For server versions, please refer to the [server installation](https://jan.ai/docs/server-installation).
+    
     </Callout>
 </Tabs.Tab>
 <Tabs.Tab>
diff --git a/src/pages/docs/_meta.json b/src/pages/docs/_meta.json
index 6553964a..fa9a38b7 100644
--- a/src/pages/docs/_meta.json
+++ b/src/pages/docs/_meta.json
@@ -12,20 +12,12 @@
     "title": "Quickstart"
   },
   "desktop": "Desktop",
-  "server-installation": {
-    "display": "hidden",
-    "title": "Server Installation"
-  },
   "data-folder": "Jan Data Folder",
   "user-guides": {
     "title": "BASIC USAGE",
     "type": "separator"
   },
   "models": "Models",
-  "assistants": {
-    "display": "hidden",
-    "title": "Assistants"
-  },
   "tools": "Tools",
   "threads": "Threads",
   "settings": "Settings",
diff --git a/src/pages/docs/assistants.mdx b/src/pages/docs/assistants.mdx
deleted file mode 100644
index 83d0f73d..00000000
--- a/src/pages/docs/assistants.mdx
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: Assistants
-description: A step-by-step guide on customizing your assistant.
-keywords:
-  [
-    Jan,
-    Customizable Intelligence, LLM,
-    local AI,
-    privacy focus,
-    free and open source,
-    private and offline,
-    conversational AI,
-    no-subscription fee,
-    large language models,
-    manage assistants,
-    assistants,
-  ]
----
-
-import { Callout } from 'nextra/components' 
-
-
-# Assistants
-This guide explains how to customize and rename the default assistant.
-
-## Customize the Assistant
-
-To change Jan's default settings, follow these steps:
-
-1. Click the three dots next to the **assistant** dropdown in any thread settings.
-2. Select **Edit global defaults**.
-3. Edit the `assistant.json` file based on your preferences, e.g., set a default prompt in the `instructions` field.
-4. Refresh the application. Your changes should persist for all future threads.
-<br/>
-![Customize Assistant](./_assets/assistant1.gif)
-
-### Rename the Assistant
-
-To rename the assistant, follow the steps below:
-
-1. Select a Thread.
-2. Click on the **three dots (⋮)** in the Thread section.
-3. Select the **Edit Threads Settings** to open the `threads.json` file configurations.
-4. Set the `assistant_name` field under the `assistants` array to the desired assistant name.
-5. Save the file.
-6. Restart the Jan app.
-<br/>
-![Rename Assistant](./_assets/assistant2.gif)
\ No newline at end of file
diff --git a/src/pages/docs/desktop/linux.mdx b/src/pages/docs/desktop/linux.mdx
index 8f3aff08..64440ad0 100644
--- a/src/pages/docs/desktop/linux.mdx
+++ b/src/pages/docs/desktop/linux.mdx
@@ -53,7 +53,7 @@ Ensure that your system meets the following requirements to use Jan effectively:
 
     <Callout type="info">
     - Please check whether your Linux distribution supports desktop, server, or both environments.
-    - For server versions, please refer to the [server installation](https://jan.ai/docs/server-installation).
+    
     </Callout>
 </Tabs.Tab>
 <Tabs.Tab>
diff --git a/src/pages/docs/index.mdx b/src/pages/docs/index.mdx
index 055086ee..a36d3d49 100644
--- a/src/pages/docs/index.mdx
+++ b/src/pages/docs/index.mdx
@@ -25,7 +25,7 @@ import FAQBox from '@/components/FaqBox'
 ![Jan's Cover Image](./_assets/jan-display.png)
 
 
-Jan is a ChatGPT-alternative that runs 100% offline on your [Desktop](/docs/desktop-installation) (or [Server](/docs/server-installation)). Our goal is to make it easy for a layperson[^1] to download and run LLMs and use AI with full control and [privacy](https://www.reuters.com/legal/legalindustry/privacy-paradox-with-ai-2023-10-31/).
+Jan is a ChatGPT alternative that runs 100% offline on your [Desktop](/docs/desktop-installation). Our goal is to make it easy for a layperson[^1] to download and run LLMs and use AI with full control and [privacy](https://www.reuters.com/legal/legalindustry/privacy-paradox-with-ai-2023-10-31/).
 
 Jan is powered by [Cortex](https://cortex.so/), our embeddable local AI engine. 
 
diff --git a/src/pages/docs/server-installation.mdx b/src/pages/docs/server-installation.mdx
deleted file mode 100644
index ed24b45e..00000000
--- a/src/pages/docs/server-installation.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: Server Installation
-description: Jan is a ChatGPT-alternative that runs on your computer, with a local API server.
-keywords:
-  [
-    Jan,
-    Customizable Intelligence, LLM,
-    local AI,
-    privacy focus,
-    free and open source,
-    private and offline,
-    conversational AI,
-    no-subscription fee,
-    large language models,
-    Hardware Setup,
-    GPU,
-  ]
----
-
-import { Cards, Card } from 'nextra/components'
-import childPages from './server-installation/_meta.json';
-
-# Server Installation
-
-<br/>
-
-<Cards
-  children={Object.keys(childPages).map((key, i) => (
-    <Card
-      key={i}
-      title={childPages[key].title}
-      href={childPages[key].href}
-    />
-  ))}
-/>
\ No newline at end of file
diff --git a/src/pages/docs/server-installation/_assets/helm_resources.png b/src/pages/docs/server-installation/_assets/helm_resources.png
deleted file mode 100644
index 70edbbcf..00000000
Binary files a/src/pages/docs/server-installation/_assets/helm_resources.png and /dev/null differ
diff --git a/src/pages/docs/server-installation/_meta.json b/src/pages/docs/server-installation/_meta.json
deleted file mode 100644
index ca180400..00000000
--- a/src/pages/docs/server-installation/_meta.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
-  "onprem": {
-    "title": "On Premise",
-    "href": "/docs/server-installation/onprem"
-  },
-  "aws": {
-    "title": "AWS",
-    "href": "/docs/server-installation/aws"
-  },
-  "gcp": {
-    "title": "GCP",
-    "href": "/docs/server-installation/gcp"
-  },
-  "azure": {
-    "title": "Azure",
-    "href": "/docs/server-installation/azure"
-  }
-}
diff --git a/src/pages/docs/server-installation/aws.mdx b/src/pages/docs/server-installation/aws.mdx
deleted file mode 100644
index 56fd05b0..00000000
--- a/src/pages/docs/server-installation/aws.mdx
+++ /dev/null
@@ -1,182 +0,0 @@
----
-title: AWS
-description: A step-by-step guide on installing the Jan server with AWS.
-keywords:
-  [
-    Jan,
-    Customizable Intelligence, LLM,
-    local AI,
-    privacy focus,
-    free and open source,
-    private and offline,
-    conversational AI,
-    no-subscription fee,
-    large language models,
-    quickstart,
-    getting started,
-    using AI model,
-    installation,
-    "server",
-    "web"
-  ]
----
-
-import { Tabs, Callout, Steps } from 'nextra/components'
-
-# AWS Installation
-To install Jan Server, follow the steps below:
-<Steps>
-### Step 1: Prepare Environment
-1. Go to AWS console -> `EC2`.
-2. Choose an instance with at least `c5.2xlarge` for CPU only or `g5.2xlarge` for NVIDIA GPU support.
-3. Add EBS volume with at least **100GB**.
-4. Configure network security group rules to allow inbound traffic on port `1337`.
-### Step 2: Get Jan Server
-<Tabs items={['Docker', 'Kubernetes - Helm']}>
-  <Tabs.Tab>
-1. Before installing the Jan server, ensure that your system meets the following requirements:
-  - Windows 10 or higher is required to run Jan.
-  - WSL2 must be enabled on Windows to run Jan. Follow the instructions [here](https://learn.microsoft.com/en-us/windows/wsl/install) to install it.
-  - To enable GPU support, you will need:
-    - NVIDIA GPU with CUDA Toolkit 11.7 or higher
-    - NVIDIA driver 470.63.01 or higher
-    - [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-
-2. Install Docker Engine and Docker Compose on your AWS Instance using the following command:
-<Callout type="info">
-To install Docker Engine on Ubuntu, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
-</Callout>
-```bash
-curl -fsSL https://get.docker.com -o get-docker.sh
-sudo sh ./get-docker.sh
-```
-3. Download Jan `docker-compose.yml` file onto your AWS Instance using the following command:
-```bash
-curl https://raw.githubusercontent.com/janhq/jan/dev/docker-compose.yml -o docker-compose.yml
-```
-### Step 3: Docker Configuration
-Once you have installed Docker Engine and Docker Compose on your AWS Instance, you need to set the Docker Compose profile and environment variables.
-
-The available Docker Compose profiles and environment variables are listed below:
-| Docker Compose Profile | Description                                  |
-| ---------------------- | -------------------------------------------- |
-| `cpu-fs`               | Run Jan in CPU mode with the default file system |
-| `cpu-s3fs`             | Run Jan in CPU mode with S3 file system      |
-| `gpu-fs`               | Run Jan in GPU mode with the default file system |
-| `gpu-s3fs`             | Run Jan in GPU mode with S3 file system      |
-
-| Environment Variable    | Description                                                                                             |
-| ----------------------- | ------------------------------------------------------------------------------------------------------- |
-| `S3_BUCKET_NAME`        | S3 bucket name - leave blank for default file system                                                    |
-| `AWS_ACCESS_KEY_ID`     | AWS access key ID - leave blank for default file system                                                 |
-| `AWS_SECRET_ACCESS_KEY` | AWS secret access key - leave blank for default file system                                             |
-| `AWS_ENDPOINT`          | AWS endpoint URL - leave blank for default file system                                                  |
-| `AWS_REGION`            | AWS region - leave blank for default file system                                                        |
-| `API_BASE_URL`          | Jan Server URL; set this to your public IP address or domain name (default: http://localhost:1337)      |
-
-### Step 4: Run Jan Server
-You can run the Jan server in two modes:
-- CPU
-- GPU
-#### Run Jan in CPU Mode
-Run Jan in CPU mode by using the following command:
-
-```bash
-# cpu mode with default file system
-docker compose --profile cpu-fs up -d
-
-# cpu mode with S3 file system
-docker compose --profile cpu-s3fs up -d
-```
-
-#### Run Jan in GPU Mode
-
-1. Check CUDA compatibility with your NVIDIA driver by running `nvidia-smi` and checking the CUDA version in the output:
-
-```bash
-nvidia-smi
-
-# Output
-+---------------------------------------------------------------------------------------+
-| NVIDIA-SMI 531.18                 Driver Version: 531.18       CUDA Version: 12.1     |
-|-----------------------------------------+----------------------+----------------------+
-| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
-| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
-|                                         |                      |               MIG M. |
-|=========================================+======================+======================|
-|   0  NVIDIA GeForce RTX 4070 Ti    WDDM | 00000000:01:00.0  On |                  N/A |
-|  0%   44C    P8               16W / 285W|   1481MiB / 12282MiB |      2%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-|   1  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:02:00.0 Off |                  N/A |
-|  0%   49C    P8               14W / 120W|      0MiB /  6144MiB |      0%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-|   2  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:05:00.0 Off |                  N/A |
-| 29%   38C    P8               11W / 120W|      0MiB /  6144MiB |      0%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-
-+---------------------------------------------------------------------------------------+
-| Processes:                                                                            |
-|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
-|        ID   ID                                                             Usage      |
-|=======================================================================================|
-```
-
-2. Visit the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda/tags) and find the smallest minor version of the image tag that matches your CUDA version (e.g., 12.1 -> 12.1.0).
-
-3. Update line 5 of `Dockerfile.gpu` with the image tag from step 2 (e.g., change `FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04 AS base` to `FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04 AS base`).
-
-4. Run the following command to start Jan in GPU mode:
-
-```bash
-# GPU mode with default file system
-docker compose --profile gpu-fs up -d
-
-# GPU mode with S3 file system
-docker compose --profile gpu-s3fs up -d
-```
-### Step 5: Access the Jan Server
-Once the Jan server is running on your AWS Instance, you can access it using your Instance's public IP address or domain name.
-1. Open a web browser and navigate to the Jan Server URL, typically `http://<INSTANCE_public_IP>:3000` or `http://<domain_name>:3000`.
-  </Tabs.Tab>
-  <Tabs.Tab>
-1. Before installing the Jan server, ensure that your system meets the following requirements:
-  - Windows 10 or higher is required to run Jan.
-  - WSL2 must be enabled on Windows to run Jan. Follow the instructions [here](https://learn.microsoft.com/en-us/windows/wsl/install) to install it.
-  - To enable GPU support, you will need:
-    - NVIDIA GPU with CUDA Toolkit 11.7 or higher
-    - NVIDIA driver 470.63.01 or higher
-    - [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-    - [NVIDIA Device Plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin)
-2. Install Docker Engine and Docker Compose on your AWS Instance using the following command:
-<Callout type="info">
-To install Docker Engine on Ubuntu, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
-</Callout>
-```bash
-curl -fsSL https://get.docker.com -o get-docker.sh
-sudo sh ./get-docker.sh
-```
-3. Download Jan `docker-compose.yml` file onto your AWS Instance using the following command:
-```bash
-curl https://raw.githubusercontent.com/janhq/jan/dev/docker-compose.yml -o docker-compose.yml
-```
-### Step 3: Helm Installation
-1. Get Helm chart from Jan repository by using the following command:
-    ```bash
-      git clone https://github.com/janhq/jan.git
-      cd jan/charts/server/
-      helm install jan-server .
-    ```
-2. Verify and modify the configuration options in the `values.yaml` file under `jan/charts/server`. The following shows the example resources created by the Jan Helm chart:
-    ![Jan server helm argo](./_assets/helm_resources.png)
-### Step 4: Access the Jan Server
-Once the Jan server is running in your cluster, you can access it through the in-cluster service endpoint.
-1. Open a web browser and navigate to the Jan Server URL at `http://jan-server-service-web:1337`.
-  </Tabs.Tab>
-</Tabs>
-<Callout type="info"> 
-**RAG** feature is not yet supported in Docker mode with `s3fs`.
-</Callout>
-</Steps>
\ No newline at end of file
diff --git a/src/pages/docs/server-installation/azure.mdx b/src/pages/docs/server-installation/azure.mdx
deleted file mode 100644
index 0fa56061..00000000
--- a/src/pages/docs/server-installation/azure.mdx
+++ /dev/null
@@ -1,182 +0,0 @@
----
-title: Azure
-description: A step-by-step guide on installing the Jan server with Azure.
-keywords:
-  [
-    Jan,
-    Customizable Intelligence, LLM,
-    local AI,
-    privacy focus,
-    free and open source,
-    private and offline,
-    conversational AI,
-    no-subscription fee,
-    large language models,
-    quickstart,
-    getting started,
-    using AI model,
-    installation,
-    "server",
-    "web"
-  ]
----
-
-import { Tabs, Callout, Steps } from 'nextra/components'
-
-# Azure Installation
-To install Jan Server, follow the steps below:
-<Steps>
-### Step 1: Prepare Environment
-1. Go to Azure console -> `Service` -> `Virtual machines`.
-2. Choose an instance with at least `Standard_F8s_v2` for CPU only or `Standard_NC4as_T4_v3` for NVIDIA GPU support.
-3. Add an Azure Disk or Azure Blob Storage volume with at least **100GB**.
-4. Configure network security group rules to allow inbound traffic on port `1337`.
-### Step 2: Get Jan Server
-<Tabs items={['Docker', 'Kubernetes - Helm']}>
-  <Tabs.Tab>
-1. Before installing the Jan server, ensure that your system meets the following requirements:
-  - Windows 10 or higher is required to run Jan.
-  - WSL2 must be enabled on Windows to run Jan. Follow the instructions [here](https://learn.microsoft.com/en-us/windows/wsl/install) to install it.
-  - To enable GPU support, you will need:
-    - NVIDIA GPU with CUDA Toolkit 11.7 or higher
-    - NVIDIA driver 470.63.01 or higher
-    - [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-
-2. Install Docker Engine and Docker Compose on your Azure VM using the following command:
-<Callout type="info">
-To install Docker Engine on Ubuntu, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
-</Callout>
-```bash
-curl -fsSL https://get.docker.com -o get-docker.sh
-sudo sh ./get-docker.sh
-```
-3. Download Jan `docker-compose.yml` file onto your Azure VM using the following command:
-```bash
-curl https://raw.githubusercontent.com/janhq/jan/dev/docker-compose.yml -o docker-compose.yml
-```
-### Step 3: Docker Configuration
-Once you have installed Docker Engine and Docker Compose on your Azure VM, you need to set the Docker Compose profile and environment variables.
-
-The available Docker Compose profiles and environment variables are listed below:
-| Docker Compose Profile | Description                                  |
-| ---------------------- | -------------------------------------------- |
-| `cpu-fs`               | Run Jan in CPU mode with the default file system |
-| `cpu-s3fs`             | Run Jan in CPU mode with S3 file system      |
-| `gpu-fs`               | Run Jan in GPU mode with the default file system |
-| `gpu-s3fs`             | Run Jan in GPU mode with S3 file system      |
-
-| Environment Variable    | Description                                                                                             |
-| ----------------------- | ------------------------------------------------------------------------------------------------------- |
-| `S3_BUCKET_NAME`        | S3 bucket name - leave blank for default file system                                                    |
-| `AZURE_ACCESS_KEY_ID`     | Azure access key ID - leave blank for default file system                                               |
-| `AZURE_SECRET_ACCESS_KEY` | Azure secret access key - leave blank for default file system                                           |
-| `AZURE_ENDPOINT`          | Azure endpoint URL - leave blank for default file system                                                |
-| `AZURE_REGION`            | Azure region - leave blank for default file system                                                      |
-| `API_BASE_URL`          | Jan Server URL; set this to your public IP address or domain name (default: http://localhost:1337)      |
-
-### Step 4: Run Jan Server
-You can run the Jan server in two modes:
-- CPU
-- GPU
-#### Run Jan in CPU Mode
-Run Jan in CPU mode by using the following command:
-
-```bash
-# cpu mode with default file system
-docker compose --profile cpu-fs up -d
-
-# cpu mode with S3 file system
-docker compose --profile cpu-s3fs up -d
-```
-
-#### Run Jan in GPU Mode
-
-1. Check CUDA compatibility with your NVIDIA driver by running `nvidia-smi` and checking the CUDA version in the output:
-
-```bash
-nvidia-smi
-
-# Output
-+---------------------------------------------------------------------------------------+
-| NVIDIA-SMI 531.18                 Driver Version: 531.18       CUDA Version: 12.1     |
-|-----------------------------------------+----------------------+----------------------+
-| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
-| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
-|                                         |                      |               MIG M. |
-|=========================================+======================+======================|
-|   0  NVIDIA GeForce RTX 4070 Ti    WDDM | 00000000:01:00.0  On |                  N/A |
-|  0%   44C    P8               16W / 285W|   1481MiB / 12282MiB |      2%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-|   1  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:02:00.0 Off |                  N/A |
-|  0%   49C    P8               14W / 120W|      0MiB /  6144MiB |      0%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-|   2  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:05:00.0 Off |                  N/A |
-| 29%   38C    P8               11W / 120W|      0MiB /  6144MiB |      0%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-
-+---------------------------------------------------------------------------------------+
-| Processes:                                                                            |
-|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
-|        ID   ID                                                             Usage      |
-|=======================================================================================|
-```
-
-2. Visit the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda/tags) and find the smallest minor version of the image tag that matches your CUDA version (e.g., 12.1 -> 12.1.0).
-
-3. Update line 5 of `Dockerfile.gpu` with the image tag from step 2 (e.g., change `FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04 AS base` to `FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04 AS base`).
-
-4. Run the following command to start Jan in GPU mode:
-
-```bash
-# GPU mode with default file system
-docker compose --profile gpu-fs up -d
-
-# GPU mode with S3 file system
-docker compose --profile gpu-s3fs up -d
-```
-### Step 5: Access the Jan Server
-Once the Jan server is running on your Azure VM, you can access it using the public IP address or domain name of your VM.
-1. Open a web browser and navigate to the Jan Server URL, typically `http://<VM_public_IP>:3000` or `http://<domain_name>:3000`.
-</Tabs.Tab>
-  <Tabs.Tab>
-1. Before installing the Jan server, ensure that your system meets the following requirements:
-  - Windows 10 or higher is required to run Jan.
-  - WSL2 must be enabled on Windows to run Jan. Follow the instructions [here](https://learn.microsoft.com/en-us/windows/wsl/install) to install it.
-  - To enable GPU support, you will need:
-    - NVIDIA GPU with CUDA Toolkit 11.7 or higher
-    - NVIDIA driver 470.63.01 or higher
-    - [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-    - [NVIDIA Device Plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin)
-2. Install Docker Engine and Docker Compose on your Azure VM using the following command:
-<Callout type="info">
-To install Docker Engine on Ubuntu, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
-</Callout>
-```bash
-curl -fsSL https://get.docker.com -o get-docker.sh
-sudo sh ./get-docker.sh
-```
-3. Download Jan `docker-compose.yml` file onto your Azure VM using the following command:
-```bash
-curl https://raw.githubusercontent.com/janhq/jan/dev/docker-compose.yml -o docker-compose.yml
-```
-### Step 3: Helm Installation
-1. Get Helm chart from Jan repository by using the following command:
-    ```bash
-      git clone https://github.com/janhq/jan.git
-      cd jan/charts/server/
-      helm install jan-server .
-    ```
-2. Verify and modify the configuration options in the `values.yaml` file under `jan/charts/server`. The following shows the example resources created by the Jan Helm chart:
-    ![Jan server helm argo](./_assets/helm_resources.png)
-### Step 4: Access the Jan Server
-Once the Jan server is running in your cluster, you can access it through the in-cluster service endpoint.
-1. Open a web browser and navigate to the Jan Server URL at `http://jan-server-service-web:1337`.
-  </Tabs.Tab>
-</Tabs>
-<Callout type="info"> 
-**RAG** feature is not yet supported in Docker mode with `s3fs`.
-</Callout>
-</Steps>
\ No newline at end of file
diff --git a/src/pages/docs/server-installation/gcp.mdx b/src/pages/docs/server-installation/gcp.mdx
deleted file mode 100644
index 6a599966..00000000
--- a/src/pages/docs/server-installation/gcp.mdx
+++ /dev/null
@@ -1,182 +0,0 @@
----
-title: GCP
-description: A step-by-step guide on installing the Jan server with GCP.
-keywords:
-  [
-    Jan,
-    Customizable Intelligence, LLM,
-    local AI,
-    privacy focus,
-    free and open source,
-    private and offline,
-    conversational AI,
-    no-subscription fee,
-    large language models,
-    quickstart,
-    getting started,
-    using AI model,
-    installation,
-    "server",
-    "web"
-  ]
----
-
-import { Tabs, Callout, Steps } from 'nextra/components'
-
-# GCP Installation
-To install Jan Server, follow the steps below:
-<Steps>
-### Step 1: Prepare Environment
-1. Go to GCP console -> `Compute instance`.
-2. Choose an instance with at least `c2-standard-8` for CPU only or `g2-standard-4` for NVIDIA GPU support.
-3. Add a Persistent Disk volume with at least **100GB**.
-4. Configure network security group rules to allow inbound traffic on port `1337`.
-### Step 2: Get Jan Server
-<Tabs items={['Docker', 'Kubernetes - Helm']}>
-  <Tabs.Tab>
-1. Before installing the Jan server, ensure that your system meets the following requirements:
-  - Windows 10 or higher is required to run Jan.
-  - WSL2 must be enabled on Windows to run Jan. Follow the instructions [here](https://learn.microsoft.com/en-us/windows/wsl/install) to install it.
-  - To enable GPU support, you will need:
-    - NVIDIA GPU with CUDA Toolkit 11.7 or higher
-    - NVIDIA driver 470.63.01 or higher
-    - [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-
-2. Install Docker Engine and Docker Compose on your GCP Instance using the following command:
-<Callout type="info">
-To install Docker Engine on Ubuntu, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
-</Callout>
-```bash
-curl -fsSL https://get.docker.com -o get-docker.sh
-sudo sh ./get-docker.sh
-```
-3. Download Jan `docker-compose.yml` file onto your GCP Instance using the following command:
-```bash
-curl https://raw.githubusercontent.com/janhq/jan/dev/docker-compose.yml -o docker-compose.yml
-```
-### Step 3: Docker Configuration
-Once you have installed Docker Engine and Docker Compose on your GCP Instance, you need to set the Docker Compose profile and environment variables.
-
-The available Docker Compose profiles and environment variables are listed below:
-| Docker Compose Profile | Description                                  |
-| ---------------------- | -------------------------------------------- |
-| `cpu-fs`               | Run Jan in CPU mode with the default file system |
-| `cpu-s3fs`             | Run Jan in CPU mode with S3 file system      |
-| `gpu-fs`               | Run Jan in GPU mode with the default file system |
-| `gpu-s3fs`             | Run Jan in GPU mode with S3 file system      |
-
-| Environment Variable    | Description                                                                                             |
-| ----------------------- | ------------------------------------------------------------------------------------------------------- |
-| `S3_BUCKET_NAME`        | S3 bucket name - leave blank for default file system                                                    |
-| `GCP_ACCESS_KEY_ID`     | GCP access key ID - leave blank for default file system                                                 |
-| `GCP_SECRET_ACCESS_KEY` | GCP secret access key - leave blank for default file system                                             |
-| `GCP_ENDPOINT`          | GCP endpoint URL - leave blank for default file system                                                  |
-| `GCP_REGION`            | GCP region - leave blank for default file system                                                        |
-| `API_BASE_URL`          | Jan Server URL; set this to your public IP address or domain name (default: http://localhost:1337)      |
-
-### Step 4: Run Jan Server
-You can run the Jan server in two modes:
-- CPU
-- GPU
-#### Run Jan in CPU Mode
-Run Jan in CPU mode by using the following command:
-
-```bash
-# cpu mode with default file system
-docker compose --profile cpu-fs up -d
-
-# cpu mode with S3 file system
-docker compose --profile cpu-s3fs up -d
-```
-
-#### Run Jan in GPU Mode
-
-1. Check CUDA compatibility with your NVIDIA driver by running `nvidia-smi` and checking the CUDA version in the output:
-
-```bash
-nvidia-smi
-
-# Output
-+---------------------------------------------------------------------------------------+
-| NVIDIA-SMI 531.18                 Driver Version: 531.18       CUDA Version: 12.1     |
-|-----------------------------------------+----------------------+----------------------+
-| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
-| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
-|                                         |                      |               MIG M. |
-|=========================================+======================+======================|
-|   0  NVIDIA GeForce RTX 4070 Ti    WDDM | 00000000:01:00.0  On |                  N/A |
-|  0%   44C    P8               16W / 285W|   1481MiB / 12282MiB |      2%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-|   1  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:02:00.0 Off |                  N/A |
-|  0%   49C    P8               14W / 120W|      0MiB /  6144MiB |      0%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-|   2  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:05:00.0 Off |                  N/A |
-| 29%   38C    P8               11W / 120W|      0MiB /  6144MiB |      0%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-
-+---------------------------------------------------------------------------------------+
-| Processes:                                                                            |
-|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
-|        ID   ID                                                             Usage      |
-|=======================================================================================|
-```
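-If you only need the CUDA version from that output, it can be extracted from the header line with standard tools (a sketch, assuming `nvidia-smi` prints the banner shown above):
-
-```bash
-# Print just the CUDA version reported by the driver, e.g. "12.1"
-nvidia-smi | sed -n 's/.*CUDA Version: \([0-9.]*\).*/\1/p'
-```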
-
-2. Visit the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda/tags) and find the smallest minor version of the image tag that matches your CUDA version (e.g., 12.1 -> 12.1.0).
-
-3. Update line 5 of `Dockerfile.gpu` with the image tag from step 2 (e.g., change `FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04 AS base` to `FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04 AS base`).
-
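-If you prefer not to edit the file by hand, step 3 can be scripted. A sketch using GNU `sed` (the `12.1.0` tag is only an example; substitute the tag you found in step 2):
-
-```bash
-# Replace the CUDA base-image tag on line 5 of Dockerfile.gpu in place
-sed -i '5s|nvidia/cuda:[0-9.]*-runtime|nvidia/cuda:12.1.0-runtime|' Dockerfile.gpu
-```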
-4. Run the following command to start Jan in GPU mode:
-
-```bash
-# GPU mode with default file system
-docker compose --profile gpu-fs up -d
-
-# GPU mode with S3 file system
-docker compose --profile gpu-s3fs up -d
-```
-### Step 5: Access the Jan Server
-Once the Jan server is running on your GCP instance, you can access it using the instance's public IP address or domain name.
-1. Open a web browser and navigate to the Jan Server URL, typically `http://<INSTANCE_public_IP>:3000` or `http://<domain_name>:3000`.
-</Tabs.Tab>
-  <Tabs.Tab>
-1. Before installing the Jan server, ensure that you have the following requirements:
-  - Windows 10 or higher is required to run Jan.
-  - WSL2 must be enabled in Windows. Follow the instructions [here](https://learn.microsoft.com/en-us/windows/wsl/install) to install it.
-  - To enable GPU support, you will need:
-    - NVIDIA GPU with CUDA Toolkit 11.7 or higher
-    - NVIDIA driver 470.63.01 or higher
-    - [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-    - [NVIDIA Device Plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin)
-2. Install Docker Engine and Docker Compose on your GCP Instance using the following command:
-<Callout type="info">
-To install Docker Engine on Ubuntu, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
-</Callout>
-```bash
-curl -fsSL https://get.docker.com -o get-docker.sh
-sudo sh ./get-docker.sh
-```
-3. Download the Jan `docker-compose.yml` file onto your GCP instance using the following command:
-```bash
-curl https://raw.githubusercontent.com/janhq/jan/dev/docker-compose.yml -o docker-compose.yml
-```
-### Step 3: Helm Installation
-1. Get the Helm chart from the Jan repository and install it using the following commands:
-    ```bash
-      git clone https://github.com/janhq/jan.git
-      cd jan/charts/server/
-      helm install jan-server .
-    ```
-2. Verify and modify the configuration options in the `values.yaml` file under `jan/charts/server`. The following shows example resources created by the Jan Helm chart:
-    ![Jan server helm argo](./_assets/helm_resources.png)
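-After adjusting `values.yaml`, the changes can be applied to the running release with standard Helm commands (a sketch; `jan-server` is the release name used during installation):
-
-```bash
-# Apply the edited values to the existing release
-helm upgrade jan-server . -f values.yaml
-
-# Inspect the release status and its workloads
-helm status jan-server
-kubectl get pods
-```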
-### Step 4: Access the Jan Server
-Once the Jan server is running on your cluster, you can access it using your instance's public IP address or domain name.
-1. Open a web browser and navigate to the Jan Server URL at `http://jan-server-service-web:1337`.
-  </Tabs.Tab>
-</Tabs>
-<Callout type="info"> 
-**RAG** feature is not yet supported in Docker mode with `s3fs`.
-</Callout>
-</Steps>
\ No newline at end of file
diff --git a/src/pages/docs/server-installation/onprem.mdx b/src/pages/docs/server-installation/onprem.mdx
deleted file mode 100644
index e63a0c26..00000000
--- a/src/pages/docs/server-installation/onprem.mdx
+++ /dev/null
@@ -1,292 +0,0 @@
----
-title: On-Premise
-description: A step-by-step guide on installing the Jan server.
-keywords:
-  [
-    Jan,
-    Customizable Intelligence, LLM,
-    local AI,
-    privacy focus,
-    free and open source,
-    private and offline,
-    conversational AI,
-    no-subscription fee,
-    large language models,
-    quickstart,
-    getting started,
-    using AI model,
-    installation,
-    "server",
-    "web"
-  ]
----
-
-import { Tabs, Callout, Steps } from 'nextra/components'
-
-# On-Premise Installation
-To install Jan Server, follow the steps below:
-<Steps>
-### Step 1: Prepare Environment
-- Choose a machine with at least 16 GB of RAM, 8 CPU cores, and 100 GB of storage.
-- For better performance, you can use an NVIDIA GPU.
-<Callout type="info">
-AMD and Intel Arc GPUs are not supported yet.
-</Callout>
-### Step 2: Get Jan Server
-<Tabs items={['Linux Docker', 'Windows WSL2 Docker', 'Kubernetes - Helm']}>
-  <Tabs.Tab>
-1. Before installing the Jan server, ensure that you have the following requirements:
-      - NVIDIA GPU with CUDA Toolkit 11.7 or higher
-      - NVIDIA driver 470.63.01 or higher
-      - [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-
-2. Install Docker Engine and Docker Compose on Linux using the following command:
-<Callout type="info">
-To install Docker Engine on Ubuntu, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
-</Callout>
-      ```bash
-      curl -fsSL https://get.docker.com -o get-docker.sh
-      sudo sh ./get-docker.sh
-      ```
-3. Download the Jan `docker-compose.yml` file using the following command:
-      ```bash
-      curl https://raw.githubusercontent.com/janhq/jan/dev/docker-compose.yml -o docker-compose.yml
-      ```
-### Step 3: Docker Configuration
-Once you have installed Docker Engine and Docker Compose, you must set up the Docker Compose profile and environment variables.
-
-The available Docker Compose profiles and environment variables are listed below:
-| Docker Compose Profile | Description                                  |
-| ---------------------- | -------------------------------------------- |
-| `cpu-fs`               | Run Jan in CPU mode with the default file system |
-| `cpu-s3fs`             | Run Jan in CPU mode with S3 file system      |
-| `gpu-fs`               | Run Jan in GPU mode with the default file system |
-| `gpu-s3fs`             | Run Jan in GPU mode with S3 file system      |
-
-| Environment Variable    | Description                                                                                             |
-| ----------------------- | ------------------------------------------------------------------------------------------------------- |
-| `S3_BUCKET_NAME`        | S3 bucket name - leave blank for default file system                                                    |
-| `AWS_ACCESS_KEY_ID`     | AWS access key ID - leave blank for default file system                                                 |
-| `AWS_SECRET_ACCESS_KEY` | AWS secret access key - leave blank for default file system                                             |
-| `AWS_ENDPOINT`          | AWS endpoint URL - leave blank for default file system                                                  |
-| `AWS_REGION`            | AWS region - leave blank for default file system                                                        |
-| `API_BASE_URL`          | Jan Server URL. Set this to your public IP address or domain name (default: `http://localhost:1337`).   |
-
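-The variables above are typically supplied through an `.env` file placed next to `docker-compose.yml`. A hypothetical sketch for the S3 profiles (the bucket, endpoint, and IP address are placeholders; the access keys shown are AWS's documentation examples):
-
-```bash
-# .env - leave the storage variables blank to use the default file system
-S3_BUCKET_NAME=my-jan-bucket
-AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
-AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
-AWS_ENDPOINT=https://s3.us-east-1.amazonaws.com
-AWS_REGION=us-east-1
-API_BASE_URL=http://203.0.113.10:1337
-```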
-### Step 4: Run Jan Server
-You can run the Jan server in two modes:
-- CPU
-- GPU
-#### Run Jan in CPU Mode
-Run Jan in CPU mode using one of the following commands:
-
-```bash
-# cpu mode with default file system
-docker compose --profile cpu-fs up -d
-
-# cpu mode with S3 file system
-docker compose --profile cpu-s3fs up -d
-```
-
-#### Run Jan in GPU Mode
-
-1. Check CUDA compatibility with your NVIDIA driver by running `nvidia-smi` and checking the CUDA version in the output:
-
-```bash
-nvidia-smi
-
-# Output
-+---------------------------------------------------------------------------------------+
-| NVIDIA-SMI 531.18                 Driver Version: 531.18       CUDA Version: 12.1     |
-|-----------------------------------------+----------------------+----------------------+
-| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
-| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
-|                                         |                      |               MIG M. |
-|=========================================+======================+======================|
-|   0  NVIDIA GeForce RTX 4070 Ti    WDDM | 00000000:01:00.0  On |                  N/A |
-|  0%   44C    P8               16W / 285W|   1481MiB / 12282MiB |      2%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-|   1  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:02:00.0 Off |                  N/A |
-|  0%   49C    P8               14W / 120W|      0MiB /  6144MiB |      0%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-|   2  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:05:00.0 Off |                  N/A |
-| 29%   38C    P8               11W / 120W|      0MiB /  6144MiB |      0%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-
-+---------------------------------------------------------------------------------------+
-| Processes:                                                                            |
-|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
-|        ID   ID                                                             Usage      |
-|=======================================================================================|
-```
-
-2. Visit the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda/tags) and find the smallest minor version of the image tag that matches your CUDA version (e.g., 12.1 -> 12.1.0).
-
-3. Update line 5 of `Dockerfile.gpu` with the image tag from step 2 (e.g., change `FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04 AS base` to `FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04 AS base`).
-
-4. Run the following command to start Jan in GPU mode:
-
-```bash
-# GPU mode with default file system
-docker compose --profile gpu-fs up -d
-
-# GPU mode with S3 file system
-docker compose --profile gpu-s3fs up -d
-```
-### Step 5: Access the Jan Server
-Once the Jan server is running, you can access it at `http://localhost:3000`.
-<Callout type="info"> 
-**RAG** feature is not yet supported in Docker mode with `s3fs`.
-</Callout>
-
-  </Tabs.Tab>
-
-  <Tabs.Tab>
-1. Before installing the Jan server, ensure that you have the following requirements:
-      - Windows 10 or higher is required to run Jan.
-      - WSL2 must be enabled in Windows. Follow the instructions [here](https://learn.microsoft.com/en-us/windows/wsl/install) to install it.
-      - To enable GPU support, you will need:
-        - NVIDIA GPU with CUDA Toolkit 11.7 or higher
-        - NVIDIA driver 470.63.01 or higher
-        - [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-
-2. Install Docker Engine and Docker Compose on WSL2 using the following command:
-<Callout type="info">
-To install Docker Engine on Ubuntu, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
-</Callout>    
-    ```bash
-    curl -fsSL https://get.docker.com -o get-docker.sh
-    sudo sh ./get-docker.sh
-    ```
-3. Download the Jan `docker-compose.yml` file using the following command:
-    ```bash
-    curl https://raw.githubusercontent.com/janhq/jan/dev/docker-compose.yml -o docker-compose.yml
-    ```
-### Step 3: Docker Configuration
-Once you have installed Docker Engine and Docker Compose, you must set up the Docker Compose profile and environment variables.
-
-The available Docker Compose profiles and environment variables are listed below:
-| Docker Compose Profile | Description                                  |
-| ---------------------- | -------------------------------------------- |
-| `cpu-fs`               | Run Jan in CPU mode with the default file system |
-| `cpu-s3fs`             | Run Jan in CPU mode with S3 file system      |
-| `gpu-fs`               | Run Jan in GPU mode with the default file system |
-| `gpu-s3fs`             | Run Jan in GPU mode with S3 file system      |
-
-| Environment Variable    | Description                                                                                             |
-| ----------------------- | ------------------------------------------------------------------------------------------------------- |
-| `S3_BUCKET_NAME`        | S3 bucket name - leave blank for default file system                                                    |
-| `AWS_ACCESS_KEY_ID`     | AWS access key ID - leave blank for default file system                                                 |
-| `AWS_SECRET_ACCESS_KEY` | AWS secret access key - leave blank for default file system                                             |
-| `AWS_ENDPOINT`          | AWS endpoint URL - leave blank for default file system                                                  |
-| `AWS_REGION`            | AWS region - leave blank for default file system                                                        |
-| `API_BASE_URL`          | Jan Server URL. Set this to your public IP address or domain name (default: `http://localhost:1337`).   |
-
-### Step 4: Run Jan Server
-You can run the Jan server in two modes:
-- CPU
-- GPU
-#### Run Jan in CPU Mode
-Run Jan in CPU mode using one of the following commands:
-
-```bash
-# cpu mode with default file system
-docker compose --profile cpu-fs up -d
-
-# cpu mode with S3 file system
-docker compose --profile cpu-s3fs up -d
-```
-
-#### Run Jan in GPU Mode
-
-1. Check CUDA compatibility with your NVIDIA driver by running `nvidia-smi` and checking the CUDA version in the output:
-
-```bash
-nvidia-smi
-
-# Output
-+---------------------------------------------------------------------------------------+
-| NVIDIA-SMI 531.18                 Driver Version: 531.18       CUDA Version: 12.1     |
-|-----------------------------------------+----------------------+----------------------+
-| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
-| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
-|                                         |                      |               MIG M. |
-|=========================================+======================+======================|
-|   0  NVIDIA GeForce RTX 4070 Ti    WDDM | 00000000:01:00.0  On |                  N/A |
-|  0%   44C    P8               16W / 285W|   1481MiB / 12282MiB |      2%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-|   1  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:02:00.0 Off |                  N/A |
-|  0%   49C    P8               14W / 120W|      0MiB /  6144MiB |      0%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-|   2  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:05:00.0 Off |                  N/A |
-| 29%   38C    P8               11W / 120W|      0MiB /  6144MiB |      0%      Default |
-|                                         |                      |                  N/A |
-+-----------------------------------------+----------------------+----------------------+
-
-+---------------------------------------------------------------------------------------+
-| Processes:                                                                            |
-|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
-|        ID   ID                                                             Usage      |
-|=======================================================================================|
-```
-
-2. Visit the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda/tags) and find the smallest minor version of the image tag that matches your CUDA version (e.g., 12.1 -> 12.1.0).
-
-3. Update line 5 of `Dockerfile.gpu` with the image tag from step 2 (e.g., change `FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04 AS base` to `FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04 AS base`).
-
-4. Run the following command to start Jan in GPU mode:
-
-```bash
-# GPU mode with default file system
-docker compose --profile gpu-fs up -d
-
-# GPU mode with S3 file system
-docker compose --profile gpu-s3fs up -d
-```
-### Step 5: Access the Jan Server
-Once the Jan server is running, you can access it at `http://localhost:3000`.
-<Callout type="info"> 
-**RAG** feature is not yet supported in Docker mode with `s3fs`.
-</Callout>
-
-  </Tabs.Tab>
-  <Tabs.Tab>
-1. Before installing the Jan server, ensure that you have the following requirements:
-    - Windows 10 or higher is required to run Jan.
-    - WSL2 must be enabled in Windows. Follow the instructions [here](https://learn.microsoft.com/en-us/windows/wsl/install) to install it.
-    - To enable GPU support, you will need:
-      - NVIDIA GPU with CUDA Toolkit 11.7 or higher
-      - NVIDIA driver 470.63.01 or higher
-      - [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-      - [NVIDIA Device Plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin)
-2. Install Docker Engine and Docker Compose using the following command:
-<Callout type="info">
-To install Docker Engine on Ubuntu, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
-</Callout>
-```bash
-curl -fsSL https://get.docker.com -o get-docker.sh
-sudo sh ./get-docker.sh
-```
-3. Download the Jan `docker-compose.yml` file using the following command:
-```bash
-curl https://raw.githubusercontent.com/janhq/jan/dev/docker-compose.yml -o docker-compose.yml
-```
-### Step 3: Helm Installation
-1. Get the Helm chart from the Jan repository and install it using the following commands:
-    ```bash
-      git clone https://github.com/janhq/jan.git
-      cd jan/charts/server/
-      helm install jan-server .
-    ```
-2. Verify and modify the configuration options in the `values.yaml` file under `jan/charts/server`. The following shows example resources created by the Jan Helm chart:
-    ![Jan server helm argo](./_assets/helm_resources.png)
-### Step 4: Access the Jan Server
-Once the Jan server is running on your cluster, you can access it using your public IP address or domain name.
-1. Open a web browser and navigate to the Jan Server URL at `http://jan-server-service-web:1337`.
-  </Tabs.Tab>
-</Tabs>
-</Steps>
\ No newline at end of file