diff --git a/docs/modules/usage/getting-started.md b/docs/modules/usage/getting-started.md
index a8badef8125f..e20b0f549586 100644
--- a/docs/modules/usage/getting-started.md
+++ b/docs/modules/usage/getting-started.md
@@ -40,9 +40,10 @@ After running the command above, you'll find OpenHands running at [http://localh
The agent will have access to the `./workspace` folder to do its work. You can copy existing code here, or change `WORKSPACE_BASE` in the
command to point to an existing folder.
-Upon launching OpenHands, you'll see a settings modal. You must select an LLM backend using `Model`, and enter a corresponding `API Key`.
+Upon launching OpenHands, you'll see a settings modal. You must select an `LLM Provider` and `LLM Model`, and enter a corresponding `API Key`.
These can be changed at any time by selecting the `Settings` button (gear icon) in the UI.
-If the required `Model` does not exist in the list, you can toggle `Use custom model` and manually enter it in the text box.
+If the required `LLM Model` does not exist in the list, you can toggle `Advanced Options` and manually enter it in the `Custom Model` text box.
+The `Advanced Options` also allow you to specify a `Base URL` if required.
diff --git a/docs/modules/usage/how-to/cli-mode.md b/docs/modules/usage/how-to/cli-mode.md
index c619560af523..8569e235fbcd 100644
--- a/docs/modules/usage/how-to/cli-mode.md
+++ b/docs/modules/usage/how-to/cli-mode.md
@@ -41,7 +41,7 @@ LLM_MODEL="anthropic/claude-3-5-sonnet-20240620"
3. Set `LLM_API_KEY` to your API key:
```bash
-LLM_API_KEY="abcde"
+LLM_API_KEY="sk_test_12345"
```
4. Run the following Docker command:
diff --git a/docs/modules/usage/how-to/headless-mode.md b/docs/modules/usage/how-to/headless-mode.md
index ea620c65ee6a..87d190d2fc6a 100644
--- a/docs/modules/usage/how-to/headless-mode.md
+++ b/docs/modules/usage/how-to/headless-mode.md
@@ -35,7 +35,7 @@ LLM_MODEL="anthropic/claude-3-5-sonnet-20240620"
3. Set `LLM_API_KEY` to your API key:
```bash
-LLM_API_KEY="abcde"
+LLM_API_KEY="sk_test_12345"
```
4. Run the following Docker command:
diff --git a/docs/modules/usage/llms/azure-llms.md b/docs/modules/usage/llms/azure-llms.md
index a3f269f804c3..b770bfc50338 100644
--- a/docs/modules/usage/llms/azure-llms.md
+++ b/docs/modules/usage/llms/azure-llms.md
@@ -1,55 +1,43 @@
# Azure OpenAI LLM
-## Completion
-
OpenHands uses LiteLLM for completion calls. You can find their documentation on Azure [here](https://docs.litellm.ai/docs/providers/azure).
-### Azure openai configs
+## Azure OpenAI Configuration
-When running the OpenHands Docker image, you'll need to set the following environment variables using `-e`:
+When running OpenHands, you'll need to set the following environment variable using `-e` in the
+[docker run command](/modules/usage/getting-started#installation):
```
-LLM_BASE_URL="" # e.g. "https://openai-gpt-4-test-v-1.openai.azure.com/"
-LLM_API_KEY=""
-LLM_MODEL="azure/"
-LLM_API_VERSION="" # e.g. "2024-02-15-preview"
+LLM_API_VERSION="" # e.g. "2023-05-15"
```
Example:
```bash
-docker run -it \
---pull=always \
--e SANDBOX_USER_ID=$(id -u) \
--e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
--e LLM_BASE_URL="x.openai.azure.com" \
--e LLM_API_VERSION="2024-02-15-preview" \
--v $WORKSPACE_BASE:/opt/workspace_base \
--v /var/run/docker.sock:/var/run/docker.sock \
--p 3000:3000 \
---add-host host.docker.internal:host-gateway \
---name openhands-app-$(date +%Y%m%d%H%M%S) \
-ghcr.io/all-hands-ai/openhands:main
+docker run -it --pull=always \
+  -e LLM_API_VERSION="2023-05-15" \
+ ...
```
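+
+For reference, a complete invocation might look like the following. This is a sketch based on the standard flags from
+the getting-started guide; adjust the paths and image tag to your setup:
+
+```bash
+docker run -it --pull=always \
+    -e SANDBOX_USER_ID=$(id -u) \
+    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
+    -e LLM_API_VERSION="2023-05-15" \
+    -v $WORKSPACE_BASE:/opt/workspace_base \
+    -v /var/run/docker.sock:/var/run/docker.sock \
+    -p 3000:3000 \
+    --add-host host.docker.internal:host-gateway \
+    --name openhands-app-$(date +%Y%m%d%H%M%S) \
+    ghcr.io/all-hands-ai/openhands:main
+```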
-You can also set the model and API key in the OpenHands UI through the Settings.
+Then set the following in the OpenHands UI through the Settings:
:::note
-You can find your ChatGPT deployment name on the deployments page in Azure. It could be the same with the chat model
-name (e.g. 'GPT4-1106-preview'), by default or initially set, but it doesn't have to be the same. Run OpenHands,
-and when you load it in the browser, go to Settings and set model as above: "azure/<your-actual-gpt-deployment-name>".
-If it's not in the list, you can open the Settings modal, switch to "Custom Model", and enter your model name.
+You will need your ChatGPT deployment name, which can be found on the deployments page in Azure. It is referenced
+as `<deployment-name>` below.
:::
+* Enable `Advanced Options`
+* Set `Custom Model` to `azure/<deployment-name>`
+* Set `Base URL` to your Azure API base URL (e.g. `https://example-endpoint.openai.azure.com`)
+* Set `API Key` to your Azure API key
+
## Embeddings
OpenHands uses llama-index for embeddings. You can find their documentation on Azure [here](https://docs.llamaindex.ai/en/stable/api_reference/embeddings/azure_openai/).
-### Azure openai configs
-
-The model used for Azure OpenAI embeddings is "text-embedding-ada-002".
-You need the correct deployment name for this model in your Azure account.
+### Azure OpenAI Configuration
-When running OpenHands in Docker, set the following environment variables using `-e`:
+When running OpenHands, set the following environment variables using `-e` in the
+[docker run command](/modules/usage/getting-started#installation):
```
LLM_EMBEDDING_MODEL="azureopenai"
diff --git a/docs/modules/usage/llms/google-llms.md b/docs/modules/usage/llms/google-llms.md
index 5ead0ebc45ef..74dd8d931bea 100644
--- a/docs/modules/usage/llms/google-llms.md
+++ b/docs/modules/usage/llms/google-llms.md
@@ -1,28 +1,30 @@
# Google Gemini/Vertex LLM
-## Completion
-
OpenHands uses LiteLLM for completion calls. The following resources are relevant for using OpenHands with Google's LLMs:
- [Gemini - Google AI Studio](https://docs.litellm.ai/docs/providers/gemini)
- [VertexAI - Google Cloud Platform](https://docs.litellm.ai/docs/providers/vertex)
-### Gemini - Google AI Studio Configs
+## Gemini - Google AI Studio Configs
-To use Gemini through Google AI Studio when running the OpenHands Docker image, you'll need to set the following environment variables using `-e`:
+When running OpenHands, you'll need to set the following in the OpenHands UI through the Settings:
+* `LLM Provider` to `Gemini`
+* `LLM Model` to the model you will be using.
+  If the model is not in the list, toggle `Advanced Options` and enter it in `Custom Model` (e.g. `gemini/<model-name>`).
+* `API Key`
-```
-GEMINI_API_KEY=""
-LLM_MODEL="gemini/gemini-1.5-pro"
-```
+## VertexAI - Google Cloud Platform Configs
-### Vertex AI - Google Cloud Platform Configs
-
-To use Vertex AI through Google Cloud Platform when running the OpenHands Docker image, you'll need to set the following environment variables using `-e`:
+To use Vertex AI through Google Cloud Platform when running OpenHands, you'll need to set the following environment
+variables using `-e` in the [docker run command](/modules/usage/getting-started#installation):
```
GOOGLE_APPLICATION_CREDENTIALS=""
VERTEXAI_PROJECT=""
VERTEXAI_LOCATION=""
-LLM_MODEL="vertex_ai/"
```
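+
+For example, the variables above might be passed like this. This is a sketch: the project ID and location are
+placeholders, and it assumes your service account JSON lives at `~/gcp/credentials.json` on the host and is mounted
+into the container:
+
+```bash
+docker run -it --pull=always \
+    -e GOOGLE_APPLICATION_CREDENTIALS="/gcp/credentials.json" \
+    -e VERTEXAI_PROJECT="my-project-id" \
+    -e VERTEXAI_LOCATION="us-central1" \
+    -v ~/gcp/credentials.json:/gcp/credentials.json:ro \
+    ...
+```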
+
+Then set the following in the OpenHands UI through the Settings:
+* `LLM Provider` to `VertexAI`
+* `LLM Model` to the model you will be using.
+  If the model is not in the list, toggle `Advanced Options` and enter it in `Custom Model` (e.g. `vertex_ai/<model-name>`).
diff --git a/docs/modules/usage/llms/llms.md b/docs/modules/usage/llms/llms.md
index 9dcfccc2a188..daa4e1450adb 100644
--- a/docs/modules/usage/llms/llms.md
+++ b/docs/modules/usage/llms/llms.md
@@ -24,22 +24,25 @@ also encourage you to open a PR to share your setup process to help others using
For a full list of the providers and models available, please consult the
[litellm documentation](https://docs.litellm.ai/docs/providers).
-## Local and Open Source Models
-
+:::note
Most current local and open source models are not as powerful as hosted models. When using such models, you may see long
wait times between messages, poor responses, or errors about malformed JSON. OpenHands can only be as powerful as the
models driving it. However, if you do find ones that work, please add them to the verified list above.
+:::
## LLM Configuration
-The `LLM_MODEL` environment variable controls which model is used in programmatic interactions.
-But when using the OpenHands UI, you'll need to choose your model in the settings window.
+The following can be set in the OpenHands UI through the Settings:
+* `LLM Provider`
+* `LLM Model`
+* `API Key`
+* `Base URL` (through `Advanced Options`)
-The following environment variables might be necessary for some LLMs/providers:
+Some settings may be necessary for certain LLMs/providers and cannot be set through the UI. Instead, these
+can be set through environment variables passed to the [docker run command](/modules/usage/getting-started#installation)
+using `-e`:
-* `LLM_API_KEY`
* `LLM_API_VERSION`
-* `LLM_BASE_URL`
* `LLM_EMBEDDING_MODEL`
* `LLM_EMBEDDING_DEPLOYMENT_NAME`
* `LLM_DROP_PARAMS`
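+
+For example, to pass some of these variables when launching OpenHands (a sketch; the values shown are placeholders):
+
+```bash
+docker run -it --pull=always \
+    -e LLM_API_VERSION="2023-05-15" \
+    -e LLM_DROP_PARAMS="true" \
+    ...
+```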
diff --git a/docs/modules/usage/llms/openai-llms.md b/docs/modules/usage/llms/openai-llms.md
index 07c8e547a6c0..ea4d258868d4 100644
--- a/docs/modules/usage/llms/openai-llms.md
+++ b/docs/modules/usage/llms/openai-llms.md
@@ -1,23 +1,16 @@
# OpenAI
-OpenHands uses [LiteLLM](https://www.litellm.ai/) to make calls to OpenAI's chat models. You can find their full documentation on OpenAI chat calls [here](https://docs.litellm.ai/docs/providers/openai).
+OpenHands uses LiteLLM to make calls to OpenAI's chat models. You can find their full documentation on OpenAI chat calls [here](https://docs.litellm.ai/docs/providers/openai).
## Configuration
-When running the OpenHands Docker image, you'll need to choose a model and set your API key in the OpenHands UI through the Settings.
-
-To see a full list of OpenAI models that LiteLLM supports, please visit https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models.
-
-To find or create your OpenAI Project API Key, please visit https://platform.openai.com/api-keys.
+When running OpenHands, you'll need to set the following in the OpenHands UI through the Settings:
+* `LLM Provider` to `OpenAI`
+* `LLM Model` to the model you will be using.
+  To see a full list of OpenAI models that LiteLLM supports, [see **here**](https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models).
+  If the model is not in the list, toggle `Advanced Options` and enter it in `Custom Model` (e.g. `openai/<model-name>`).
+* `API Key`. To find or create your OpenAI Project API Key, [see **here**](https://platform.openai.com/api-keys).
## Using OpenAI-Compatible Endpoints
Just as for OpenAI Chat completions, we use LiteLLM for OpenAI-compatible endpoints. You can find their full documentation on this topic [here](https://docs.litellm.ai/docs/providers/openai_compatible).
-
-When running the OpenHands Docker image, you'll need to set the following environment variables using `-e`:
-
-```sh
-LLM_BASE_URL="" # e.g. "http://0.0.0.0:3000"
-```
-
-Then set your model and API key in the OpenHands UI through the Settings.
diff --git a/docs/modules/usage/troubleshooting/troubleshooting.md b/docs/modules/usage/troubleshooting/troubleshooting.md
index 207e18979ef7..e90ce5809a09 100644
--- a/docs/modules/usage/troubleshooting/troubleshooting.md
+++ b/docs/modules/usage/troubleshooting/troubleshooting.md
@@ -9,13 +9,15 @@ We'll try to make the install process easier, but for now you can look for your
If you find more information or a workaround for one of these issues, please open a *PR* to add details to this file.
:::tip
-If you're running on Windows and having trouble, check out our [Notes for Windows and WSL users](troubleshooting/windows).
+OpenHands only supports Windows via [WSL](https://learn.microsoft.com/en-us/windows/wsl/install).
+Please be sure to run all commands inside your WSL terminal.
+Check out [Notes for WSL on Windows Users](troubleshooting/windows) for some troubleshooting guides.
:::
## Common Issues
* [Unable to connect to Docker](#unable-to-connect-to-docker)
-* [Unable to connect to SSH box](#unable-to-connect-to-ssh-box)
+* [Unable to connect to LLM](#unable-to-connect-to-llm)
* [404 Resource not found](#404-resource-not-found)
* [`make build` getting stuck on package installations](#make-build-getting-stuck-on-package-installations)
* [Sessions are not restored](#sessions-are-not-restored)
@@ -45,31 +47,6 @@ OpenHands uses a Docker container to do its work safely, without potentially bre
* If you are on a Mac, check the [permissions requirements](https://docs.docker.com/desktop/mac/permission-requirements/) and in particular consider enabling the `Allow the default Docker socket to be used` under `Settings > Advanced` in Docker Desktop.
* In addition, upgrade your Docker to the latest version under `Check for Updates`
----
-### Unable to connect to SSH box
-
-[GitHub Issue](https://github.com/All-Hands-AI/OpenHands/issues/1156)
-
-**Symptoms**
-
-```python
-self.shell = DockerSSHBox(
-...
-pexpect.pxssh.ExceptionPxssh: Could not establish connection to host
-```
-
-**Details**
-
-By default, OpenHands connects to a running container using SSH. On some machines,
-especially Windows, this seems to fail.
-
-**Workarounds**
-
-* Restart your computer (sometimes it does work)
-* Be sure to have the latest versions of WSL and Docker
-* Check that your distribution in WSL is up to date as well
-* Try [this reinstallation guide](https://github.com/All-Hands-AI/OpenHands/issues/1156#issuecomment-2064549427)
-
---
### Unable to connect to LLM
diff --git a/docs/modules/usage/troubleshooting/windows.md b/docs/modules/usage/troubleshooting/windows.md
index ce2d816e3e66..c0196b75138b 100644
--- a/docs/modules/usage/troubleshooting/windows.md
+++ b/docs/modules/usage/troubleshooting/windows.md
@@ -1,4 +1,4 @@
-# Notes for Windows and WSL Users
+# Notes for WSL on Windows Users
OpenHands only supports Windows via [WSL](https://learn.microsoft.com/en-us/windows/wsl/install).
Please be sure to run all commands inside your WSL terminal.
diff --git a/docs/static/img/settings-screenshot.png b/docs/static/img/settings-screenshot.png
index 3ba6189b6605..2b9a3a3dd259 100644
Binary files a/docs/static/img/settings-screenshot.png and b/docs/static/img/settings-screenshot.png differ