---
title: Join Services with a Secure Token
description: This guide shows you how to join a Teleport instance to your cluster using a join token in order to proxy access to resources in your infrastructure.
---
In this guide, we will show you how to register a Teleport process running one or more services to your cluster by presenting a join token.
In this approach, you declare your intention to register a new Teleport process, and Teleport generates a secure token that the process uses to establish a trust relationship with the Teleport cluster.
(!docs/pages/includes/edition-prereqs-tabs.mdx!)
- A Linux server that you will use to host your Teleport process, e.g., a virtual machine or Docker container with an image based on a Linux distribution.
In this guide, we will show you how to register a Teleport SSH Service instance. This approach also applies to other Teleport services, like the Proxy Service, Kubernetes Service, Database Service, and other services for accessing resources in your infrastructure.
The join token method works whether a cluster includes a single Proxy Service instance or multiple Proxy Service instances behind a load balancer (LB) or a DNS entry with multiple values. If there are multiple Proxy Service instances, a Teleport process joining the cluster establishes a tunnel to every Proxy Service instance.
If you are using a load balancer, it must use a round-robin or a similar balancing algorithm. Do not use sticky load balancing algorithms (i.e., "session affinity") with Teleport Proxy Service instances.
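Whether you use a load balancer or DNS, each Proxy Service address must be reachable from the joining process. As a quick check of a DNS entry with multiple values (a sketch, assuming a hypothetical `proxy.example.com` name with two A records):

$ dig +short proxy.example.com
192.0.2.10
192.0.2.11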
If you are using a Docker container, note that this guide assumes that your Linux host has `curl` and `sudo` installed.
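For example, in a container based on a minimal Debian image, you could install both packages before continuing (a sketch; adjust for your distribution's package manager):

$ apt-get update && apt-get install -y curl sudo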
(!docs/pages/includes/tctl.mdx!)
Install Teleport on your Linux host.
(!docs/pages/includes/install-linux.mdx!)
In this section, we will join your Teleport process to your cluster by:
- Obtaining a join token
- Running your Teleport process with the join token
Teleport only allows access to resources in your infrastructure via Teleport processes that have joined the cluster.
On your local machine, use the `tctl` tool to generate a new token. In the following example, a new token is created with a TTL of five minutes:
# Generate a short-lived invite token for a new Teleport SSH Service instance:
$ tctl tokens add --ttl=5m --type=node
The invite token: (=presets.tokens.first=)
This token will expire in 5 minutes.
Run this on the new node to join the cluster:
> teleport start \
--roles=node \
--token=(=presets.tokens.first=) \
--ca-pin=(=presets.ca_pin=) \
--auth-server=192.0.2.0:3025
Please note:
- This invitation token will expire in 5 minutes
- 192.0.2.0:3025 must be reachable from the new node
In this command, we assigned the token the `node` type, indicating that it will belong to an SSH Service instance.
Copy the token so you can use it later in this guide. You can ignore the rest of the `tctl tokens add` output.
Here are all the values we support for the `--type` flag when creating a join token:
| Role | Teleport Service |
|---|---|
| `app` | Application Service |
| `auth` | Auth Service |
| `bot` | Machine ID |
| `db` | Database Service |
| `discovery` | Discovery Service |
| `kube` | Kubernetes Service |
| `node` | SSH Service |
| `proxy` | Proxy Service |
| `windowsdesktop` | Windows Desktop Service |
Administrators can generate tokens as they are needed. A Teleport process can use a token multiple times until its time to live (TTL) expires, with the exception of tokens with the `bot` type, which are used by Machine ID.
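A single token can also authorize a process to run more than one service. As a sketch, the following generates a token that a process could use to join as both an SSH Service and an Application Service instance (check `tctl tokens add --help` for the exact syntax in your Teleport version):

$ tctl tokens add --ttl=30m --type=node,app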
To list all of the tokens you have generated, run the following command:
$ tctl tokens ls
Token Type Labels Expiry Time (UTC)
-------------------------------- ---- ------ --------------------------
(=presets.tokens.first=) Node 30 Mar 23 18:15 UTC (2m8s)
Static tokens are defined ahead of time by an administrator and stored in the Auth Service's config file:
# Config section in `/etc/teleport.yaml` file for the Auth Service
auth_service:
  enabled: true
  tokens:
    # This static token allows new hosts to join the cluster as "proxy" or "node"
    - 'proxy,node:secret-token-value'
    # A token can also be stored in a file. In this example, the token for adding
    # new Auth Service instances is stored in /path/to/tokenfile
    - 'auth:/path/to/tokenfile'
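A static token value should be a long, random string. One way to generate a sufficiently random value (assuming `openssl` is available on the machine where you edit the config file):

$ openssl rand -hex 32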
Execute the following command on the host running your new Teleport process to add it to a cluster. Assign <Var name="join-token" /> to the token you generated earlier and <Var name="proxy-address" /> to the host and web port of your Teleport Proxy Service or Teleport Enterprise Cloud tenant (e.g., `teleport.example.com:443`):
$ sudo teleport configure \
--roles=node \
--token=<Var name="join-token" /> \
--proxy=<Var name="proxy-address" /> \
-o file
For SSH Service instances, you can also run `teleport node configure` instead of `teleport configure`. This way, you can exclude the `--roles=node` flag from the command.
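As a sketch, the equivalent `teleport node configure` command mirrors the flags shown above (run `teleport node configure --help` to confirm the flags available in your version):

$ sudo teleport node configure \
  --token=<Var name="join-token" /> \
  --proxy=<Var name="proxy-address" /> \
  -o file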
So far, this guide has assumed that you are joining your new Teleport process to your cluster by connecting it to the Proxy Service. (This is the only possibility in Teleport Enterprise Cloud.) Depending on the design of your infrastructure, you may need to connect your new Teleport process directly to the Auth Service.
Only connect Teleport processes directly to the Auth Service if no other join methods are suitable, as we recommend exposing the Auth Service to as few sources of ingress traffic as possible.
The Teleport process joining the cluster must also establish trust with the Auth Service in order to prevent an attacker from hijacking the address of your Auth Service host.
To do this, you supply your new Teleport process with a secure hash value generated by the Auth Service's certificate authority, called a CA pin. This way, an attacker cannot easily forge a private key to trick your Teleport process into communicating with a malicious service.
On your local machine, retrieve the CA pin of the Auth Service:
$ tctl status
Cluster teleport.example.com
Version 12.1.1
host CA never updated
user CA never updated
db CA never updated
openssh CA never updated
jwt CA never updated
saml_idp CA never updated
CA pin (=presets.ca_pin=)
Copy the CA pin and assign it to the value of <Var name="ca-pin" />.
The CA pin becomes invalid if a Teleport administrator performs a CA rotation by executing `tctl auth rotate`.
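If a rotation does occur, retrieve the new pin and update the `ca_pin` value on each joined host; the pin is printed by the same command you used above:

$ tctl status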
Run the following command to configure your Teleport process instead of the `teleport configure` command we showed you earlier. Assign <Var name="auth-service" /> to the host and gRPC port of your Auth Service host, e.g., `teleport.example.com:3025`:
$ sudo teleport configure \
--roles=node \
--token=<Var name="join-token" /> \
--auth-server=<Var name="auth-service" /> \
-o file
Next, edit the Teleport configuration file, `/etc/teleport.yaml`, assigning the CA pin (the `teleport.ca_pin` field) to the one you copied earlier:
$ sudo sed -i 's| ca_pin: ""| ca_pin: "<Var name="ca-pin" />"|' /etc/teleport.yaml
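To confirm that the substitution worked, you can inspect the field before starting Teleport (assuming the default configuration path):

$ sudo grep ca_pin /etc/teleport.yaml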
(!docs/pages/includes/start-teleport.mdx!)
If you followed this guide with a local Docker container, execute the following command within your container to run your new Teleport process in the foreground:
$ teleport start
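You can also point the process at the configuration file explicitly with the `--config` flag (a sketch, using the path generated earlier):

$ teleport start --config=/etc/teleport.yaml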
As new services come online, they start sending heartbeat requests every few seconds to the Auth Service. This allows users to explore cluster membership and size.
Run the following command on your local machine to see all of the Teleport SSH Service instances in your cluster:
$ tctl nodes ls
Host UUID Public Address Labels Version
------------- --------------------- -------------- ---------------------- -------
1f58429134c4 6805dda3-779e-493b... hostname=1f58429134c4 (=teleport.version=)
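If you want to consume this inventory in scripts, the same listing is available in machine-readable form (a sketch; confirm the supported output formats with `tctl nodes ls --help`):

$ tctl nodes ls --format=json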
You can revoke a join token to prevent a Teleport process from using it.
Run the following command on your local machine to create a token for a new Proxy Service instance:
$ tctl nodes add --ttl=5m --roles=proxy
# The invite token: (=presets.tokens.first=).
# This token will expire in 5 minutes.
#
# Run this on the new node to join the cluster:
#
# > teleport start \
# --roles=proxy \
# --token=(=presets.tokens.first=) \
# --ca-pin=(=presets.ca_pin=) \
# --auth-server=123.123.123.123:443
#
# Please note:
#
# - This invitation token will expire in 5 minutes
# - 123.123.123.123 must be reachable from the new node
Next, run the following command to see a list of outstanding tokens:
$ tctl tokens ls
Token Type Labels Expiry Time (UTC)
-------------------------------- ----- ------ ---------------------------
(=presets.tokens.first=) Node 30 Mar 23 18:20 UTC (36s)
(=presets.tokens.second=) Proxy 30 Mar 23 18:24 UTC (4m39s)
The output of `tctl tokens ls` includes tokens used for adding users alongside tokens used for adding Teleport processes to your cluster.
You generated the token with the `Node` role earlier in this guide to invite a new Teleport process to this cluster. The second token is the one you generated for a Proxy Service instance.
Tokens created via `tctl` can be deleted (revoked) via the `tctl tokens rm` command. Copy the second token from the output above and run the following command to delete it, assigning the token to <Var name="token-to-delete" />:
$ tctl tokens rm <Var name="token-to-delete"/>
# Token (=presets.tokens.second=) has been deleted
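To confirm the revocation, list your tokens again; the deleted entry should no longer appear:

$ tctl tokens ls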
- If you have workloads split across different networks or clouds, we recommend setting up trusted clusters. Read how to get started in Configure Trusted Clusters.