This document describes how to install and configure the SATOSA proxy.
A pre-built Docker image is available on Docker Hub and is the recommended way of running the proxy.
SATOSA requires Python 3.4 (or above), and the following packages on Ubuntu:
apt-get install libffi-dev libssl-dev xmlsec1
- Download the SATOSA proxy project as a compressed archive and unpack it to <satosa_path>.
- Install the application: pip install <satosa_path>

Alternatively, the application can be installed directly from PyPI (pip install satosa), or the Docker image can be used.
SATOSA is configured using YAML.
All default configuration files, as well as an example WSGI application for the proxy, can be found in the example directory.
The default YAML syntax is extended to include the capability to resolve environment variables. The following tags are used to achieve this:
- The !ENV tag

The !ENV tag is followed by a string that denotes the environment variable name. It will be replaced by the value of the environment variable with the same name.

In the example below, LDAP_BIND_PASSWORD will, at runtime, be replaced with the value of the process environment variable of the same name. If the process environment has been set with LDAP_BIND_PASSWORD=secret_password, then the configuration value for bind_password will be secret_password.
bind_password: !ENV LDAP_BIND_PASSWORD
- The !ENVFILE tag

The !ENVFILE tag is followed by a string that denotes the environment variable name. The environment variable holds the path to a file, and the tag will be replaced by the contents of that file.

In the example below, LDAP_BIND_PASSWORD_FILE will, at runtime, be resolved through the process environment variable of the same name. If the process environment has been set with LDAP_BIND_PASSWORD_FILE=/etc/satosa/secrets/ldap.txt, and that file contains secret_password, then the configuration value for bind_password will be secret_password.
bind_password: !ENVFILE LDAP_BIND_PASSWORD_FILE
SATOSA proxy configuration: proxy_conf.yaml.example
Parameter name | Data type | Example value | Description |
---|---|---|---|
BASE | string | https://proxy.example.com | base URL of the proxy |
COOKIE_STATE_NAME | string | satosa_state | name of the cookie SATOSA uses for preserving state between requests |
CONTEXT_STATE_DELETE | bool | True | controls whether SATOSA deletes the state cookie after receiving the authentication response from the upstream IdP |
STATE_ENCRYPTION_KEY | string | 52fddd3528a44157 | key used for encrypting the state cookie; overridden by the environment variable SATOSA_STATE_ENCRYPTION_KEY if it is set |
INTERNAL_ATTRIBUTES | string | example/internal_attributes.yaml | path to the attribute mapping |
CUSTOM_PLUGIN_MODULE_PATHS | string[] | [example/plugins/backends, example/plugins/frontends] | list of directory paths containing any front-/backend plugin modules |
BACKEND_MODULES | string[] | [openid_connect_backend.yaml, saml2_backend.yaml] | list of plugin configuration file paths, describing enabled backends |
FRONTEND_MODULES | string[] | [saml2_frontend.yaml, openid_connect_frontend.yaml] | list of plugin configuration file paths, describing enabled frontends |
MICRO_SERVICES | string[] | [statistics_service.yaml] | list of plugin configuration file paths, describing enabled micro services |
LOGGING | dict | see Python logging.conf | optional configuration of application logging |
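For orientation, here is a minimal sketch of a proxy_conf.yaml built from the table above. The values are illustrative (taken from the example values in the table), and the LOGGING block assumes the dict follows Python's logging dictConfig schema:

BASE: https://proxy.example.com
COOKIE_STATE_NAME: satosa_state
CONTEXT_STATE_DELETE: True
STATE_ENCRYPTION_KEY: 52fddd3528a44157
INTERNAL_ATTRIBUTES: example/internal_attributes.yaml
CUSTOM_PLUGIN_MODULE_PATHS: [example/plugins/backends, example/plugins/frontends]
BACKEND_MODULES: [saml2_backend.yaml]
FRONTEND_MODULES: [saml2_frontend.yaml]
MICRO_SERVICES: [statistics_service.yaml]
LOGGING:
  # assumed to follow Python's logging dictConfig schema
  version: 1
  root:
    level: INFO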
Attribute mapping configuration: internal_attributes.yaml
The values directly under the attributes key are the internal attribute names. Every internal attribute has a map of profiles, which in turn has a list of external attribute names that should be mapped to the internal attribute. If multiple external attributes are specified under a profile, the proxy will store all attribute values from the external attributes as a list in the internal attribute.
Sometimes the external attributes are nested/complex structures. One example is the address claim in OpenID Connect, which consists of multiple sub-fields, e.g.:
"address": {
  "formatted": "100 Universal City Plaza, Hollywood CA 91608, USA",
  "street_address": "100 Universal City Plaza",
  "locality": "Hollywood",
  "region": "CA",
  "postal_code": "91608",
  "country": "USA"
}
In this case the proxy accepts a dot-separated string denoting which external attribute to use, e.g. address.formatted will access the attribute value "100 Universal City Plaza, Hollywood CA 91608, USA".
Example
attributes:
  mail:
    openid: [email]
    saml: [mail, emailAdress, email]
  address:
    openid: [address.formatted]
    saml: [postaladdress]
This example defines two attributes internal to the proxy, mail and address. These attributes will be accessible to any plugin (i.e. front- and backends) in the proxy.
Each internal attribute has a mapping for two different profiles, openid and saml. The mapping between received attributes (in the proxy backend) <-> internal <-> returned attributes (from the proxy frontend) is defined as:
- Any plugin using the openid profile will use the attribute value from email, delivered from the target provider, as the value for mail.
- Any plugin using the saml profile will use the attribute value from mail, emailAdress and email, depending on which attributes are delivered by the target provider, as the value for mail.
- Any plugin using the openid profile will use the attribute value under the key formatted in the address attribute delivered by the target provider.
- Any plugin using the saml profile will use the attribute value from postaladdress, delivered from the target provider, as the value for address.
The subject identifier generated by the backend module can be overridden by specifying a list of internal attribute names under the user_id_from_attrs key. The attribute values of the attributes specified in this list will be concatenated and used as the subject identifier.
To store the subject identifier in a specific internal attribute, the internal attribute name can be specified in user_id_to_attr.
When the ALService is used for account linking, the user_id_to_attr configuration parameter should be set, since that service will overwrite the subject identifier generated by the proxy.
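As a hedged illustration (the attribute names are placeholders), the following internal_attributes.yaml fragment would build the subject identifier from givenName and mail and store it in the internal attribute uid:

# placeholder attribute names; adjust to the attributes defined under "attributes"
user_id_from_attrs: [givenName, mail]
user_id_to_attr: uid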
The authentication protocol specific communication is handled by different plugins, divided into frontends (receiving requests from clients) and backends (sending requests to target providers).
Both name and module must be specified in all plugin configurations (frontends, backends, and micro services). The name must be unique to ensure correct functionality, and the module must be the fully qualified name of an importable Python module.
Common configuration parameters:
Parameter name | Data type | Example value | Description |
---|---|---|---|
organization | dict | {display_name: Example Identities, name: Example Identities Organization, url: https://www.example.com} | information about the organization; will be published in the SAML metadata |
contact_person | dict[] | {contact_type: technical, given_name: Someone Technical, email_address: [email protected]} | list of contact information; will be published in the SAML metadata |
key_file | string | pki/key.pem | path to the private key used for signing (backend) / decrypting (frontend) SAML2 assertions |
cert_file | string | pki/cert.pem | path to the certificate for the public key associated with the private key in key_file |
metadata["local"] | string[] | [metadata/entity.xml] | list of paths to metadata for all service providers (frontend) / identity providers (backend) communicating with the proxy |
attribute_profile | string | saml | attribute profile to use for mapping attributes from/to responses |
entityid_endpoint | bool | true | whether the entityid should be used as a URL that serves the metadata XML document |
acr_mapping | dict | None | custom Authentication Context Class Reference mapping |
Metadata can be loaded in multiple ways; in the table above it is loaded from a static file by using the key "local". It is also possible to read the metadata from a remote URL.
Examples:
Metadata from local file:
"metadata":
local: [idp.xml]
Metadata from remote URL:
"metadata": {
"remote":
- url:https://kalmar2.org/simplesaml/module.php/aggregator/?id=kalmarcentral2&set=saml2
cert:null
}
For more detailed information on how you could customize the SAML entities, see the documentation of the underlying library pysaml2.
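Putting the common parameters above together, a hedged sketch of a SAML2 backend plugin file might look like the following. It assumes the pysaml2-style sp_config layout used by the shipped example files, so check saml2_backend.yaml.example for the authoritative structure:

module: satosa.backends.saml2.SAMLBackend
name: Saml2
config:
  sp_config:
    # signing/decryption key pair for the proxy SP
    key_file: pki/key.pem
    cert_file: pki/cert.pem
    organization: {display_name: Example Identities, name: Example Identities Organization, url: https://www.example.com}
    contact_person:
      - {contact_type: technical, given_name: Someone Technical, email_address: [email protected]}
    metadata:
      # metadata of the identity providers the backend talks to
      local: [metadata/entity.xml]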
SAML2 frontends and backends can provide a custom (configurable) Authentication Context Class Reference.
For the frontend this is defined in the AuthnStatement of the authentication response, while for the backend it is defined in the AuthnRequest.
This can be used to describe, for example, the Level of Assurance, as defined by eIDAS.
The AuthnContextClassRef (ACR) can be specified per target provider in a mapping under the configuration parameter acr_mapping. The mapping must contain a default ACR value under the key "" (empty string); every other ACR value, specific to a target provider, is specified with key-value pairs, where the key is the target provider's identifier (entity id for a SAML IdP behind the SAML2 backend, authorization endpoint URL for an OAuth AS behind the OAuth backend, and issuer URL for an OpenID Connect OP behind the OpenID Connect backend).
If no acr_mapping is provided in the configuration, the ACR received from the backend plugin will be used instead. This means that when using a SAML2 backend, the ACR provided by the target provider will be preserved, and when using an OAuth or OpenID Connect backend, the ACR will be urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified.
Example
config:
  [...]
  acr_mapping:
    "": default-LoA
    "https://accounts.google.com": LoA1
The SAML2 frontend acts as a SAML Identity Provider (IdP), accepting authentication requests from SAML Service Providers (SPs). The default configuration file can be found here.
The SAML2 frontend comes in three different flavors:
- The SAMLFrontend module acts like a single IdP and hides all target providers. This enables the proxy to support SPs which only support communication with a single IdP, while the proxy seamlessly communicates with multiple target providers. The metadata for the published IdP will contain one Single Sign On location for each target provider. The following flow diagram shows the communication:
  SP -> proxy SAML SSO location -> target IdP
  For the simple case where an SP does not support discovery, it is also possible to delegate the discovery to the SAMLBackend (see below), which enables the following communication flow:
  SP -> SAMLFrontend -> SAMLBackend -> discovery to select target IdP -> target IdP
- The SAMLMirrorFrontend module mirrors each target provider as a separate entity in the SAML metadata. In the proxy this is handled with dynamic entity ids, encoding the target provider. This allows external discovery services to present the mirrored providers transparently, as separate entities in their UI. The following flow diagram shows the communication:
  SP -> optional discovery service -> selected proxy SAML entity -> target IdP
- The SAMLVirtualCoFrontend module enables multiple IdP frontends, each with its own distinct entityID and SSO endpoints, and each representing a distinct collaborative organization or CO. An example configuration can be found here. The following flow diagram shows the communication:
  SP -> Virtual CO SAMLFrontend -> SAMLBackend -> optional discovery service -> target IdP
In addition to respecting, for example, entity categories from the SAML metadata, the SAML frontend can further restrict the attribute release with the custom_attribute_release configuration parameter, based on the SP entity id.
To exclude an attribute, include its friendly name in the exclude list for the SP.
In the following example, the given name is never released from the IdP with entity id "idp-entity-id1" to the SP with entity id "sp-entity-id1":
config:
  idp_config: [...]
  custom_attribute_release:
    idp-entity-id1:
      sp-entity-id1:
        exclude: ["givenName"]
The custom_attribute_release mechanism supports defaults based on the IdP and SP entity ids by specifying "" or "default" as the key in the dict. For instance, to exclude givenName for any SP or IdP:
config:
  idp_config: [...]
  custom_attribute_release:
    "default":
      "":
        exclude: ["givenName"]
Some settings related to how a SAML response is formed can be overridden on a per-instance or a per-SP basis. This example summarizes the most common settings (hopefully self-explanatory) with their defaults:
config:
  idp_config:
    service:
      idp:
        policy:
          default:
            sign_response: True
            sign_assertion: False
            sign_alg: "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"
            digest_alg: "http://www.w3.org/2001/04/xmlenc#sha256"
          <sp entityID>:
            ...
Overrides per SP entityID are possible by using the entityID as a key instead of the "default" key in the YAML structure. The most specific key takes precedence. If no policy overrides are provided, the defaults above are used.
The SAML2 backend acts as a SAML Service Provider (SP), making authentication requests to SAML Identity Providers (IdPs). The default configuration file can be found here.
The SAML backend can indicate which Name ID format it wants by specifying the key name_id_format in the SP entity configuration in the backend plugin configuration:
config:
  sp_config:
    service:
      sp:
        name_id_format: urn:oasis:names:tc:SAML:2.0:nameid-format:transient
To allow the user to choose which target provider they want to authenticate with, the configuration parameter disco_srv must be specified if the metadata given to the backend module contains more than one IdP:
config:
  sp_config: [...]
  disco_srv: http://disco.example.com
By default, when the SAML frontend receives a SAML authentication request with ForceAuthn set to True, this information is not mirrored in the SAML authentication request that is generated by the SAML backend towards the upstream identity provider. If the configuration option mirror_force_authn is set to True, the default behaviour changes and the SAML backend will set ForceAuthn to true when it proxies a SAML authentication request with ForceAuthn set to True.
The default behaviour is False.
config:
  mirror_force_authn: True
  [...]
In the classic flow, the user is asked to select their home organization to authenticate with. The memorize_idp configuration option controls whether the user always has to select a target provider when a discovery service is configured. If the parameter is set to True (and ForceAuthn is not set), the proxy will remember and reuse the selected target provider for as long as the state cookie is valid. If ForceAuthn is set, then the use_memorized_idp_when_force_authn configuration option can override this behaviour and still reuse the selected target provider.
The default behaviour is False.
config:
  memorize_idp: True
  [...]
The use_memorized_idp_when_force_authn configuration option controls whether the user will skip the configured discovery service when the SP sends a SAML authentication request with ForceAuthn set to True but the proxy has memorized the user's previous selection.
The default behaviour is False.
config:
  memorize_idp: True
  use_memorized_idp_when_force_authn: True
  [...]
The dynamic_requested_attributes option can be used to enable the eIDAS requested attributes extension for requesting attributes from the IdP. These attributes are populated dynamically, using the attributes which were requested from the frontend. In order for this to work, the frontend must populate the internal request's attributes field.
To enable this feature, provide a list of the friendly names of the attributes which should be requestable, and whether they are required or not. E.g.:
config:
  dynamic_requested_attributes:
    - friendly_name: attr1
      required: True
    - friendly_name: attr2
      required: False
  [...]
The OpenID Connect backend acts as an OpenID Connect Relying Party (RP), making authentication requests to an OpenID Connect Provider (OP). The default configuration file can be found here.
The example configuration assumes the OP supports discovery and dynamic client registration. When using an OP that only supports statically registered clients, see the default configuration for using Google as the OP and make sure to provide the redirect URI, constructed as described in the section about Google configuration below, in the static registration.
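As a hedged sketch (the key layout is assumed to match the shipped openid_connect_backend.yaml.example; the issuer, client id and client secret below are placeholders), a backend configured against a statically registered client might look like:

module: satosa.backends.openid_connect.OpenIDConnectBackend
name: openid_connect
config:
  provider_metadata:
    issuer: https://op.example.com              # placeholder OP issuer
  client:
    auth_req_params:
      response_type: code
      scope: [openid, profile, email]
    client_metadata:
      client_id: statically-registered-client   # placeholder, issued by the OP
      client_secret: statically-registered-secret  # placeholder, issued by the OP
      redirect_uris: [<base_url>/<name>]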
The OpenID Connect frontend acts as an OpenID Connect Provider (OP), accepting requests from OpenID Connect Relying Parties (RPs). The default configuration file can be found here.
As opposed to the other plugins, this plugin is NOT stateless (due to the nature of OpenID Connect when using any flow other than the Implicit Flow). However, the frontend supports using a MongoDB instance as its backend storage, so as long as that is reachable from all machines it should not be a problem.
The configuration parameters available:
- signing_key_path: path to an RSA private key file (PKCS#1). MUST be configured.
- db_uri: connection URI to the MongoDB instance where the data will be persisted. If it is not specified, all data will only be stored in memory (not suitable for production use).
- provider: provider configuration information. MUST be configured; the following configuration options are supported:
  - response_types_supported (default: [id_token]): list of all supported response types, see Section 3 of OIDC Core.
  - subject_types_supported (default: [pairwise]): list of all supported subject identifier types, see Section 8 of OIDC Core.
  - scopes_supported (default: [openid]): list of all supported scopes, see Section 5.4 of OIDC Core.
  - client_registration_supported (default: No): boolean for whether dynamic client registration is supported. If dynamic client registration is not supported, all clients must exist in the MongoDB instance configured by db_uri, in the "clients" collection of the "satosa" database. The registration info must be stored using the client id as a key, and use the parameter names of an OIDC Registration Response.
  - authorization_code_lifetime: how long authorization codes should be valid, see default.
  - access_token_lifetime: how long access tokens should be valid, see default.
  - refresh_token_lifetime: how long refresh tokens should be valid; if not specified, no refresh tokens will be issued (the default).
  - refresh_token_threshold: how long before expiration refresh tokens should be refreshed; if not specified, refresh tokens will never be refreshed (the default).
The other parameters should be left with their default values.
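A hedged sketch of an openid_connect_frontend.yaml using the parameters above; the key file path and MongoDB URI are placeholders, and the exact module path should be checked against the shipped example file:

module: satosa.frontends.openid_connect.OpenIDConnectFrontend
name: OIDC
config:
  signing_key_path: frontend_signing_key.pem   # placeholder path to an RSA private key (PKCS#1)
  db_uri: mongodb://db.example.com/satosa      # placeholder; omit to keep all data in memory only
  provider:
    response_types_supported: [id_token]
    subject_types_supported: [pairwise]
    scopes_supported: [openid, email]
    client_registration_supported: Yes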
The social login plugins can be used as backends for the proxy, allowing the proxy to act as a client to the social login services.
The default configuration file can be found here.
The only parameters necessary to configure are the credentials (client_id and client_secret) issued by Google. See OAuth 2.0 credentials for information on how to obtain them.
The redirect URI of the SATOSA proxy must be registered with Google. The redirect URI to register with Google is the same as the first redirect URI specified in config["client"]["client_metadata"]["redirect_uris"].
It should use the available variables, <base_url> and <name>, where:
- <base_url> is the base URL of the proxy as specified in the BASE configuration parameter in proxy_conf.yaml, e.g. "https://proxy.example.com".
- <name> is the plugin name specified in the name configuration parameter defined in the plugin configuration file.
The example config in google_backend.yaml.example:
name: google
config:
  client:
    client_metadata:
      redirect_uris: [<base_url>/<name>]
  [...]
together with BASE: "https://proxy.example.com" in proxy_conf.yaml would yield the redirect URI https://proxy.example.com/google to register with Google.
A list of all claims possibly released by Google can be found here, which should be used when configuring the attribute mapping (see above).
The default configuration file can be found here.
The only parameters necessary to configure are the credentials, the "App ID" (client_id) and "App Secret" (client_secret), issued by Facebook. See the registration instructions for information on how to obtain them.
A list of all user attributes released by Facebook can be found here, which should be used when configuring the attribute mapping (see above).
The ping frontend responds to a query with a simple 200 OK and is intended to be used as a simple heartbeat monitor, for example by a load balancer. The default configuration file can be found here.
Additional behaviour can be configured in the proxy through so called micro services. There are two different types of micro services: request micro services which are applied to the incoming request, and response micro services which are applied to the incoming response from the target provider.
The following micro services are bundled with SATOSA.
To add a set of static attributes, use the AddStaticAttributes class, which will add pre-configured (static) attributes; see the example configuration.
The static attributes are described as key-value pairs in the YAML file, e.g.:
organisation: Example Org.
country: Sweden
where the keys are the internal attribute names defined in internal_attributes.yaml.
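A hedged sketch of the complete micro service plugin file for this; the module path and the static_attributes key are assumed to match the shipped example configuration:

module: satosa.micro_services.attribute_modifications.AddStaticAttributes
name: AddStaticAttributes
config:
  static_attributes:
    # keys are internal attribute names from internal_attributes.yaml
    organisation: Example Org.
    country: Sweden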
Attribute values delivered from the target provider can be filtered on a per target provider, per requester basis using the FilterAttributeValues class. See the example configuration.
The filters are described as regular expressions in a YAML file with the following structure:
<target_provider>:
  <requester>:
    <attribute_name>: <regex_filter>
where the empty string ("") can be used as a key on any level to describe a default filter.
The filters are applied such that all attribute values matched by the regular expression are preserved, while any non-matching attribute values are discarded.
Filter attributes from the target provider https://provider.example.com, to only preserve values starting with the string "foo:bar":
"https://provider.example.com":
  "":
    "": "^foo:bar"
Filter the attribute attr1 to only preserve values ending with the string "foo:bar":
"":
  "":
    "attr1": "foo:bar$"
Filter the attribute attr1 for the requester https://client.example.com, to only preserve values containing the string "foo:bar":
"":
  "https://client.example.com":
    "attr1": "foo:bar"
To choose which backend (essentially choosing the target provider) to use based on the requester, use the DecideBackendByRequester class, which implements that routing behavior. See the example configuration.
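A hedged sketch, assuming the requester_mapping key used by the shipped example configuration; the requester entity id and backend name are placeholders:

module: satosa.micro_services.custom_routing.DecideBackendByRequester
name: DecideBackendByRequester
config:
  requester_mapping:
    "https://sp.example.com": saml2_backend_name   # placeholder: requester entity id -> backend plugin name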
If using the SAMLMirrorFrontend module, and some of the target providers should only be available to certain SPs, the DecideIfRequesterIsAllowed micro service can be used. It provides a rules mechanism to describe which SPs are allowed to send requests to which IdPs. See the example configuration.
Metadata containing all SPs (any SP that might be allowed by a target IdP) must be in the metadata configured in the SAMLMirrorFrontend plugin config.
The rules are described using allow and deny directives under the rules configuration parameter.
In the following example, the target IdP target_entity_id1 only allows requests from requester1 and requester2.
rules:
  target_entity_id1:
    allow: ["requester1", "requester2"]
SPs are by default denied if the IdP has any rules associated with it (i.e. the IdP's entity id is a key in the rules mapping). However, if the IdP does not have any rules associated with its entity id, all SPs are allowed by default.
Deny all but one SP:
rules:
  target_entity_id1:
    allow: ["requester1"]
    deny: ["*"]
Allow all but one SP:
rules:
  target_entity_id1:
    allow: ["*"]
    deny: ["requester1"]
To allow account linking (multiple accounts at possibly different target providers are linked together as belonging to the same user), an external service can be used. See the example config which is intended to work with the ALService (or any other service providing the same REST API).
This micro service must be the first in the list of configured micro services in proxy_conf.yaml to ensure correct functionality.
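A hedged sketch of the account linking micro service configuration; the module path and the api_url/redirect_url/sign_key keys are assumptions based on the shipped example config, and the URLs and key path are placeholders:

module: satosa.micro_services.account_linking.AccountLinking
name: AccountLinking
config:
  api_url: https://al.example.com/api           # placeholder ALService API endpoint
  redirect_url: https://al.example.com/approve  # placeholder ALService approval page
  sign_key: pki/al_sign.key                     # placeholder path to the key used to sign requests to the ALService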
To handle user consent of released information, an external service can be used. See the example config which is intended to work with the CMService (or any other service providing the same REST API).
This micro service must be the last in the list of configured micro services in proxy_conf.yaml to ensure correct functionality.
An identifier such as eduPersonPrincipalName asserted by an IdP can be used to look up a person record in an LDAP directory, to find attributes to assert about the authenticated user to the SP. The identifier to consume from the IdP, the LDAP directory details, and the mapping of attributes found in the directory may all be configured on a per-SP basis. The input to use when hashing to create a persistent NameID may also be obtained from attributes returned from the LDAP directory. To use the LDAP micro service, install the extra necessary dependencies with pip install satosa[ldap] and then see the example config.
It's possible to write custom plugins which can be loaded by SATOSA. They have to be contained in a Python module, which must be importable from one of the paths specified by CUSTOM_PLUGIN_MODULE_PATHS in proxy_conf.yaml.
Depending on the type of plugin, it has to inherit from the correct base class and implement the specified methods:
- Frontends must inherit satosa.frontends.base.FrontendModule.
- Backends must inherit satosa.backends.base.BackendModule.
- Request micro services must inherit satosa.micro_services.base.RequestMicroService.
- Response micro services must inherit satosa.micro_services.base.ResponseMicroService.
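A hedged sketch of how a custom micro service might be wired up; the module name my_microservice, the class MyResponseMicroService, and the file paths are hypothetical placeholders:

# proxy_conf.yaml (excerpt)
CUSTOM_PLUGIN_MODULE_PATHS: [plugins/custom]          # directory containing my_microservice.py (hypothetical)
MICRO_SERVICES: [plugins/custom/my_microservice.yaml]

# plugins/custom/my_microservice.yaml
module: my_microservice.MyResponseMicroService        # hypothetical class inheriting ResponseMicroService
name: MyResponseMicroService
config: {}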
The proxy metadata is generated based on the front-/backend plugins listed in proxy_conf.yaml, using the satosa-saml-metadata tool (installed as part of the SATOSA installation).
To produce signed SAML metadata for all SAML front- and backend modules, run the following command:
satosa-saml-metadata <path to proxy_conf.yaml> <path to key for signing> <path to cert for signing>
Detailed usage instructions can be viewed by running satosa-saml-metadata --help.
The SATOSA proxy is a Python WSGI application and so may be run using any WSGI compliant web server.
Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX and is the server used most often to run the proxy. In a production deployment the Gunicorn server is often proxied by a full featured general purpose web server (in a reverse proxy architecture) such as Nginx or Apache HTTP Server to help buffer slow clients and enable more sophisticated error page rendering.
Start the proxy server with the following command:
gunicorn -b <socket address> satosa.wsgi:app --keyfile=<https key> --certfile=<https cert>
where
- <socket address> is the socket address that gunicorn should bind to for incoming requests, e.g. 0.0.0.0:8080
- <https key> is the path to the private key to use for HTTPS, e.g. pki/key.pem
- <https cert> is the path to the certificate to use for HTTPS, e.g. pki/cert.pem
This will use the proxy_conf.yaml file in the working directory. If the proxy_conf.yaml is located somewhere else, use the environment variable SATOSA_CONFIG to specify the path, e.g.:
export SATOSA_CONFIG=/home/user/proxy_conf.yaml