Experimenter is a platform for managing experiments in Mozilla Firefox.
Check out the 🌩 Nimbus Documentation Hub or go to the repository that houses those docs.
| Link | Prod | Staging | Local Dev (Default) |
|---|---|---|---|
| Legacy Home | experimenter.services.mozilla.com | stage.experimenter.nonprod.dataops.mozgcp.net | https://localhost |
| Nimbus Home | /nimbus | /nimbus | /nimbus |
| Nimbus REST API | /api/v6/experiments/ | /api/v6/experiments/ | /api/v6/experiments/ |
| GQL Playground | /api/v5/nimbus-api-graphql | /api/v5/nimbus-api-graphql | /api/v5/nimbus-api-graphql |
| Storybook | Storybook Directory | | https://localhost:3001 |
| Remote Settings | settings-writer.prod.mozaws.net/v1/admin | settings-writer.stage.mozaws.net/v1/admin | http://localhost:8888/v1/admin |
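As a quick illustration, the Nimbus REST API on a local dev instance can be queried with a few lines of Python; the `slug` and `status` fields are assumptions about the serialized experiment shape, and `verify=False` is only there because the dev server uses a self-signed certificate:

```python
# Hedged sketch: list experiments from the Nimbus REST API on a local dev
# instance started with `make up`. Field names are assumptions; inspect the
# raw JSON to see exactly what the serializer returns.
import requests

resp = requests.get("https://localhost/api/v6/experiments/", verify=False)
resp.raise_for_status()
for experiment in resp.json():
    print(experiment.get("slug"), experiment.get("status"))
```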
- Install docker on your machine
  - On Linux, set up docker to run as non-root
- Clone the repo: `git clone <your fork>`
- Copy the sample env file: `cp .env.sample .env`
- Set `DEBUG=True` for local development: `vi .env`
- Create a new secret key and put it in `.env`: `make secretkey`
- Run tests: `make check`
- Set up the database: `make refresh`
- Run a dev instance: `make up`
- Navigate to https://localhost/ and add an SSL exception to your browser
One might choose the semi-dockerized approach for:
- faster startup/teardown time (not having to rebuild/start/stop containers)
- better IDE integration
Notes:
- Node ^14.0.0 is required
- Prerequisites (macOS instructions):

  ```sh
  brew install postgresql llvm openssl yarn
  echo 'export PATH="/usr/local/opt/llvm/bin:$PATH"' >> ~/.bash_profile
  export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/opt/openssl/lib/
  ```

- Install dependencies:

  ```sh
  source .env
  poetry install
  # cd into app
  yarn install
  ```

- Env values in `.env` (set at root):

  ```
  DEBUG=True
  DB_HOST=localhost
  HOSTNAME=localhost
  ```

- Start postgresql, redis, autograph, kinto:

  ```sh
  make up_db
  ```

- Run the Django app:

  ```sh
  # in app
  poetry shell
  yarn workspace @experimenter/nimbus-ui build
  yarn workspace @experimenter/core build
  ./manage.py runserver 0.0.0.0:7001
  ```
Pro-tip: we have had at least one large code refactor. You can ignore specific large commits when blaming by setting the Git config's `ignoreRevsFile` to `.git-blame-ignore-revs`:

```sh
git config blame.ignoreRevsFile .git-blame-ignore-revs
```
On certain pages an API endpoint is called to receive experiment analysis data from Jetstream to display visualization tables. To see experiment visualization data, you must provide GCP credentials.
- Generate a GCP private key file.
  - Ask in #experimenter for the GCP link to create a new key file.
  - Add Key > Create New Key > JSON > save this file.
  - Do not lose or share this file. It's unique to you and you'll only get it once.
- Rename the file to `google-credentials.json` and place it anywhere inside the `/app` directory.
- Update your `.env` so that `GOOGLE_APPLICATION_CREDENTIALS` points to this file. If your file is inside the `/app` directory it would look like this: `GOOGLE_APPLICATION_CREDENTIALS=/app/google-credentials.json` (see the sanity-check sketch below).
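If you want to confirm the credentials file is readable before exercising the analysis endpoints, here is a minimal sketch assuming the `google-auth` package is installed and the `/app/google-credentials.json` path from the example above:

```python
# Minimal sanity check for the GCP credentials file. The fallback path is an
# assumption based on the example above; GOOGLE_APPLICATION_CREDENTIALS wins.
import os

from google.oauth2 import service_account

path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", "/app/google-credentials.json")
credentials = service_account.Credentials.from_service_account_file(path)
print("Loaded credentials for:", credentials.service_account_email)
```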
Experimenter uses docker for all development, testing, and deployment.
Build the application container by executing the build script
Build the supporting services (nginx, postgresql) defined in the compose file
Create dummy SSL certs to use the dev server over a locally secure connection. This helps test client behaviour with a secure connection. This task is run automatically when needed.
Stop and delete all docker containers. WARNING: this will remove your database and all data. Use this to reset your dev environment.
Apply all django migrations to the database. This must be run after removing database volumes and before starting a dev instance.
Populates locales and countries in the database from the Firefox Product Details package.
Populates the database with dummy experiments of all types/statuses using the test factories.
Run kill, migrate, load_locales_countries, and load_dummy_experiments. Useful for resetting your dev environment when switching branches or after package updates.
Start a dev server listening on port 80 using the Django runserver. It is useful to run `make refresh` first to ensure your database is up to date with the latest migrations and test data.
Start postgresql, redis, autograph, kinto on their respective ports to allow running the Django runserver and yarn watchers locally (non-containerized).
Start Django runserver, Celery worker, postgresql, redis, autograph, kinto on their respective ports to allow running the yarn watchers locally (non-containerized).
Start all containers in the background (not attached to shell). They can be stopped using `make kill`.
Pull in the latest Kinto Docker image. Kinto is not automatically updated when new versions are available, so this command can be used occasionally to stay in sync.
Run all test and lint suites; this is run in CI on all PRs and deploys.
Run only the python test suite.
Start a bash shell inside the container. This lets you interact with the containerized filesystem and run Django management commands.
You can run the entire python test suite without coverage using the Django test runner:
./manage.py test
For faster performance you can run all tests in parallel:
./manage.py test --parallel
You can run only the tests in a certain module by specifying its Python import path:
./manage.py test experimenter.experiments.tests.api.v5.test_serializers
For more details on running Django tests, refer to the Django test documentation.
To debug a test, you can use ipdb by placing this snippet anywhere in your code, such as within a test method or inside some application logic:
import ipdb
ipdb.set_trace()
Then invoke the test using its full path:
./manage.py test experimenter.some_module.tests.some_test_file.SomeTestClass.test_some_thing
And you will enter an interactive IPython shell at the point where you placed the ipdb snippet, allowing you to introspect variables and call methods.
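For example, a test method with the snippet dropped in might look like the following; the test class and assertion are illustrative, while the factory call mirrors the shell example further down:

```python
# Illustrative test showing where to place the ipdb snippet; the class name and
# assertion are examples, not code from the repository.
from django.test import TestCase

from experimenter.experiments.models import NimbusExperiment
from experimenter.experiments.tests.factories import NimbusExperimentFactory


class SomeTestClass(TestCase):
    def test_some_thing(self):
        experiment = NimbusExperimentFactory.create_with_status(
            NimbusExperiment.Status.DRAFT, name="Debug me"
        )

        import ipdb
        ipdb.set_trace()  # execution pauses here; inspect `experiment` interactively

        self.assertEqual(experiment.status, NimbusExperiment.Status.DRAFT)
```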
For coverage you can use pytest, which will run all the python tests and track their coverage, but it is slower than using the Django test runner:
pytest --cov --cov-report term-missing
You can also enter a Python shell to import and interact with code directly, for example:
./manage.py shell
And then you can import and execute arbitrary code:
from experimenter.experiments.models import NimbusExperiment
from experimenter.experiments.tests.factories import NimbusExperimentFactory
from experimenter.kinto.tasks import nimbus_push_experiment_to_kinto
experiment = NimbusExperimentFactory.create_with_status(NimbusExperiment.Status.DRAFT, name="Look at me, I'm Mr Experiment")
nimbus_push_experiment_to_kinto(experiment.id)
You can also interact with the yarn commands, such as checking TypeScript for Nimbus UI:
yarn workspace @experimenter/nimbus-ui lint:tsc
Or the test suite for Nimbus UI:
yarn workspace @experimenter/nimbus-ui test:cov
For a full reference of all the common commands that can be run inside the container, refer to this section of the Makefile.
Run the integration test suite for experimenter inside a containerized instance of Firefox. You must also be already running a `make up` dev instance in another shell to run the integration tests.
Run the integration test suite for nimbus inside a containerized instance of Firefox. You must also be already running a `make up` dev instance in another shell to run the integration tests.
First start a prod instance of Experimenter with:
make refresh && make up_prod_detached
Then start the VNC service:
make integration_vnc_up
Then open your VNC client (Safari does this on OSX or just use VNC Viewer) and open `vnc://localhost:5900` with password `secret`. Right click on the desktop and select `Applications > Shell > Bash` and enter:
cd app
sudo mkdir -m 0777 tests/integration/.tox/logs
tox -c tests/integration/
This should run the integration tests in a Firefox instance that you can watch and interact with.
An example using `PYTEST_ARGS` to run a single test:
make integration_test_legacy PYTEST_ARGS="-k test_addon_rollout_experiment_e2e"
In development you may wish to approve or reject changes to experiments as if they were on Remote Settings. You can do so here: http://localhost:8888/v1/admin/
There are three accounts you can log into Kinto with depending on what you want to do:
- `admin`/`admin` - This account has permission to view and edit all of the collections.
- `experimenter`/`experimenter` - This account is used by Experimenter to push its changes to Remote Settings and mark them for review.
- `review`/`review` - This account should generally be used by developers testing the workflow; it can be used to approve/reject changes pushed from Experimenter.
The `admin` and `review` credentials are hard-coded here, and the `experimenter` credentials can be found or updated in your `.env` file under `KINTO_USER` and `KINTO_PASS`.
Any change in Remote Settings requires two accounts:
- One to make changes and request a review
- One to review and approve/reject those changes
Any of the accounts above can be used for either of those roles, but your local Experimenter will be configured to make its changes through the `experimenter` account, so that account can't also be used to approve/reject those changes, hence the existence of the `review` account.
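If you want to approve a pending change from code rather than through the admin UI, a minimal sketch using the `kinto-http` Python client against the local Kinto instance could look like this; the bucket and collection names are assumptions for a local Nimbus setup, so check them against your `.env`:

```python
# Hedged sketch: approve a pending Remote Settings change as the `review`
# account. Bucket/collection names are assumptions; adjust to your local setup.
from kinto_http import Client

client = Client(
    server_url="http://localhost:8888/v1",
    auth=("review", "review"),
)

# Setting the collection status to "to-sign" approves the pending review;
# "to-rework" would reject it and send it back for changes.
client.patch_collection(
    id="nimbus-desktop-experiments",
    bucket="main-workspace",
    data={"status": "to-sign"},
)
```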
For more detailed information on the Remote Settings integration please see the Kinto module documentation.
This project uses Storybook as a tool for building and demoing user interface components in React.
For most test runs in CircleCI, a static build of Storybook for the relevant commit is published to a website on the Google Cloud Platform using mozilla-fxa/storybook-gcp-publisher. Refer to that tool's github repository for more details.
You can find the Storybook build associated with a given commit on GitHub via the "storybooks: pull request" details link, accessible by clicking the green checkmark next to the commit title.
The Google Cloud Platform project dashboard for the website can be found here, if you've been given access.
For quick reference, here are a few CircleCI environment variables used by storybook-gcp-publisher that are relevant to FxA operations in CircleCI. Occasionally they may need maintenance or replacement - e.g. in case of a security incident involving another tool that exposes variables.
- `STORYBOOKS_GITHUB_TOKEN` - personal access token on GitHub for use in posting status check updates
- `STORYBOOKS_GCP_BUCKET` - name of the GCP bucket to which Storybook builds will be uploaded
- `STORYBOOKS_GCP_PROJECT_ID` - the ID of the GCP project to which the bucket belongs
- `STORYBOOKS_GCP_CLIENT_EMAIL` - client email address from GCP credentials with access to the bucket
- `STORYBOOKS_GCP_PRIVATE_KEY_BASE64` - the private key from GCP credentials, encoded with base64 to accommodate linebreaks (see the encoding sketch below)
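To produce the base64-encoded value for `STORYBOOKS_GCP_PRIVATE_KEY_BASE64`, a small sketch like the following works; the input filename is an assumption, so substitute the private key file from your GCP credentials:

```python
# Hedged sketch: base64-encode a GCP private key so it fits in a single-line
# CircleCI environment variable. The filename below is illustrative.
import base64

with open("gcp-private-key.pem", "rb") as key_file:
    encoded = base64.b64encode(key_file.read()).decode("ascii")

print(encoded)  # paste this value into STORYBOOKS_GCP_PRIVATE_KEY_BASE64
```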
Experimenter has two front-end UIs:
- `core` is the legacy UI used for Experimenter intake which will remain until `nimbus-ui` supersedes it
- `nimbus-ui` is the Nimbus Console UI for Experimenter that is actively being developed
Learn more about the organization of these UIs here.
Also see the nimbus-ui README for relevant Nimbus documentation.
API documentation can be found here
Please see our Contributing Guidelines
Experimenter uses the Mozilla Public License