This repo demonstrates configuring the Celery task queue with Flask in the application factory pattern.
The Flask application factory pattern delays configuration until the WSGI server is started, which allows for secure, dynamic configuration. The official Celery tutorials assume all configuration is available at import time, so this sample Flask server shows how to configure Celery in a factory pattern.
Specifically, this example provides:

- support for late binding of the broker URL
- execution of all Celery tasks within an app context (see the sketch below)
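The core of the pattern looks roughly like the following minimal sketch. The names here (`celery`, `create_app`, `ContextTask`) are illustrative assumptions, not necessarily those used in `server/core.py`:

```python
import os

from celery import Celery
from flask import Flask

# Created at import time without a broker URL; configured later.
celery = Celery(__name__)


def create_app():
    app = Flask(__name__)

    # Late binding: the broker/backend URLs are read when the factory
    # runs (i.e. when the WSGI server starts), not at import time.
    celery.conf.broker_url = os.environ["CELERY_BROKER_URL"]
    celery.conf.result_backend = os.environ["CELERY_RESULT_BACKEND"]

    class ContextTask(celery.Task):
        # Wrap every task invocation in an app context so tasks can
        # use current_app, extensions, the database, etc.
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery.Task = ContextTask
    return app
```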
This sample aims to simulate a realistic Flask server by employing Blueprints and separate files for view functions and Celery task definitions.
The repo is organized as follows:

- `server/` is the app
- `server/core.py` creates the application factory
- `server/controller/routes.py` defines the endpoints
- `server/controller/tasks.py` defines the Celery tasks
- `entrypoint_api.py` is a CLI interface for starting the Flask app in debug mode
- `entrypoint_celery.py` is the entrypoint for the Celery worker
- `requirements.txt` is the list of Python dependencies for pip
- `docker.env` defines the environment variables for the app
- `docker-compose.yml` defines the services
- `Dockerfile` is the image for the app & Celery worker
The Flask app exposes an API that accepts a `POST` request to `/sleep/<seconds>` to start a task and return its ID. To check on the status of that task, issue a `GET` request to `/sleep/<task_id>`. The featured task is a dummy function that sleeps for `<seconds>` seconds and then returns a datetime.
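For a sense of what this looks like, here is an illustrative sketch of the endpoint/task pair. The actual definitions live in `server/controller/routes.py` and `server/controller/tasks.py`; the names and the import path below are assumptions:

```python
import time
from datetime import datetime

from flask import Blueprint, jsonify

from server.core import celery  # hypothetical import path

bp = Blueprint("sleep", __name__)


@celery.task
def sleep_task(seconds):
    """Dummy task: sleep, then return the current time."""
    time.sleep(seconds)
    return datetime.utcnow().isoformat()


@bp.route("/sleep/<int:seconds>", methods=["POST"])
def start_sleep(seconds):
    # Enqueue the task and hand back its ID for later polling.
    result = sleep_task.delay(seconds)
    return jsonify({"task_id": result.id})


@bp.route("/sleep/<task_id>", methods=["GET"])
def check_sleep(task_id):
    # Look up the task's state (and result, once finished) in the backend.
    result = sleep_task.AsyncResult(task_id)
    return jsonify({
        "state": result.state,
        "result": result.result if result.ready() else None,
    })
```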
Per the recommendations of the Celery documentation, this Flask/Celery app was tested with RabbitMQ as the message broker and Redis as the results backend, although (in theory) it should accept any supported broker/backend combination.
This sample runs the services as Docker containers (see `./docker-compose.yml`), but feel free to run them locally or in the cloud if that is more convenient for your use case; just make sure you modify the URLs in the configuration file accordingly.
This Flask server accepts configuration as environment variables, which are set by default in the file `./docker.env`.
Configuration:

- `CELERY_BROKER_URL` is the RabbitMQ URL
- `CELERY_RESULT_BACKEND` is the Redis URL
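For illustration, the values in `./docker.env` might look something like the following. The hostnames and credentials here are assumptions that depend on the service names in `docker-compose.yml`; check the actual file for the real defaults:

```sh
# Illustrative values only; see ./docker.env for the actual defaults.
CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
CELERY_RESULT_BACKEND=redis://redis:6379/0
```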
You can run this example by starting the services with `docker-compose`.
Pull and build all images:
```sh
docker-compose build
```
Start all the containers in the background:

```sh
docker-compose up -d
```
To check on the state of the containers, run:
```sh
docker-compose ps
```
Observe the API and Celery worker logs:

```sh
docker-compose logs -f api worker
```
Create a single 30-second sleep task:

```sh
curl -X POST http://localhost:8080/sleep/30
```
The above command will return a `<task_id>`, which can be used to check on the status of that task:

```sh
curl -X GET localhost:8080/sleep/<task_id>
```
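If you prefer to drive the API programmatically, here is a short Python sketch. It assumes the endpoints return JSON shaped like `{"task_id": ...}` and `{"state": ...}`, which may differ from the actual response format:

```python
import time

import requests

BASE_URL = "http://localhost:8080"

# Kick off a 30-second sleep task.
resp = requests.post(f"{BASE_URL}/sleep/30")
task_id = resp.json()["task_id"]  # assumed response shape

# Poll until the task reaches a terminal state.
while True:
    status = requests.get(f"{BASE_URL}/sleep/{task_id}").json()
    print(status)
    if status.get("state") in ("SUCCESS", "FAILURE"):
        break
    time.sleep(2)
```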
You can bring down all containers in this sample app with:
```sh
docker-compose down
```

To make sure they're gone, check with `docker-compose ps`.