Blueprint's extension ecosystem you know and love, in 🐳 Docker.
- classic-docker-compose.yml stays as close to the stock Pterodactyl compose file as possible
- This means it still has the obsolete "version" attribute, has no health checks, and does not use a .env file for configuration
- This file is simpler to look at and understand, mostly because it doesn't give you the same level of control and information as the recommended docker-compose.yml file
- docker-compose.yml (recommended) can be and has been improved over time
- If you are using this version, download and configure the .env file as well; most, if not all, configuration can be done through the .env file
- One thing to be prepared for is that Wings uses the host system's Docker Engine through the mounted socket; it does not use Docker in Docker.
- In practice, this means that if you customize the directory where you store your data, it must be mounted at the same path on both the host and container sides, and the values in your config.yml must match. Otherwise, the Wings container would see one path, while the game-server containers it creates (which are not affected by this docker-compose.yml's mounts) would see a different one. Here's an example:
- Mount in docker-compose.yml:
"${BASE_DIR}/:${BASE_DIR}/"
- Let's say, for the purposes of this example, that you set BASE_DIR in your .env file to /srv/pterodactyl. If you want to mount Wings server data in another location, just add another mount, making sure both sides of the mount match.
- Now, when you create your node, select a location inside the mount you made as the Daemon Server File Directory, e.g. /srv/pterodactyl/wings/servers
- After Wings runs successfully the first time, more options will appear in your config.yml file. They will look like this:
root_directory: /var/lib/pterodactyl
log_directory: /var/log/pterodactyl
data: /srv/pterodactyl/volumes
archive_directory: /var/lib/pterodactyl/archives
backup_directory: /var/lib/pterodactyl/backups
tmp_directory: /tmp/pterodactyl
- As you can see, only data gets set to your configured location. You can make the others match by changing /var/lib/pterodactyl to match your base directory, again for the example /srv/pterodactyl. Optionally, you can change the log location too if you'd like to keep everything possible inside one directory, which is one of the benefits of using containers. Once you're done, it may look like:
root_directory: /srv/pterodactyl
log_directory: /srv/pterodactyl/wings/logs
data: /srv/pterodactyl/volumes
archive_directory: /srv/pterodactyl/archives
backup_directory: /srv/pterodactyl/backups
tmp_directory: /tmp/pterodactyl
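If you'd rather not edit those paths by hand, a quick find-and-replace achieves the same thing. This is only a sketch: the config path and the wings service name below are assumptions, so substitute wherever your compose file actually mounts config.yml.
# The config path here is a placeholder; use your real mount location
sed -i 's|/var/lib/pterodactyl|/srv/pterodactyl|g' /srv/pterodactyl/wings/config.yml
# Restart Wings so it picks up the new paths (service name assumed to be "wings")
docker compose -f /srv/pterodactyl/docker-compose.yml restart wings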
Extensions must be placed/dragged into the extensions folder.
By default, you can only interact with Blueprint by going through the Docker Engine command line, i.e.
docker compose exec panel blueprint (arguments)
We recommend setting an alias so you can interact with Blueprint the same way you would in the non-Docker version (if your compose file is in a different location, adjust the path accordingly):
# Set alias for current session
alias blueprint="docker compose -f /srv/pterodactyl/docker-compose.yml exec panel blueprint"
# Append to the end of your .bashrc file to make it persistent
echo 'alias blueprint="docker compose -f /srv/pterodactyl/docker-compose.yml exec panel blueprint"' >> ~/.bashrc
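To make sure the alias is active in your current shell and points where you expect, reload your shell configuration and inspect it:
# Reload .bashrc so the persistent alias takes effect in this session
source ~/.bashrc
# Print what "blueprint" resolves to; it should show the alias definition
type blueprint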
Here's a quick example showcasing how you would go about installing extensions on the Docker version of Blueprint. Note that your experience can differ for every extension.
- Find an extension you would like to install and look for a file with the .blueprint file extension.
- Drag/upload the example.blueprint file into your extensions folder, i.e. /srv/pterodactyl/extensions by default.
- Install the extension through the Blueprint command line tool:
docker compose exec panel blueprint -i example
Alternatively, if you have applied the alias we suggested above:
blueprint -i example
So, you installed your first extension. Congratulations! Blueprint is now keeping persistent data inside the pterodactyl_app volume, so you'll want to start backing that volume up regularly.
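If you want to see where that data actually lives on the host (the path used by the backup script below), Docker can tell you. The volume name pterodactyl_app assumes your compose project is named pterodactyl; check docker volume ls if yours differs.
# Print the host path backing the pterodactyl_app volume
docker volume inspect pterodactyl_app --format '{{ .Mountpoint }}'
# Typically prints /var/lib/docker/volumes/pterodactyl_app/_data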
Why Restic? Compression, de-duplication, and incremental backups. Save on space compared to simply archiving the directory each time.
The package name is usually restic, e.g.
| Operating System | Command |
|---|---|
| Ubuntu / Debian / Linux Mint | sudo apt -y install restic |
| Fedora | sudo dnf -y install restic |
| Rocky Linux / AlmaLinux / CentOS | sudo dnf -y install epel-release && sudo dnf -y install restic |
| Arch Linux | sudo pacman -S --noconfirm restic |
| openSUSE | sudo zypper -n install restic |
| Gentoo | sudo emerge --ask=n app-backup/restic |
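Once installed, a quick check confirms restic is available:
restic version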
mkdir -p /srv/backups/pterodactyl
restic init --repo /srv/backups/pterodactyl
cat <<'EOF' > /srv/backups/backup.sh
#!/bin/bash
# restic needs the repository password you chose during "restic init"; point
# RESTIC_PASSWORD_FILE (or RESTIC_PASSWORD) at a file of your choosing so cron can run unattended
export RESTIC_PASSWORD_FILE=/root/.restic-password
docker compose -f /srv/pterodactyl/docker-compose.yml down
restic backup /var/lib/docker/volumes/pterodactyl_app/_data --repo /srv/backups/pterodactyl
# Keep at most the 30 most recent snapshots
restic forget --keep-last 30 --prune --repo /srv/backups/pterodactyl
docker compose -f /srv/pterodactyl/docker-compose.yml up -d
EOF
chmod +x /srv/backups/backup.sh
(crontab -l 2>/dev/null; echo "59 23 * * * /srv/backups/backup.sh") | crontab -
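Before trusting the schedule, it's worth doing one manual run and confirming the cron entry registered:
# Run the backup once by hand; note this briefly stops the stack
/srv/backups/backup.sh
# List the installed cron jobs to confirm the schedule
crontab -l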
Well, great. I have daily backups now, and they're set to keep at most 30 backups at a time. How can I restore from one of them?
You can list snapshots with restic snapshots --repo /srv/backups/pterodactyl
You're looking for the ID value, which looks something like 46adb587. The time is listed right next to each ID, so you can see what day each backup is from.
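If you're not sure a snapshot contains what you expect, you can peek inside it before restoring (replace 46adb587 with one of your own IDs):
# List the files recorded in a snapshot
restic ls 46adb587 --repo /srv/backups/pterodactyl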
Once you've determined which snapshot you want to restore, stop your compose stack, restore your data, and start your stack again:
docker compose -f /srv/pterodactyl/docker-compose.yml down
# Clear the directory so the restoration will be clean
rm -rf /var/lib/docker/volumes/pterodactyl_app/_data
# Remember to replace "46adb587" with the actual ID of the snapshot you want to restore.
# The snapshot stores absolute paths, so restoring with --target / puts the data back in place.
restic restore 46adb587 --repo /srv/backups/pterodactyl --target /
docker compose -f /srv/pterodactyl/docker-compose.yml up -d
- If you have set the alias we suggested earlier:
blueprint -upgrade
- If you have not:
docker compose -f /srv/pterodactyl/docker-compose.yml exec panel blueprint -upgrade
- This guide operates under the assumption that individual extension/theme authors have chosen to store any persistent data such as settings in the database. If they have not done this... there isn't any specific place extension data is meant to be stored, so the data could be anywhere. You'll need to ask them if there is any persistent data stored anywhere that you have to back up before updating.
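If you want a safety net for that database-stored data before updating, you can dump the panel database alongside your restic backups. This is only a sketch: the database service name, the panel database name, and the MYSQL_ROOT_PASSWORD variable follow the stock Pterodactyl compose file and may not match your stack.
# Dump the panel database to a file before updating (assumes stock service/database names)
docker compose -f /srv/pterodactyl/docker-compose.yml exec -T database \
  sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" panel' > /srv/backups/panel-before-update.sql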
- Go to the directory of your docker-compose.yml file
- docker compose down -v
- The -v tells it to delete any named volumes, i.e. the app volume we use. It will not delete data from bind mounts. This way the new image's app volume can take its place.
- Change the tag in your panel's image (e.g. to upgrade from v1.11.5 to v1.11.7, you would change ghcr.io/blueprintframework/blueprint:v1.11.5 to ghcr.io/blueprintframework/blueprint:v1.11.7).
- docker compose pull
- docker compose up -d
- Lastly, install your extensions again. Refer to the examples above; a consolidated sketch of the whole upgrade follows this list.
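Put together, the upgrade looks roughly like this. The sed line is just a convenience and assumes the exact tags from the example above; edit docker-compose.yml by hand if you prefer.
cd /srv/pterodactyl
docker compose down -v
# Swap the image tag; adjust both versions to match your actual upgrade
sed -i 's|blueprint:v1.11.5|blueprint:v1.11.7|' docker-compose.yml
docker compose pull
docker compose up -d
# Then reinstall your extensions, e.g. with the bulk-install.sh script described below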
- Blueprint will support installing multiple extensions at once in the future, making updates significantly easier. The syntax showcased was blueprint -i extension1 extension2 extension3. Documentation here will be updated when that comes out, but for now you'll have to install each extension again after every update. Feel free to automate this with a simple bash script:
- Create the script:
cd /srv/pterodactyl && echo -e '#!/bin/bash\n\nfor extension in "$@"\ndo\n docker compose exec panel blueprint -i "$extension"\ndone' > bulk-install.sh && chmod +x bulk-install.sh
- The script will be located in the assumed root folder for your compose stack, /srv/pterodactyl. You can use it from that folder with as many extensions as you want:
./bulk-install.sh extension1 extension2 extension3