Ready to use, lean and highly configurable Elasticsearch container image.
- Alpine Linux 3.8
- OpenJDK JRE 8u171
- Elasticsearch 6.4.0
- Note: the `x-pack-ml` module is forcibly disabled, as it is not supported on Alpine Linux.
- In order for `bootstrap.memory_lock` to work, `ulimit` must be allowed to run in the container. Run with `--privileged` to enable this.
- Multicast discovery is no longer built-in.
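With `--privileged` set and memory locking enabled, you can confirm that the JVM actually locked its memory; a minimal check, assuming the HTTP port is reachable at `localhost:9200`:

```sh
# Every node should report "mlockall": true.
curl -s "http://localhost:9200/_nodes?filter_path=**.mlockall&pretty"
```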
Ready to use node for cluster `elasticsearch-default`:
```sh
docker run --name elasticsearch \
  --detach \
  --privileged \
  --volume /path/to/data_folder:/data \
  quay.io/pires/docker-elasticsearch:6.4.0
```
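To verify the node came up, one option (assuming the container runs on the default bridge network and is reachable from the host) is to query the root endpoint; a minimal sketch:

```sh
# Look up the container's IP address, then query the HTTP endpoint.
ES_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' elasticsearch)
curl -s "http://${ES_IP}:9200/"
```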
Ready to use node for cluster `myclustername`:
```sh
docker run --name elasticsearch \
  --detach \
  --privileged \
  --volume /path/to/data_folder:/data \
  -e CLUSTER_NAME=myclustername \
  quay.io/pires/docker-elasticsearch:6.4.0
```
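The root endpoint echoes the effective cluster name, so the variable can be verified once the node is up; reusing the `ES_IP` lookup from above:

```sh
# "cluster_name" should read "myclustername".
curl -s "http://${ES_IP}:9200/" | grep cluster_name
```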
Ready to use node for cluster `elasticsearch-default`, with 8GB heap allocated to Elasticsearch:
```sh
docker run --name elasticsearch \
  --detach \
  --privileged \
  --volume /path/to/data_folder:/data \
  -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
  quay.io/pires/docker-elasticsearch:6.4.0
```
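To confirm the heap settings took effect, the nodes stats API exposes the configured maximum; a minimal check:

```sh
# heap_max_in_bytes should be roughly 8 GB (8589934592 bytes).
curl -s "http://${ES_IP}:9200/_nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_max_in_bytes&pretty"
```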
Ready to use node with plugins (`x-pack` and `repository-gcs`) pre-installed. Plugins that are already installed are ignored:
```sh
docker run --name elasticsearch \
  --detach \
  --privileged \
  --volume /path/to/data_folder:/data \
  -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
  -e ES_PLUGINS_INSTALL="repository-gcs,x-pack" \
  quay.io/pires/docker-elasticsearch:6.4.0
```
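Plugins are installed when the container starts, so installation can be verified once the node is up; a minimal check via the cat API:

```sh
# Lists each node's installed plugins and their versions.
curl -s "http://${ES_IP}:9200/_cat/plugins?v"
```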
Master-only node for cluster `elasticsearch-default`:
```sh
docker run --name elasticsearch \
  --detach \
  --privileged \
  --volume /path/to/data_folder:/data \
  -e NODE_DATA=false \
  -e HTTP_ENABLE=false \
  quay.io/pires/docker-elasticsearch:6.4.0
```
Data-only node for cluster `elasticsearch-default`:
```sh
docker run --name elasticsearch \
  --detach \
  --privileged \
  --volume /path/to/data_folder:/data \
  -e NODE_MASTER=false \
  -e HTTP_ENABLE=false \
  quay.io/pires/docker-elasticsearch:6.4.0
```
Data-only node for cluster `elasticsearch-default` with shard allocation awareness:
```sh
docker run --name elasticsearch \
  --detach \
  --privileged \
  --volume /path/to/data_folder:/data \
  --volume /etc/hostname:/dockerhost \
  -e NODE_MASTER=false \
  -e HTTP_ENABLE=false \
  -e SHARD_ALLOCATION_AWARENESS=dockerhostname \
  -e SHARD_ALLOCATION_AWARENESS_ATTR="/dockerhost" \
  quay.io/pires/docker-elasticsearch:6.4.0
```
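Once such a node joins, the awareness attribute should be visible on it; a minimal check against any HTTP-enabled node in the cluster:

```sh
# Each data node should list a "dockerhostname" attribute carrying the host's name.
curl -s "http://${ES_IP}:9200/_cat/nodeattrs?v"
```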
Client-only node for cluster `elasticsearch-default`:
```sh
docker run --name elasticsearch \
  --detach \
  --privileged \
  --volume /path/to/data_folder:/data \
  -e NODE_MASTER=false \
  -e NODE_DATA=false \
  quay.io/pires/docker-elasticsearch:6.4.0
```
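When mixing node types like the master-only, data-only, and client-only examples above, the cat nodes API shows which role each node ended up with (in 6.x, `m` = master-eligible, `d` = data, `i` = ingest, `-` = coordinating-only):

```sh
# node.role is a letter combination such as "mdi", "m", "d", or "-".
curl -s "http://${ES_IP}:9200/_cat/nodes?v&h=name,node.role,master"
```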
I also make special images and instructions available for AWS EC2 and Kubernetes.
This image can be configured by means of environment variables, which one can set on a Kubernetes `Deployment`, for instance.
- CLUSTER_NAME
- NODE_NAME
- NODE_MASTER
- NODE_DATA
- NETWORK_HOST
- HTTP_ENABLE
- HTTP_CORS_ENABLE
- HTTP_CORS_ALLOW_ORIGIN
- NUMBER_OF_MASTERS
- MAX_LOCAL_STORAGE_NODES
- ES_JAVA_OPTS
- ES_PLUGINS_INSTALL - comma-separated list of Elasticsearch plugins to be installed. Example: `ES_PLUGINS_INSTALL="repository-gcs,x-pack"`
- SHARD_ALLOCATION_AWARENESS
- SHARD_ALLOCATION_AWARENESS_ATTR
- MEMORY_LOCK - memory locking control - enable to prevent swap (default = `true`).
- REPO_LOCATIONS - list of registered repository locations. For example, `"/backup"` (default = `[]`). The value of REPO_LOCATIONS is automatically wrapped within `[]`, so the brackets should not be included in the variable declaration. To specify multiple repository locations, use a comma-separated string, for example `"/backup", "/backup2"`.
- PROCESSORS - allow Elasticsearch to optimize for the actual number of available CPUs (must be an integer - default = 1)
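As a combined sketch, several of these variables on a single `docker run` (the container name and values below are illustrative, not defaults):

```sh
# Hypothetical master-only node that joins "myclustername" with a fixed node name.
docker run --name es-master-a \
  --detach \
  --privileged \
  --volume /path/to/data_folder:/data \
  -e CLUSTER_NAME=myclustername \
  -e NODE_NAME=master-a \
  -e NODE_DATA=false \
  -e NUMBER_OF_MASTERS=2 \
  -e MEMORY_LOCK=true \
  quay.io/pires/docker-elasticsearch:6.4.0
```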
Mount a shared folder (for example, via NFS) to `/backup` and make sure the `elasticsearch` user has write access. Then, set the `REPO_LOCATIONS` environment variable to `"/backup"` and create a backup repository:

`backup_repository.json`:
```json
{
  "type": "fs",
  "settings": {
    "location": "/backup",
    "compress": true
  }
}
```
```sh
curl -XPOST -H "Content-Type: application/json" http://<container_ip>:9200/_snapshot/nas_repository -d @backup_repository.json
```
Now, you can take snapshots using:
```sh
curl -f -XPUT "http://<container_ip>:9200/_snapshot/nas_repository/snapshot_`date --utc +%Y_%m_%dt%H_%M`?wait_for_completion=true"
```
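To inspect and restore snapshots afterwards, the standard snapshot APIs apply; a sketch with an illustrative snapshot name:

```sh
# List all snapshots registered in the repository.
curl -s "http://<container_ip>:9200/_snapshot/nas_repository/_all?pretty"

# Restore a snapshot picked from the listing (the name below is illustrative).
curl -XPOST "http://<container_ip>:9200/_snapshot/nas_repository/snapshot_2018_09_01t12_00/_restore"
```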