Move docker image management and test entrypoint to Maven (#31)
* Use an nginx server for remote jars tests.

* Move all integration test setup logic to Maven and scripts.

The Kubernetes integration tests now always expect an image to be
pre-built, so we no longer build images from within the Scala test code.
Maven's pre-integration-test phase invokes a single script that bootstraps
the environment with the built images, etc. The transition preserves the
existing semantics as much as possible.

* Update documentation

* Remove unnecessary .gitignore entries

* Use $IMAGE_TAG instead of $TAG

* Don't write image tag file twice

* Remove using nginx file server

* Remove some lines

* Split building Spark for dev environment from build reactor

* Small docs fix

* Docs formatting fix

* Spark TGZ can be empty instead of N/A; throw an error if not provided.

* Remove extraneous --skip-building-docker-images flag.

* Switch back to using the N/A placeholder

* Remove extraneous code

* Remove maven args because they don't work

* Fix scripts

* Don't get Maven if it's already there

* Put quotes everywhere

* Minor formatting

* Hard set Minikube binary location.

* Run Minikube from bash -c
mccheah authored and foxish committed Jan 16, 2018
1 parent f80d1d5 commit c651127
Showing 22 changed files with 358 additions and 549 deletions.
9 changes: 7 additions & 2 deletions .gitignore
@@ -1,6 +1,11 @@
.idea/
spark/
integration-test/target/
target/
build/*.jar
build/apache-maven*
build/scala*
build/zinc*
build/run-mvn
*.class
*.log
*.iml
*.swp
159 changes: 64 additions & 95 deletions README.md
@@ -8,98 +8,67 @@ title: Spark on Kubernetes Integration Tests
Note that the integration test framework is currently being heavily revised and
is subject to change. Currently, the integration tests only run with Java 8.

As shorthand to run the tests against any given cluster, you can use the `e2e/runner.sh` script.
The script assumes that you have a functioning Kubernetes cluster (1.6+) with kubectl
configured to access it. The master URL of the currently configured cluster on your
machine can be discovered as follows:

```
$ kubectl cluster-info
Kubernetes master is running at https://xyz
```

If you want to use a local [minikube](https://github.com/kubernetes/minikube) cluster,
the minimum tested version is 0.23.0, with the kube-dns addon enabled,
and the recommended configuration is 3 CPUs and 4G of memory. There is also a wrapper
script, `e2e/e2e-minikube.sh`, specifically for testing the master branch
of the apache/spark repository on minikube; its usage is shown below.

```
$ minikube start --memory 4000 --cpus 3
```

If you're using a non-local cluster, you must provide an image repository
to which you have write access, using the `-i` option, in order to store Docker images
generated during the test.

Example usages of the script:

```
$ ./e2e/runner.sh -m https://xyz -i docker.io/foxish -d cloud
$ ./e2e/runner.sh -m https://xyz -i test -d minikube
$ ./e2e/runner.sh -m https://xyz -i test -r https://github.com/my-spark/spark -d minikube
$ ./e2e/runner.sh -m https://xyz -i test -r https://github.com/my-spark/spark -b my-branch -d minikube
```

# Detailed Documentation

## Running the tests using maven

Integration tests first require installing [Minikube](https://kubernetes.io/docs/getting-started-guides/minikube/) on
your machine, with the `minikube` binary on your `PATH`. Refer to the Minikube documentation for instructions
on how to install it. It is recommended to allocate at least 8 CPUs and 8GB of memory to the Minikube cluster.
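
For example, one way to apply that recommendation when starting Minikube (assuming
your machine has the resources to spare):

```
$ minikube start --cpus 8 --memory 8192
```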

Running the integration tests requires a Spark distribution package tarball that
contains Spark jars, submission clients, etc. You can download a tarball from
http://spark.apache.org/downloads.html, or create a distribution from the
source code using `make-distribution.sh`. For example:

```
$ git clone [email protected]:apache/spark.git
$ cd spark
$ ./dev/make-distribution.sh --tgz \
-Phadoop-2.7 -Pkubernetes -Pkinesis-asl -Phive -Phive-thriftserver
```

The above command will create a tarball like `spark-2.3.0-SNAPSHOT-bin.tgz` in the
top-level directory. For more details, see the related section in
[building-spark.md](https://github.com/apache/spark/blob/master/docs/building-spark.md#building-a-runnable-distribution).


Once you prepare the tarball, the integration tests can be executed with Maven or
your IDE. Note that when running tests from an IDE, the `pre-integration-test`
phase must be run every time the Spark main code changes. When running tests
from the command line, the `pre-integration-test` phase should automatically be
invoked if the `integration-test` phase is run.
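
For example, after changing the Spark main code, the setup can be re-run on its
own by invoking the phase directly (a sketch; `pre-integration-test` is a standard
Maven lifecycle phase, and the property matches the full command below):

```
$ mvn pre-integration-test \
  -Dspark-distro-tgz=spark/spark-2.3.0-SNAPSHOT-bin.tgz
```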

With Maven, the integration test can be run using the following command:

```
$ mvn clean integration-test \
-Dspark-distro-tgz=spark/spark-2.3.0-SNAPSHOT-bin.tgz
```

## Running against an arbitrary cluster

In order to run against any cluster, use the following:

```sh
$ mvn clean integration-test \
  -Dspark-distro-tgz=spark/spark-2.3.0-SNAPSHOT-bin.tgz \
  -DextraScalaTestArgs="-Dspark.kubernetes.test.master=k8s://https://<master>"
```

## Reuse the previous Docker images

The integration tests build a number of Docker images, which takes some time.
By default, the images are built every time the tests run. During development,
you may want to skip rebuilding those images if the distribution package has not
changed since the last run. To do so, pass the property
`spark.kubernetes.test.imageDockerTag` to the test process, specifying the
appropriate Docker image tag.
Here is an example:
```
$ mvn clean integration-test \
-Dspark-distro-tgz=spark/spark-2.3.0-SNAPSHOT-bin.tgz \
-Dspark.kubernetes.test.imageDockerTag=latest
```
The simplest way to run the integration tests is to install and run Minikube, then run the following:

    dev/dev-run-integration-tests.sh

The minimum tested version of Minikube is 0.23.0. The kube-dns addon must be enabled. Minikube should
run with a minimum of 3 CPUs and 4G of memory:

    minikube start --cpus 3 --memory 4096

You can download Minikube [here](https://github.com/kubernetes/minikube/releases).

# Integration test customization

Configuration of the integration test runtime is done by passing arguments to the test script. The most useful options are outlined below.

## Use a non-local cluster

To use your own cluster running in the cloud, set the following:

* `--deploy-mode cloud` - indicate that the tests connect to a remote cluster instead of Minikube,
* `--spark-master <master-url>` - set `<master-url>` to the externally accessible Kubernetes cluster URL,
* `--image-repo <repo>` - set `<repo>` to a write-accessible Docker image repository that provides the images for your cluster. The framework assumes your local Docker client can push to this repository.

The full command therefore looks like this:

    dev/dev-run-integration-tests.sh \
      --deploy-mode cloud \
      --spark-master https://example.com:8443/apiserver \
      --image-repo docker.example.com/spark-images

## Re-using Docker Images

By default, the test framework builds new Docker images on every test execution. A unique image tag is generated
and written to the file `target/imageTag.txt`. To reuse the images built in a previous run, or to use a Docker image tag
that you have already built by other means, pass the tag to the test script:

    dev/dev-run-integration-tests.sh --image-tag <tag>

For example, to reuse the images that were built earlier by the test framework:

    dev/dev-run-integration-tests.sh --image-tag $(cat target/imageTag.txt)

## Customizing the Spark Source Code to Test

By default, the test framework will test the master branch of Spark from [here](https://github.com/apache/spark). You
can specify the following options to test against different source versions of Spark:

* `--spark-repo <repo>` - set `<repo>` to the git or http URI of the Spark git repository to clone,
* `--spark-branch <branch>` - set `<branch>` to the branch of the repository to build.


An example:

    dev/dev-run-integration-tests.sh \
      --spark-repo https://github.com/apache-spark-on-k8s/spark \
      --spark-branch new-feature

Additionally, you can use a pre-built Spark distribution. In this case, the repository is not cloned at all, and no
source code has to be compiled.

* `--spark-tgz <path-to-tgz>` - set `<path-to-tgz>` to point to a tarball containing the Spark distribution to test.

When the tests clone and build a repository, the Spark distribution is placed at `target/spark/spark-<VERSION>.tgz`.
Reusing this tarball saves a significant amount of time if you are iterating on the development of these integration tests.
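
For example, a run against the tarball produced by an earlier clone-and-build
(substitute the actual version in the file name):

    dev/dev-run-integration-tests.sh \
      --spark-tgz target/spark/spark-<VERSION>.tgz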
27 changes: 10 additions & 17 deletions e2e/e2e-minikube.sh → build/mvn
@@ -1,5 +1,6 @@
-#!/bin/bash
+#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
@@ -14,23 +15,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

-### This script can be used to run integration tests locally on minikube.
-### Requirements: minikube v0.23+ with the DNS addon enabled, and kubectl configured to point to it.
+BUILD_DIR=$(dirname $0)

-set -ex
+MVN_RUNNER=$BUILD_DIR/run-mvn

-### Basic Validation ###
-if [ ! -d "integration-test" ]; then
-  echo "This script must be invoked from the top-level directory of the integration-tests repository"
-  usage
-  exit 1
+if [ ! -f $MVN_RUNNER ];
+then
+  curl -s --progress-bar https://raw.githubusercontent.com/apache/spark/master/build/mvn > $MVN_RUNNER
+  chmod +x $MVN_RUNNER
 fi

-# Set up config.
-master=$(kubectl cluster-info | head -n 1 | grep -oE "https?://[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}(:[0-9]+)?")
-repo="https://github.com/apache/spark"
-image_repo=test
-
-# Run tests in minikube mode.
-./e2e/runner.sh -m $master -r $repo -i $image_repo -d minikube
+source $MVN_RUNNER
100 changes: 100 additions & 0 deletions dev/dev-run-integration-tests.sh
@@ -0,0 +1,100 @@
#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

TEST_ROOT_DIR=$(git rev-parse --show-toplevel)
BRANCH="master"
SPARK_REPO="https://github.com/apache/spark"
SPARK_REPO_LOCAL_DIR="$TEST_ROOT_DIR/target/spark"
DEPLOY_MODE="minikube"
IMAGE_REPO="docker.io/kubespark"
SPARK_TGZ="N/A"
IMAGE_TAG="N/A"
SPARK_MASTER=

# Parse arguments
while (( "$#" )); do
  case $1 in
    --spark-branch)
      BRANCH="$2"
      shift
      ;;
    --spark-repo)
      SPARK_REPO="$2"
      shift
      ;;
    --image-repo)
      IMAGE_REPO="$2"
      shift
      ;;
    --image-tag)
      IMAGE_TAG="$2"
      shift
      ;;
    --deploy-mode)
      DEPLOY_MODE="$2"
      shift
      ;;
    --spark-tgz)
      SPARK_TGZ="$2"
      shift
      ;;
    *)
      break
      ;;
  esac
  shift
done

if [[ $SPARK_TGZ == "N/A" ]];
then
  echo "Cloning $SPARK_REPO into $SPARK_REPO_LOCAL_DIR and checking out $BRANCH."

  # Clone the Spark repository if needed, or fetch the requested branch into an
  # existing clone.
  if [ -d "$SPARK_REPO_LOCAL_DIR" ];
  then
    (cd $SPARK_REPO_LOCAL_DIR && git fetch origin $BRANCH);
  else
    mkdir -p $SPARK_REPO_LOCAL_DIR;
    git clone -b $BRANCH --single-branch $SPARK_REPO $SPARK_REPO_LOCAL_DIR;
  fi
  cd $SPARK_REPO_LOCAL_DIR
  git checkout -B $BRANCH origin/$BRANCH
  ./dev/make-distribution.sh --tgz -Phadoop-2.7 -Pkubernetes -DskipTests;
  SPARK_TGZ=$(find $SPARK_REPO_LOCAL_DIR -name spark-*.tgz)
  echo "Built Spark TGZ at $SPARK_TGZ."
  cd -
fi

cd $TEST_ROOT_DIR

if [ -z $SPARK_MASTER ];
then
  build/mvn integration-test \
    -Dspark.kubernetes.test.sparkTgz=$SPARK_TGZ \
    -Dspark.kubernetes.test.imageTag=$IMAGE_TAG \
    -Dspark.kubernetes.test.imageRepo=$IMAGE_REPO \
    -Dspark.kubernetes.test.deployMode=$DEPLOY_MODE;
else
  build/mvn integration-test \
    -Dspark.kubernetes.test.sparkTgz=$SPARK_TGZ \
    -Dspark.kubernetes.test.imageTag=$IMAGE_TAG \
    -Dspark.kubernetes.test.imageRepo=$IMAGE_REPO \
    -Dspark.kubernetes.test.deployMode=$DEPLOY_MODE \
    -Dspark.kubernetes.test.master=$SPARK_MASTER;
fi
39 changes: 0 additions & 39 deletions e2e/e2e-prow.sh

This file was deleted.

