From 11cf0190bef5a676fd5972130d7d55fbe26e8fb5 Mon Sep 17 00:00:00 2001
From: Fabio Burzigotti
Date: Tue, 7 Jan 2025 16:25:54 +0100
Subject: [PATCH] Update documentation for 2.0.0.Alpha release

---
 README.adoc                |  7 ++--
 docs/bom.adoc              |  2 +-
 docs/build-containers.adoc | 16 ++++----
 docs/compose.adoc          |  6 +--
 docs/configuration.adoc    | 66 +++++++++++++-----------------
 docs/cop.adoc              |  2 +-
 docs/drone.adoc            |  4 +-
 docs/enrichers.adoc        | 29 ++------------
 docs/example.adoc          |  2 +-
 docs/kubernetes.adoc       | 82 +++++++++++++++++++-------------------
 docs/parallel.adoc         | 40 ++++++++++++-------
 docs/polyglot.adoc         | 13 +++---
 docs/preliminaries.adoc    | 14 ++++---
 docs/reports.adoc          | 33 +++++++++------
 docs/requirements.adoc     | 67 ++++++++++++++++---------------
 docs/restassured.adoc      | 32 +++++++++------
 docs/what-is-this.adoc     | 14 +++++--
 17 files changed, 220 insertions(+), 209 deletions(-)

diff --git a/README.adoc b/README.adoc
index 11f5bf6d2..3b9e005d3 100644
--- a/README.adoc
+++ b/README.adoc
@@ -10,9 +10,10 @@
 :icons: font
 :toc: left

-WARNING: 1.0.0.Alpha7 breaks incompatibility with previous versions in some cases. The major difference is that instead of using the _boot2docker_ keyword to refer to the auto resolved boot2docker ip in the _serverUri_ parameter, you should now used _dockerHost_.
-
-IMPORTANT: 1.0.0.Alpha13 changes default format from Cube to Docker Compose. In case you are using Cube format you need to update arquillian.xml with `CUBE`
+WARNING: 2.0.0.Alpha1 breaks compatibility with previous versions, as _boot2docker_ and _DockerMachine_ support has
+been removed. This means that support for Windows and macOS is removed from 2.x.
+We're in the process of evaluating whether to add such support to the 2.x stream again, or to adopt
+different solutions. In such a case we'd probably release a 1.x version, which would still support such components.

 ifndef::generated-doc[]
 To read complete documentation visit http://arquillian.org/arquillian-cube/

diff --git a/docs/bom.adoc b/docs/bom.adoc
index 9e3c1cb74..4399146ba 100644
--- a/docs/bom.adoc
+++ b/docs/bom.adoc
@@ -1,6 +1,6 @@
 == Arquillian Cube BOM - Unified Dependencies

-This aims to fulfill requirements of unify naming & versions.
+This aims to unify dependency naming & versions.

 === Usage

diff --git a/docs/build-containers.adoc b/docs/build-containers.adoc
index c4e6c6c55..38a6bcd7c 100644
--- a/docs/build-containers.adoc
+++ b/docs/build-containers.adoc
@@ -1,9 +1,9 @@
 == Building containers

-To build a container _Docker_ uses a file called +Dockerfile+ http://docs.docker.com/reference/builder/.
-*Arquillian Cube* also supports building and running a container from a +Dockerfile+.
+To build a container _Docker_ uses a file called `Dockerfile` (see http://docs.docker.com/reference/builder/).
+*Arquillian Cube* also supports building and running a container from a `Dockerfile`.

-To set that *Arquillian Cube* must build the container, the +image+ property must be changed to +buildImage+ and add the location of +Dockerfile+.
+To make *Arquillian Cube* build the container, the `image` property must be replaced with `buildImage`, adding the location of the `Dockerfile`.

 Let's see previous example but instead of creating a container from a predefined image, we are going to build one:

@@ -23,15 +23,15 @@ Let's see previous example but instead of creating a container from a predefined
     portBindings: [8089/tcp, 8080/tcp]
 ----
-<1> +buildImage+ section is used in front of +image+.
In case of both sections present in a document, +image+ section has preference over +buildImage+. -<2> +dockerfileLocation+ contains the location of +Dockerfile+ and all files required to build the container. +<1> `buildImage` section is used in front of `image`. In case of both sections present in a document, `image` section has preference over `buildImage`. +<2> `dockerfileLocation` contains the location of `Dockerfile` and all files required to build the container. <3> Property to enable or disable the no cache attribute. <4> Property to enable or disable the remove attribute. <5> Property to set the dockerfile name to be used instead of the default ones. -TIP: +dockerfileLocation+ can be a directory that must contains +Dockerfile+ in root directory (in case you don't set _dockerfileName_ property), also a +tar.gz+ file or a _URL_ pointing to a +tar.gz+ file. +TIP: `dockerfileLocation` can be a directory that must contains `Dockerfile` in root directory (in case you don't set _dockerfileName_ property), also a +tar.gz+ file or a _URL_ pointing to a +tar.gz+ file. -An example of +Dockerfile+ is: +An example of `Dockerfile` is: [source, properties] .src/test/resources/tomcat/Dockerfile @@ -43,4 +43,4 @@ ADD tomcat-users.xml /tomcat/conf/ # <1> EXPOSE 8089 CMD ["/tomcat/bin/catalina.sh","run"] ---- -<1> +tomcat-users.xml+ file is located at same directory as +Dockerfile+. +<1> +tomcat-users.xml+ file is located at same directory as `Dockerfile`. diff --git a/docs/compose.adoc b/docs/compose.adoc index f820ec11b..2eb850290 100644 --- a/docs/compose.adoc +++ b/docs/compose.adoc @@ -7,11 +7,11 @@ It is important to note that this is not a docker-compose implementation but onl In case of some specific Arquillian Cube attributes like await strategy cannot be configured and the default values are going to be used. -Moreover there are some docker-compose commands that are not implemented yet due to restrictions on docker-java library. These commands are _pid_, _log_driver_ and _security_opt_. But they will be implemented as soon as docker-java library adds their support. +Moreover, there are some docker-compose commands that are not implemented yet due to restrictions on docker-java library. These commands are _pid_, _log_driver_ and _security_opt_. But they will be implemented as soon as docker-java library adds their support. -Last thing, in case you define a command that is not implemented in Arquillian Cube, this command will be ignored (no exception will be thrown), but a log line will be printed notifying this situation. Please it is really important that if this happens you open a bug so we can add support for them. Althought this warning we will try to maintain aligned with the latest docker-compose format. +Last thing, in case you define a command that is not implemented in Arquillian Cube, this command will be ignored (no exception will be thrown), but a log line will be printed notifying this situation. Please it is really important that if this happens you open a bug so we can add support for them. Although this warning we will try to maintain aligned with the latest docker-compose format. -Let's see how you can rewrite previous HelloWorld example with Tomcat to be used using docker-compose format. +Let's see how you can rewrite previous HelloWorld example with Tomcat, using docker-compose format. 
First let's create a file called `envs` on root of the project which configures environment variables: diff --git a/docs/configuration.adoc b/docs/configuration.adoc index d40c3e444..97c45d79e 100644 --- a/docs/configuration.adoc +++ b/docs/configuration.adoc @@ -1,7 +1,7 @@ == Configuration -*Arquillian Cube* requires some parameters to be configured, some related with _Docker_ server and others related on the image that is being used. -Let's see valid attributes: +*Arquillian Cube* requires some parameters to be configured, some related with _Docker_ server and others related to the image that is being used. +The following table summarizes the attributes that are currently supported: [cols="2*"] |=== @@ -9,7 +9,7 @@ Let's see valid attributes: |Version of REST API provided by _Docker_ server. You should check on the _Docker_ site which version of REST API is shipped inside installed _Docker_ service. This field is not mandatory and if it's not set the default provided version from _docker-java_ will be used. |serverUri -|Uri of _Docker_ server. If the _Docker_ server is running natively on Linux then this will be an URI pointing to _localhost_ docker host but if you are using _Boot2Docker_ or a remote _Docker_ server then the URI should be changed to point to the _Docker_ remote _URI_. It can be a unix socket URI as well in case you are running _Docker_ on Linux (+unix:///var/run/docker.sock+). If the URI has `http://` or `https://` scheme, the `tlsVerify` attribute will be set by Cube to `false` or `true` respectively. Also you can read at <> about automatic resolution of serverUri parameter. Also you can use `DOCKER_HOST` java property or system environment to set this parameter. +|Uri of _Docker_ server. It can be a unix socket URI as well in case you are running _Docker_ on Linux (+unix:///var/run/docker.sock+). If the URI has `http://` or `https://` scheme, the `tlsVerify` attribute will be set by Cube to `false` or `true` respectively. You can read at <> about automatic resolution of serverUri parameter. Also, you can use `DOCKER_HOST` java property or system environment to set this parameter. |dockerRegistry |Sets the location of Docker registry. Default value is the official _Docker_ registry located at https://registry.hub.docker.com @@ -50,43 +50,39 @@ Let's see valid attributes: |certPath |Path where certificates are stored. If you are not using _https_ protocol this parameter is not required. This parameter accepts starting with ~ as home directory. -|boot2dockerPath -|Sets the full location (and program name) of _boot2docker_. For example +/opt/boot2dockerhome/boot2docker+. - -|dockerMachinePath -|Sets the full location (and program name) of _docker-machine_. For example +/opt/dockermachinehome/docker-machine+. - |machineName |Sets the machine name in case you are using docker-machine to manage your docker host. This parameter is mandatory when using docker-machine with more than one running machine. In case of having only one docker machine running, it is not necessary to set it since it is auto resolved by cube. |machineDriver -|Sets the machine driver in case you are using _docker-machine_, _Cube_ will create a machine using this driver. This parameter is mandatory when docker-machine is not installed. +|Sets the machine driver in case you are using _docker-machine_. This parameter is mandatory when docker-machine is not installed. |dockerMachineCustomPath |Sets the custom location where _docker-machine_ will be downloaded. Default value: ~/.arquillian/machine. 
|dockerInsideDockerResolution -|Boolean to set if Cube should detect that tests are run inside an already started Docker container, so Docker containers started by Cube could be run using DinD (Docker Inside Docker) or DoD (docker On Docker). Basically it ignores any `SERVER_URI`, `Boot2Docker` or `docker-machine` properties and sets the `serverUri` to `unix:///var/run/docker.sock`. By default its value is true. If you want to use an external dockerhost, then you need to set this property to false. +|Boolean to set if Cube should detect that tests are run inside an already started Docker container, so Docker containers started by Cube could be run using DinD (Docker Inside Docker) or DoD (docker On Docker). Basically it ignores any `SERVER_URI` properties and sets the `serverUri` to `unix:///var/run/docker.sock`. By default, its value is set to true. If you want to use an external dockerhost, then you need to set this property to false. |clean -|Sometimes you might left some container running inside your docker host with the same name as one defined for Cube test. At these cases Arquillian Cube (actually Docker) complains of a conflict of trying to create a container name that it is already running. If you want that Cube automatically removes these containers you can set this property to true. By default is false. +|Sometimes you might have left some container running inside your docker host with the same name as one defined for Cube test. When this happens, Arquillian Cube (actually Docker) complains about a conflict, i.e. trying to create a container using a name that exists already. If you want for Cube to automatically remove such containers, then you can set this property to true. By default, it is set to false. |removeVolumes -|Boolean to set if Cube should also remove the volumes associated with a container when removing the container. By default is true. Can be overwritten on container level. +|Boolean, whether Cube should also remove the volumes associated with a container when removing the container. By default, +it is set to true. Can be overwritten on container level. |cleanBuildImage -|Boolean to set if you set to true all images built by cube are removed and if false no built images are removed. If image is not built by cube it should not be removed. By default is true. +|Boolean to set if you set to true all images built by cube are removed and if false no built images are removed. If image is not built by cube it should not be removed. By default, it is set true. |connectionMode -|Connection Mode to bypass the Create/Start Cube commands if the a Docker Container with the same name is already running on the target system. This parameter can receive three possible values. _STARTANDSTOP_ which is the default one if not set any and simply creates and stops all Docker Containers. If a container is already running, an exception is thrown. _STARTORCONNECT_ mode tries to bypass the Create/Start Cube commands if a container with the same name is already running, and if it is the case doesn’t stop it at the end. But if container is not already running, Cube will start one and stop it at the end of the execution. And last mode is _STARTORCONNECTANDLEAVE_ which is exactly the same of _STARTORCONNECT_ but if container is started by Cube it won’t be stopped at the end of the execution so it can be reused in next executions. *This is a Cube property, not a Docker one*, thus it should be inside a tag. See link:#allow-connecting-to-a-running-container[here] for an example. 
+|Connection Mode to bypass the Create/Start Cube commands if the a Docker Container with the same name is already running on the target system. This parameter can receive three possible values. _STARTANDSTOP_ which is the default one if not set any and simply creates and stops all Docker Containers. If a container is already running, an exception is thrown. _STARTORCONNECT_ mode tries to bypass the Create/Start Cube commands if a container with the same name is already running, and if it is the case doesn’t stop it at the end. But if container is not already running, Cube will start one and stop it at the end of the execution. And last mode is _STARTORCONNECTANDLEAVE_ which is exactly the same of _STARTORCONNECT_ but if container is started by Cube it won’t be stopped at the end of the execution, so it can be reused in next executions. *This is a Cube property, not a Docker one*, thus it should be inside a tag. See link:#_allow_connecting_to_a_running_container[here] for an example. |ignoreContainersDefinition |If you set to true then Arquillian Cube will ignore definitions set in `dockerContainers`, `dockerContainersFile` and `dockerContainersFiles` as well as default locations. By default is set to false. |=== -Some of these properties can be provided by using standard Docker system environment variables so you can set once and use them in your tests too. -Moreover you can set as Java system properties (-D...) as well. +Some of these properties can be provided by using standard Docker system environment variables, so that you can set them +once and then use them in your tests too. Additionally, you can set such configuration as Java system properties (-D...) +as well. [cols="2*"] |=== @@ -176,17 +172,17 @@ tomcat: # <1> killContainer: true # <11> alias: tomcat1 # <12> ---- -<1> The name that are going to be assign to running container. It is *mandatory*. +<1> The name that is going to be assigned to the running container. It is *mandatory*. <2> The name of the image to be used. It is *mandatory*. If the image has not already been pulled by the _Docker_ server, *Arquillian Cube* will pull it for you. If you want to always pull latest image before container is created, you can configure *alwaysPull: true*. <3> Sets exposed ports of the running container. It should follow the format _port number_ slash(/) and _protocol (udp or tcp). Note that it is a list and it is not mandatory. <4> After a container is started, it starts booting up the defined services/commands. Depending on the nature of service, the lifecycle of these services are linked to start up or not. For example Tomcat, Wildlfy, TomEE and in general all Java servers must be started in foreground and this means that from the point of view of the client, the container never finishes to start. But on the other side other services like Redis are started in background and when the container is started you can be sure that Redis server is there. To avoid executing tests before the services are ready, you can set which await strategy should be used from *Arquillian Cube* side to accept that _Docker_ container and all its defined services are up and ready. It is not mandatory and by default polling with _ss_ command strategy is used. -<5> In +strategy+ you set which strategy you want to follow. Currently three strategies are supported. _static_, _native_ and _polling_. +<5> In +strategy+ you set which strategy you want to follow. Currently, three strategies are supported. _static_, _native_ and _polling_. 
<6> You can pass environment variables by using `env`. In this section you can set special `dockerServerIp` string which at runtime will be replaced by _Cube_ to current docker server ip. <7> After the container is up, a list of commands can be executed within it. <8> Port forwarding is configured using `portBinding` section. It contains a list of `exposedPort` and `port` separated by arrow (_->_). If only one port is provided, *Arquillian Cube* will expose the same port number. In this example the exposed port 8089 is mapped to 8089 and exposed port 8080 is mapped to 8081. <9> You can extend another configuration. Any top level element and it's children from the target container-id will be copied over to this configuration, unless they have been defined here already. <10> You can use `manual` to indicate that this container is going to be started or in the test manually using `CubeController` or started by an extension. This attribute is ingorned in case of arquillian containers (none autostart containers) or in case of a linked container that comes from a none manual container. -<11> Kills the container instead of stopping it normally. By default is false so containers are stopped. +<11> Kills the container instead of stopping it normally. By default, it is set to false, so that containers are stopped. <12> Alternate hostname for use with the builtin DNS for https://docs.docker.com/engine/userguide/networking/#user-defined-networks[docker'suser defined networks]. As we've seen in the basic example the definition of the Arquillian Cube scenarios are described in `dockerContainers` property. @@ -262,10 +258,10 @@ native:: it uses *wait* command. In this case current thread is waiting until th polling:: in this case a polling (with _ping_ or _ss_ command) is executed for 5 seconds against all exposed ports. When communication to all exposed ports is acknowledged, the container is considered to be up. This approach is the one to be used in case of services started in foreground. By default _polling_ executes _ss_ command inside the running container to know if the server is already running. Also you can use a _ping_ strategy from client by setting +type+ attribute to +ping+; Note that _ping_ only works if you are running _Docker_ daemon on +localhost+. You can also use `wait-for-it` script which is automatically downloaded, copied inside container and executed inside it. To do it you need to set `type` property to `waitforit`. In almost all cases the default behaviour matches all scenarios. If it is not specified, this is the default strategy. -By default if you use _ss_ strategy but ss command is not installed into the container it fallsback automatically to waitforit strategy. -static:: similar to _polling_ but it uses the host ip and specified list of ports provided as configuration parameter. This can be used in case of using _Boot2Docker_. +By default, if you use _ss_ strategy but ss command is not installed into the container it fallsback automatically to waitforit strategy. +static:: similar to _polling_, but it uses the host ip and specified list of ports provided as configuration parameter. sleeping:: sleeps current thread for the specified amount of time. You can specify the time in seconds or milliseconds. -log:: it looking for a specified pattern in container log to detect service startup. This can be used when there is no port to connect or connecting to the port successfully doesn't mean the service is fully initialized. 
+log:: looks for a specified pattern in container log to detect service startup. This can be used when there is no port to connect or connecting to the port successfully doesn't mean the service is fully initialized. http:: polls through a configured http endpoint checking for http response code and optionally the answer content or headers. docker_health:: polls the docker API to wait for the container to match the docker healthy definition (see: link:https://docs.docker.com/engine/reference/builder/#healthcheck[here]). :: if you specify a fully qualified class name, Arquillian Cube will instantiate the given class. In this way you can implement your own await strategies. There are two rules to follow, the first one is that class must implement `AwaitStrategy` and the second one is that one default constructor must be provided. Optionally you can add fields/setters for types `Cube`, `DockerClientExecutor` or `Await` to inject them into the await strategy. @@ -359,8 +355,8 @@ tomcat: <1> Parameter to configure the pattern that signals the service returned correctly value. To use regular expression just prefix the pattern with `regexp:`. <2> Optional parameter to set which response http code is the expected one from service. Default is 200. <3> Mandatory parameter that sets the url where to connect. `dockerHost` is substituted by Cube to Docker Host. -<4> Optional parameter to configure sleeping time between each call in case of fail. You can set in seconds using _s_ or miliseconds using _ms_. By default time unit is miliseconds and value 500. -<5> Optional parameter to configure number of retries to be done. By default 10 iterations are done. +<4> Optional parameter to configure sleeping time between each call in case of fail. You can set in seconds using _s_ or miliseconds using _ms_. By default, time unit is set to milliseconds, and value to 500. +<5> Optional parameter to configure number of retries to be done. By default, 10 iterations are executed. <6> Optional parameter to check header's value returned by service. [source, yaml] @@ -376,7 +372,7 @@ tomcat: command: ["curl", "localhost:8089"] # <3> ---- <1> Optional parameter to configure number of retries to be done. By default 10 iterations are done. -<2> Optional parameter to configure sleeping time between each call in case of fail. You can set in seconds using _s_ or miliseconds using _ms_. By default time unit is miliseconds and value 500. +<2> Optional parameter to configure sleeping time between each call in case of fail. You can set in seconds using _s_ or miliseconds using _ms_. By default, time unit is set to milliseconds, and value is set to 500. <3> Optional parameter to configure a command line to execute inside the container instead of using the docker API to get container health. Custom Await strategy: @@ -428,7 +424,7 @@ For example in case of Tomcat, exposed port is opened when the application is de To avoid this problem and continue using default `await` strategy you can annotate your test class with `@HealthCheck` annotation. -By default annotating your test class with it, next default parameters are used: +By annotating your test class with it, the following default parameters are used: ---- context: / @@ -451,11 +447,11 @@ TIP: If `containerName` is set to null `port` attribute is used, otherwise `port ==== `@Sleep` annotation Sometimes you need to sleep your execution for some specific amount of time and you have no way to do it using an http health check. -In this situations a sleep might do the work. 
+In these situations, a sleep might do the work. To avoid this problem and continue using default `await` strategy you can annotate your test class with `@Sleep` annotation which receives as value an string that represents a timespan. -By default the time specified is in milliseconds so annotating the test class with `@Sleep("1000")` makes your test class sleeps 1 second before executing all test methods. +By default, the time specified is in milliseconds so annotating the test class with `@Sleep("1000")` makes your test class sleeps 1 second before executing all test methods. You can also use the timespan format and write something like `@Sleep("1m30s")` which makes your test class sleeps for one minute and a half before executing all test methods. === Inferring exposedPorts from portBinding @@ -469,8 +465,8 @@ For this reason in *Arquillian Cube* you can use +portBinding+ and it will autom In next example we are only setting +portBinding+ and *Arquillian Cube* will instruct _Docker_ to expose port 8080 and of course bind the port 8080 so it can be accessible from outside. -[source, xml] -.arquillian.xml +[source, yaml] +.arquillian.xml (fragment) ---- daytime: buildImage: @@ -555,7 +551,7 @@ the custom beforeStop action In case of +log+ command the standard output and the error output are returned. -+log+ _Docker_ command can receive some configuration paramters and you can set them too in configuration file. ++log+ _Docker_ command can receive some configuration paramters, and you can set them too in configuration file. [source, yaml] .Example of log parameters @@ -641,12 +637,6 @@ This parameter is not mandatory and in case you don't set it, _Arquillian Cube_ |Linux |unix:///var/run/docker.sock -|Windows -|tcp://dockerHost:2376 - -|MacOS -|tcp://dockerHost:2376 - |Docker Machine |tcp://dockerHost:2376 |=== diff --git a/docs/cop.adoc b/docs/cop.adoc index dfd829876..fde4342e2 100644 --- a/docs/cop.adoc +++ b/docs/cop.adoc @@ -158,7 +158,7 @@ public class PingPongContainer { As part of Arquillian Cube, we are providing a `org.arquillian.cube.impl.shrinkwrap.asset.CacheUrlAsset` asset. This asset is similar to `org.jboss.shrinkwrap.api.asset.UrlAsset`, but the former caches to disk for an amount of time the content that has been downloaded from the URL. -By default this expiration time is 1 hour, and it is configurable by using the proper constructor. +By default, this expiration time is 1 hour, and it is configurable by using the proper constructor. ==== Links diff --git a/docs/drone.adoc b/docs/drone.adoc index 5e88bcb0a..6d34f36f8 100644 --- a/docs/drone.adoc +++ b/docs/drone.adoc @@ -167,7 +167,7 @@ But in case of using *Standalone* mode, since it doesn't know anything from depl ---- <1> Base URL of WebDriver -The problem is that in case of using Docker Cube (and more specifically docker-machine/boot2docker) is that probably you don't know the docker host at configuration time but in runtime. +The problem is that in case of using Docker Cube is that probably you don't know the docker host at configuration time but in runtime. And this is where Docker Cube can help you when using *Standalone* mode. 
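As a rough illustration of what this looks like in practice, the sketch below builds the browser URL at runtime from values injected by Cube instead of hardcoding the docker host in `arquillian.xml`. It uses the `@HostIp` and `@HostPort` enrichers covered in the enrichers chapter; the exact import packages, the container name and the port are assumptions made only for this example.

[source, java]
----
import org.arquillian.cube.HostIp;                         // package assumed
import org.arquillian.cube.HostPort;                       // package assumed
import org.jboss.arquillian.drone.api.annotation.Drone;
import org.jboss.arquillian.junit.Arquillian;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.openqa.selenium.WebDriver;

@RunWith(Arquillian.class)
public class HomePageIT {

    @Drone
    WebDriver driver;                                      // browser managed by Arquillian Drone

    @HostIp
    String dockerHost;                                     // docker host ip resolved by Cube at runtime

    @HostPort(containerName = "tomcat", value = 8080)
    int port;                                              // bound port of the container under test

    @Test
    public void should_open_home_page() {
        // Compose the base URL at runtime instead of configuring it statically.
        driver.get("http://" + dockerHost + ":" + port + "/");
        // ... assertions on the page content would go here
    }
}
----

The next section describes the configuration-based support that Cube offers for this scenario.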
==== URL configuration in Standalone mode @@ -231,4 +231,4 @@ Apart from adding `arquillian`, `arquillian-drone`, `selenium-bom` and `arquilli You can see the same example we used in Drone but using Graphene at https://github.com/arquillian/arquillian-cube/tree/main/docker/ftest-graphene -Also you can learn about Graphene at http://arquillian.org/guides/functional_testing_using_graphene/ +Also, you can learn about Graphene at http://arquillian.org/guides/functional_testing_using_graphene/ diff --git a/docs/enrichers.adoc b/docs/enrichers.adoc index bc5b3adb3..409bc08a5 100644 --- a/docs/enrichers.adoc +++ b/docs/enrichers.adoc @@ -113,8 +113,8 @@ URL will be `http://:/` If you are running your tests on your continuous integration/delivery server (for example on Jenkins or GitLab runners) and at the same time the server is running inside Docker. Then the docker containers started for Cube are run inside a Docker container. So you effectively face the Docker inside Docker problem - DockerHost is **not** the machine where your test is running. -From Arquillian Cube perspective we cannot do a lot of things, more then adapting to this situation by changing the `serverUri`. -Basically it ignores any `SERVER_URI`, `Boot2Docker` or `docker-machine` properties and sets the `serverUri` to `unix:///var/run/docker.sock`. +From Arquillian Cube perspective we cannot do a lot of things, more than adapting to this situation by changing the `serverUri`. +Basically it ignores any `SERVER_URI`, properties and sets the `serverUri` to `unix:///var/run/docker.sock`. You can avoid this behaviour by setting `dockerInsideDockerResolution` to false. @@ -330,7 +330,7 @@ NOTE: Automapping only works in case you want to change the default server port === DockerServerIp and Containers -If you are using a remote docker server (not on _localhost_) or for example _boot2docker_ you may want to set that ip to Arquillian remote adapter configuration so it can deploy the archive under test. +If you are using a remote docker server (not on _localhost_) you may want to set that ip to Arquillian remote adapter configuration so it can deploy the archive under test. In these cases you can hardcode this ip to Arquillian container adapter configuration or you can use the special tag +dockerServerIp+. At runtime these tag will be replaced by _Arquillian Cube_ to docker server ip configured in +serverUri+ parameter. This replacement only works in properties that contains the string +host+ or +address+ in property name. @@ -357,28 +357,7 @@ So for example: The +host+ property will be replaced automatically to +192.168.0.2+. -NOTE: This also works in case you set +serverUri+ using +boot2docker+ special word or by using the defaults. Read more about it <> and <>. - -In case of using _unix_ socket +dockerServerUri+ is replaced to _localhost_. - -Also _Arquillian Cube_ can help you in another way inferring +boot2docker+ ip. -In case you are running in _MACOS_ or _Windows_ with +boot2docker+, you may not need to set host property at all nor using +dockerServerIp+. -_Arquillian Cube_ will inspect any property in configuration class that contains the word _address_ or _host_ that it is not overriden in `arquillian.xml` and it will set the +boot2docker+ server automatically. - -So previous example could be modified to: - -[source.xml] -.arquillian.xml ----- - - configuration> - admin - mypass - - ----- - -And in case you are running on _Windows_ or _MacOS_, `host`property will be automatically set to the +boot2docker +_ip_. 
+In case of using _unix_ socket +dockerServerUri+ is replaced by _localhost_. === System Properties Injection diff --git a/docs/example.adoc b/docs/example.adoc index 8f8914740..de8d0e05c 100644 --- a/docs/example.adoc +++ b/docs/example.adoc @@ -116,7 +116,7 @@ And finally we need to configure _Tomcat_ remote adapter and *Arquillian Cube* i ---- <1> *Arquillian Cube* extension is registered. <2> _Docker_ server version is required. -<3> _Docker_ server URI is required. In case you are using a remote _Docker_ host or _Boot2Docker_ here you need to set the remote host ip, but in this case _Docker_ server is on same machine. +<3> _Docker_ server URI is required. <4> A _Docker_ container contains a lot of parameters that can be configured. To avoid having to create one XML property for each one, a YAML content can be embedded directly as property. <5> Configuration of _Tomcat_ remote adapter. Cube will start the _Docker_ container when it is ran in the same context as an _Arquillian_ container with the same name. <6> Host can be _localhost_ because there is a port forwarding between container and _Docker_ server. diff --git a/docs/kubernetes.adoc b/docs/kubernetes.adoc index b4ae9ea81..1858ba625 100644 --- a/docs/kubernetes.adoc +++ b/docs/kubernetes.adoc @@ -6,13 +6,13 @@ The kubernetes extension helps you write and run integration tests for your Kube This extension will create and manage one temporary namespace for your tests, apply all Kubernetes resources required to create your environment and once everything is ready it will run your tests. The tests will be enriched with resources -required to access services. Finally when testing is over it will cleanup everything. +required to access services. Finally, when testing is over it will clean everything up. -In addition to the main testing namespace additional secondary namespaces could be used during testing. Cube +In addition to the main testing namespace, additional secondary namespaces could be used during testing. Arquillian Cube would not modify them, but tests could be enriched with resources from secondary namespaces to access services in them in case you need to verify changes made by services you are testing. -This extension will neither mutate your containers *(by deploying, reconfiguring etc)* nor your Kubernetes resources +This extension will neither mutate your containers *(by deploying, reconfiguring, etc.)* nor your Kubernetes resources and takes a black box approach to testing. === Modules @@ -25,7 +25,7 @@ The main modules of this extension are the following: === Features - Hybrid *(in or out of Kubernetes/Openshift)* - Advanced namespace management -- Dependency management *(for maven based projects)* +- Dependency management *(for Maven based projects)* - Auto align with Docker Registry - Enrichers for: ** Kubernetes/Openshift client @@ -37,8 +37,8 @@ The main modules of this extension are the following: - "Bring your own client" support === Pre-requisites -- To use kubernetes extension, your host should have running kubernetes cluster. -- To use openshift extension, your host should have running openshift cluster. +- To use kubernetes extension, your host should provide a running kubernetes cluster. +- To use openshift extension, your host should provide a running openshift cluster. 
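Before diving into the setup, the following sketch gives a first taste of the client and session enrichers listed above; a fuller example is shown later in this chapter. The fabric8 client calls are standard, while the `Session` package and the concrete assertion are assumptions used only for illustration.

[source, java]
----
import static org.junit.Assert.assertFalse;

import io.fabric8.kubernetes.client.KubernetesClient;
import org.arquillian.cube.kubernetes.api.Session;         // package assumed
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.test.api.ArquillianResource;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class PodsInTestNamespaceIT {

    @ArquillianResource
    KubernetesClient client;                               // client already pointing at the cluster under test

    @ArquillianResource
    Session session;                                       // gives access to the temporary testing namespace

    @Test
    public void should_find_pods_in_testing_namespace() {
        // The extension has already applied the environment resources, so the
        // testing namespace is expected to contain at least one pod at this point.
        assertFalse(client.pods()
            .inNamespace(session.getNamespace())
            .list()
            .getItems()
            .isEmpty());
    }
}
----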
=== Setup @@ -48,11 +48,11 @@ To use OpenShift extension you need to register next dependency in your build to === Configuring the extension -The plugin can be configured using the traditional arquillian.xml, via system properties or environment variables +The plugin can be configured using the traditional `arquillian.xml`, via system properties or environment variables (in that particular order). -Which means that for every supported configuration parameter, the arquillian.xml will be looked up first, if it doesn't -contain an entry, the system properties will be used. -If no result has been found so far the environment variables will be used. +This means that for every supported configuration parameter, the `arquillian.xml` file will be looked up first, and if +it doesn't contain an entry, then the system properties will be used. +If no result has been found so far, the environment variables will be used. **Note:** When checking for environment variables, property names will get capitalized, and symbols like "." will be converted to "_". @@ -60,7 +60,7 @@ For example **foo.bar.baz** will be converted to **FOO_BAR_BAZ**. ==== Kubernetes Configuration Parameters -You can configure Kubernetes by using any of the next configuration properties in `arquillian.xml`. +You can configure Kubernetes by using any of the following configuration properties in `arquillian.xml`. [source, xml] .src/test/resources/arquillian.xml @@ -124,7 +124,8 @@ will also be captured if this flag is enabled. Filenames will end with `-KUBE_EV ==== Openshift Configuration Parameters When using OpenShift you can use `arquillian.xml` to configure ANY of the configuration properties introduced at <> mixed with some specific configuration parameters related to OpenShift. -In cas of using OpenShift, then you need to use `openshift` qualifier instead of `kubernetes`, but as noticed in previous paragraph you can use it to set any Kubernetes configuration parameters as well. +For OpenShift, you need to use the `openshift` qualifier instead of `kubernetes`, but as noticed in previous paragraph +you can use it to set any Kubernetes configuration parameters as well. [source, xml] .src/test/resources/arquillian.xml @@ -154,39 +155,34 @@ the route must respond successfully to be considered available; useful in enviro ==== Openshift DNS Naming Service -The OpenShift module provides a easy way to run tests against your public application's route. -The Arquillian Naming Service allows you to run tests annotated with @RunsAsClient without have to add the routes -manually to your /etc/hosts to make its name resolvable. The arquillian Cube generates a custom namespaces prefix +The OpenShift module provides an easy way to run tests against your public application's route. +The Arquillian Naming Service allows you to run tests annotated with @RunsAsClient without adding the routes +manually to your /etc/hosts to make the host name resolvable. The arquillian Cube generates a custom namespaces prefix that will be used to define the application's route when running your tests against an OpenShift instance, even if you specify a namespace manually it will be transparent and the application's endpoint will be resolvable within your java tests. -To use it, you need to setup your tests to use the ArquillianNameService, you can either configure it inside your test -or by setting a System properties. 
- -Configuring inside a test class: +To use it, you need to set up your tests to use the ArquillianNameService, which you must install via the +`INameService.install(new ArqCubeNameService())` call, as in the following example: [source, java] .SomethingCoolTest.java ---- @Before -public void prepareEnv(){ - System.setProperty("sun.net.spi.nameservice.provider.1", "dns,ArquillianCubeNameService"); - System.setProperty("sun.net.spi.nameservice.provider.2","default"); +public void prepareEnv() throws NoSuchFieldException, ClassNotFoundException, IllegalAccessException { + INameService.install(new ArqCubeNameService()); } ---- -Or just setting the following System Properties: -`-Dsun.net.spi.nameservice.provider.1=dns,ArquillianCubeNameService -Dsun.net.spi.nameservice.provider.2=default` - ==== OpenShift Annotations -OpenShift extension comes with some annotations that let you define resources at test level instead of globally. +The OpenShift extension comes with some annotations that let you define resources at the test level rather than globally. ===== `@Template` -A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift. +A template describes a set of objects that can be parameterized and processed to produce a list of resources for +creation by OpenShift. -You can set template location as configuration parameter or using `@Template` annotation at class level. +You can set a template location viaa a configuration parameter or using `@Template` annotation at class level. Here's a small example: [source, java] @@ -194,14 +190,13 @@ Here's a small example: include::../openshift/ftest-template-standalone/src/test/java/org/arquillian/cube/openshift/standalone/HelloWorldTemplateIT.java[tag=openshift_template_example] ---- -However in `@Template`, url can be set using `url = https://git.io/vNRQm` or using +However, in `@Template`, url can be set using `url = https://git.io/vNRQm` or using `url = "classpath:hello-openshift.yaml"` if template is on Test class path. ===== `@OpenShiftResource` You can apply OpenShift resources files before test execution. -These resources creation are meant to be used for resources that aren't tied to a living thing. -Examples of these are service accounts, credentials, routes, ... +Usually, this is a suitable way for creating non-first-citizen resources, like service accounts, credentials, routes, etc. The value can either be: @@ -209,7 +204,9 @@ The value can either be: * test classpath resource (classpath:some.json) * or plain content ({"kind" : "Secret", ...} -You can use `@OpenShiftResource` either at class level which implies that resource is created before test class execution and they are deleted after class execution or at method level which implies that resources are created and deleted after each test method execution. +You can use `@OpenShiftResource` either at class level - which implies that the resource is created before test class execution +and then deleted after the same test class execution - or at method level, which implies that resources are created and +deleted after each test method execution. You can see an example of OpenShift resources usage at https://github.com/arquillian/arquillian-cube/blob/master/openshift/ftest-openshift-resources-standalone/src/test/java/org/arquillian/cube/openshift/standalone/HelloWorldOpenShiftResourcesIT.java @@ -258,19 +255,19 @@ Before the suite is started and destroyed in the end. 
For debugging purposes, you can set the **namespace.cleanup.enabled** and **namespace.destroy.enabled** to false and keep the testing namespace around. -In other cases you may find it useful to manually create and manage the environment rather than having **arquillian** +In other cases you may find it useful to manually create and manage the environment rather than having **Arquillian** do that for you. In this case you can use the **namespace.use.existing** option to select an existing testing namespace. This option goes hand in hand with **env.init.enabled** which can be used to prevent the extension from modifying the environment. -Last but not least, you can just tell arquillian, that you are going to use the current namespace as testing namespace. +Last but not least, you can just tell *Arquillian* that you are going to use the current namespace as testing namespace. In this case, arquillian cube will delegate to https://github.com/fabric8io/kubernetes-client/[Kubernetes Client] that in turn will use: - `~/.kube/config` - `/var/run/secrets/kubernetes.io/serviceaccount/namespace` -- the `KUBERNETES_NAMESPACE` environmnet variable +- the `KUBERNETES_NAMESPACE` environment variable to determine the current testing namespace. @@ -303,17 +300,18 @@ properties can be set in `arquillian.xml` as shown in the snippet below or as sy username password +---- ==== -### Creating the environment +=== Creating the environment After creating or selecting an existing namespace, the next step is the environment preparation. This is the stage where all the required Kubernetes configuration will be applied. -#### How to run kubernetes with multiple configuration files? +==== How to run kubernetes with multiple configuration files? 1. Out of the box, the extension will use the classpath and try to find a resource named **kubernetes.json** or -**kubernetes.yaml***. The name of the resource can be changed using the **env.config.resource.name**. -Of course it is also possible to specify an external resource by URL using the **env.config.url**. +**kubernetes.yaml**. The name of the resource can be changed using the **env.config.resource.name**. +Of course, it is also possible to specify an external resource by URL using the **env.config.url**. 2. While finding resource in classpath with property **env.config.resource.name**, cube will look into classpath with given name, if not found, then cube will continue to look into classpath under META-INF/fabric8/ directory. @@ -421,7 +419,7 @@ Here's a small example: } ---- -The test code above, demonstrates how you can inject an use inside your test the *KubernetesClient* and the +The test code above, demonstrates how you can inject and use inside your test the *KubernetesClient* and the *Session* object. It also demonstrates the use of **kubernetes-assertions** which is a nice little library based on http://assertj.org[assert4j] for performing assertions on top of the Kubernetes model. @@ -711,14 +709,14 @@ https://github.com/arquillian/arquillian-cube/tree/master/openshift/ftest-opensh Also, you can learn more about Graphene at http://arquillian.org/guides/functional_testing_using_graphene/ . -=== OpenShift Integration with Rest-Assured +=== OpenShift Integration with RestAssured Integration with Rest-Assured allows for auto-resolution of the base URI of the application deployed within the OpenShift cluster by using `OpenShift Route` definition for configuring Rest-Assured. 
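As a minimal sketch of what such a test can look like, the example below only specifies a path and relies on the integration having already set the base URI from the matching route. The Rest-Assured coordinates (`io.restassured`) and the endpoint path are assumptions for illustration purposes.

[source, java]
----
import static io.restassured.RestAssured.when;

import org.jboss.arquillian.junit.Arquillian;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class HelloRouteIT {

    @Test
    public void should_reach_application_through_its_route() {
        // No host or port is configured here: the base URI is expected to be
        // resolved from the OpenShift Route by the RestAssured integration.
        when()
            .get("/api/hello")                             // hypothetical endpoint of the application under test
        .then()
            .statusCode(200);
    }
}
----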
==== Configuration -You can configure a specific base URI using the `baseUri` property from restassured configuration. In this case, the +You can configure a specific base URI using the `baseUri` property from Restassured configuration. In this case, the hostname is going to be resolved as OpenShift route name, and if there is no route with that name, then the base URI is treated as is. For example: diff --git a/docs/parallel.adoc b/docs/parallel.adoc index fb2b8521b..ace94b8df 100644 --- a/docs/parallel.adoc +++ b/docs/parallel.adoc @@ -1,19 +1,21 @@ == Parallel Execution -Sometimes using any Mave/Gradle/Jenkins plugin you end up by executing tests in parallel. -This means that same `docker-compose` file is executed for all tests. -The problem is that probably _docker host_ is the same. -So when you start the second test (in parallel) you will get a failure regarding that a container with same name is already started in that docker host. +Sometimes using any Maven/Gradle/Jenkins plugin you end up by executing tests in parallel. +This means that the same `docker-compose` file is executed for all tests. +The problem is that probably _docker host_ is the same, too. +So when you start the second test (in parallel) you will get a failure regarding that a container with same name is +already started in that docker host. -So arrived at this point you can do two things: +In this situation you can do two things: -* You can have one _docker host_ for each parallel test and override `serverUri` property at each case using system property (arq.extension.docker.serverUri). +* You can have one _docker host_ for each parallel test and override `serverUri` property at each case, using system +property (arq.extension.docker.serverUri). * You can reuse the same _docker host_ and use _star operator_. === Star Operator -Star operator let's you indicate to Arquillian Cube that you want to generate cube names randomly and adapt links as well. +Star operator lets you indicate to Arquillian Cube that you want to generate *cube* names randomly and adapt links as well. In this way when you execute your tests in parallel there will be no conflicts because of names or binding ports. Let's see an example: @@ -38,17 +40,17 @@ Let's see an example: ---- -You add the special character `*` to indicate that cube name should be randomized. +You add the special character `*` to indicate that *cube* name should be randomized. With previous example Cube will: . Generate a unique _UUID_. -. Substitute cube name, links and depends_on using the name + UUID +. Substitute *cube* name, links and depends_on using the name + UUID . In case of using an alias, don't add the `*` since it is extended from the service name . Bind port is going to be changed to a random private port (49152 - 65535) . Add an environment variable with new hostname. This environment variable is _HOSTNAME -So for example the result file could look like: +So for example the resulting file could look like this: [source, yml] .arquillian.xml @@ -71,11 +73,17 @@ So for example the result file could look like: ---- -Since now the ports are unique and names are unique, you can run tests using same orchestration in parallel against same docker host. +Since now the ports are unique and names are unique, you can run tests using the same orchestration in parallel against +the same docker host. -The same approach can work for ensuring that each test run has a unique network. 
As docker allows multiple networks to have the same name, it will not throw an error if two concurrent tests create networks with the same name. However, as cube networks are specified by name in the cube specification, if there are multiple networks with the same name, the cube could end up in any one of them, resulting in hard to debug test failures. +The same approach can work for ensuring that each test run has a unique network. As docker allows multiple networks to +have the same name, it will not throw an error if two concurrent tests create networks with the same name. +However, as *cube* networks are specified by name in the cube specification, if there are multiple networks with the +same name, the *cube* could end up in any one of them, resulting in hard-to-debug test failures. -Again, adding the special character `*` to the end of the network name will cause a random network name to be used. The name can then be used for a cube's networkMode or in the cube's network list, and it will be substituted correctly when the test runs. +Again, adding the special character `*` to the end of the network name will cause a random network name to be used. +The name can then be used for a *cube*'s networkMode or in the *cube*'s network list, and it will be substituted +correctly when the test runs. [source, yml] .arquillian.xml @@ -97,8 +105,10 @@ Again, adding the special character `*` to the end of the network name will caus ---- -NOTE: You can use the same approach for _docker-compose_ files not only with _cube_ format. But then your _docker-compose_ will be tight to Arquillian Cube. The best approach if you want to use docker-compose format is using `extends`. +NOTE: You can use the same approach for _docker-compose_ files, not only with _CUBE_ format. +But then your _docker-compose_ will be tight to Arquillian Cube. +The best approach if you want to use docker-compose format is using `extends`. -NOTE: Star operator must also used on enrichers for example: +NOTE: Star operator can also be used on enrichers, for example: `@HostPort(containerName = "tomcat*", value = 8080)` or `@DockerUrl(containerName = "tomcat*", exposedPort = 8080)` diff --git a/docs/polyglot.adoc b/docs/polyglot.adoc index 64a0a9e5d..c58ff9114 100644 --- a/docs/polyglot.adoc +++ b/docs/polyglot.adoc @@ -5,7 +5,7 @@ But if you think clearly there is nothing that avoid *Arquillian Cube* to deploy Let's see an example on how you can use *Arquillian Cube* to test a _Node.js_ _hello world_ application. -First thing to do is create the _Node.js_ application. +The first thing to do is to create the _Node.js_ application. [source, json] .src/main/js/package.json @@ -53,9 +53,9 @@ EXPOSE 8080 CMD [ "npm", "start" ] ---- -<1> We need to use +ADD+ command adding the deployed file instead of +COPY+. We are going to see why below. +<1> We need to use the +ADD+ command adding the deployed file instead of +COPY+. We are going to see why below. -Finally the +arquillian.xml+ configuration file. +Finally, the +arquillian.xml+ configuration file: [source, xml] .arquillian.xml @@ -92,7 +92,9 @@ Finally the +arquillian.xml+ configuration file. ---- <1> This property is used to set which container must be started. In this case +node+. -IMPORTANT: If containerless definition only contains only one image, it is not necessary to use _containerlessDocker_ property. At the same time if the image only exposes one port, it is not necessary to use _embeddedPort_ proeprty to set the port. 
So in previous example you could avoid using _containerlessDocker_ and _embeddedPort_. +IMPORTANT: If _containerless_ definition only contains one image, it is not necessary to use _containerlessDocker_ property. +Similarly, if the image only exposes one port, it is not necessary to use _embeddedPort_ proeprty to set the port. +So in previous example you could avoid using _containerlessDocker_ and _embeddedPort_. And finally the *Arquillian* test. @@ -127,4 +129,5 @@ public class NodeTest { <2> +GenericArchive+ with +tar+ extension must be created using _Shrinkwrap_. <3> Simple test. -NOTE: +GenericArchive+ must end with +tar+ extension because it is expected by *Arquillian Cube*. When you use +ADD+ in +Dockerfile+, _Docker_ will untar automatically the file to given location. +NOTE: +GenericArchive+ must end with +tar+ extension because it is expected by *Arquillian Cube*. +When you use +ADD+ in +Dockerfile+, _Docker_ will _untar_ automatically the file to given location. diff --git a/docs/preliminaries.adoc b/docs/preliminaries.adoc index f3642d9b4..7d976067b 100644 --- a/docs/preliminaries.adoc +++ b/docs/preliminaries.adoc @@ -2,12 +2,14 @@ *Arquillian Cube* relies on https://github.com/docker-java/docker-java[docker-java] API. -To use *Arquillian Cube* you need a _Docker_ daemon running on a computer (it can be local or not), but probably it will be at local. +To use *Arquillian Cube* you need a _Docker_ daemon running on a computer (it can be local or not), but probably it will +be at local. -By default the _Docker_ server uses UNIX sockets for communicating with the _Docker_ client. *Arquillian Cube* will attempt to detect the operating system it is running on and either set _docker-java_ to use UNIX socket on _Linux_ or to <> on _Windows_/_Mac_ as the default URI. +By default, the _Docker_ server uses UNIX sockets for communicating with the _Docker_ client. +*Arquillian Cube* will attempt to set _docker-java_ to use UNIX socket on _Linux_. -Further in case of Linux, if you want to use TCP/IP to connect to the Docker server, you'll need to make sure that your -_Docker_ server is listening on TCP port. To allow _Docker_ server to use TCP set the _Docker daemon options_, the exact +If you want to use TCP/IP to connect to the Docker server, you'll need to make sure that your +_Docker_ server is listening on TCP port. To allow _Docker_ server to use TCP, set the _Docker daemon options_, the exact process for which varies by the way you launch the Docker daemon and/or the underlying OS: * systemd (Ubuntu, Debian, RHEL 7, CentOS 7, Fedora, Archlinux) — edit docker.service and change the ExecStart value @@ -26,8 +28,8 @@ This will create the necessary directory structure under `/etc/systemd/system/do ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock + -It was necessary to clear out ExecStart using `ExecStart=` before setting it to the override value. This is only -required for some options and most options in the configuration file would not need to be cleared like this. Using +In the above example, ExecStart is cleared out - using `ExecStart=` - before setting it to the override value. This is only +required for some options, while most of them in the configuration file would not need to be cleared like this. Using systemctl edit also ensures that the override settings are loaded. 
* upstart (Ubuntu 14.04 and older versions) — set DOCKER_OPTS in `/etc/default/docker` diff --git a/docs/reports.adoc b/docs/reports.adoc index c11f38e9c..158e469b2 100644 --- a/docs/reports.adoc +++ b/docs/reports.adoc @@ -4,7 +4,8 @@ Arquillian Reporter (https://github.com/arquillian/arquillian-reporter) project Check Arquillian Reporter website to see the kind of reports you can generate and how to configure it. -Arquillian Cube integrates with Arquillian Reporter to provide in these reports some information about Cube environment. +Arquillian Cube integrates with Arquillian Reporter to use the reports and provide some information about the Cube +environment. To integrate Cube with Reporter, you only need to add arquillian reporter's depchain dependency: @@ -22,14 +23,17 @@ To integrate Cube with Reporter, you only need to add arquillian reporter's depc After that all *cubes* information will be added in the report. Cubes are elements that are deployed into a system, for example a Pod or a Docker container. -For example in case of a Docker Cube it will report start and stop duration time, if it has failed or not and some container properties like ports, links, image name, entrypoint, network .... +For example, in the case of a Docker Cube start and stop duration time wil be reported, whether the container execution +failed or not, and some container properties like ports, links, image name, entrypoint, network, etc. === Arquillian Cube Docker Reporter -In previous section you've read that by just adding reporter dependency, you get integration between cube and reporter and some information about cube (for example a docker container) is reported. +In the previous section you've read that by just adding reporter dependency, you'll get integration between Cube and +Reporter, and some information about *cubes* (for example a docker container) is reported. -But sometimes you need more information about the system and not each cube individually. -For this reason there is a docker cube reporter integration that adds on the report information specific to docker environment like the composition used during deployment or docker host information. +But sometimes you need more information about the system, rather than on individual *cubes*. +For this reason a Docker Cube-Reporter integration exists, that adds specific information to the report about +environment peculiarities, like the composition used during deployment or the docker host information. For this reason if you add next dependency too: @@ -43,24 +47,29 @@ For this reason if you add next dependency too: ---- -Information about docker host and an schema of docker compositions will be added in the report. +Information about the docker host and docker compositions will be added to the report. === Arquillian Cube Docker Drone Integration -In <> you've read that you can execute web UI tests inside a docker container which contains the browser. -Also an screencast is recorded so you can review lately what has happened inside the container. +In <> you've read that you can execute web UI tests inside a docker container, +which contains the browser. +Also, a screencast is recorded, so that you can review lately what has happened inside the container. -If you add previous dependency `arquillian-cube-docker-reporter` and `arquillian-reporter-depchain` in a cube docker drone project, then the report will contain the screencasts as well in the report, so you can play from the report the recordings as well. 
+If you add the aforementioned dependencies, i.e. `arquillian-cube-docker-reporter` and `arquillian-reporter-depchain` to
+a Cube Docker Drone project, then the report will contain the screencasts as well, so that you can play the recordings
+directly within the report.
 
 === Arquillian Cube Docker RestAssured Integration
 
-If you add `arquillian-cube-docker-reporter` and `arquillian-reporter-depchain` in a cube docker RestAssured project, then the report will contain the request and response logs for all test methods.
+If you add `arquillian-cube-docker-reporter` and `arquillian-reporter-depchain` in a Cube Docker RestAssured project,
+then the report will contain the request and response logs for all test methods.
 
 === Arquillian Cube Kubernetes Reporter
 
-There is a arquillian cube kubernetes reporter integration that give report information about resources configuration and kubernetes session i.e. Namespace, Master URL, Replication Controllers, Pods, Services etc.
+There is an Arquillian Cube Kubernetes Reporter integration that provides report data about the resources configuration
+and the Kubernetes session, i.e. Namespace, Master URL, Replication Controllers, Pods, Services, etc.
 
-For this reason you have to add following dependency:
+For this integration to work, the following dependencies must be added:
 
 [source, xml]
 .pom.xml
diff --git a/docs/requirements.adoc b/docs/requirements.adoc
index 54cd7aa0d..5e1a2eb69 100644
--- a/docs/requirements.adoc
+++ b/docs/requirements.adoc
@@ -1,25 +1,24 @@
 == Requirements Module
 
-Arquillian Cube tries to adapt to docker installation you have.
-For example if you are running Docker in linux machine, Docker Cube is going to try to connect there.
-If it is in MacOS or Windows will try to use docker-machine, and if there is only one machine defined, it will use that one.
+Arquillian Cube tries to connect to the local Docker installation, or to a Kubernetes or OpenShift cluster.
 
-But sometimes this automatic resolutions are not possible, for example you have more than one docker machine installed/started, or you don't know if the user is going to have one docker machine installed/started or more than one.
-In these cases you need to set using `machineName` property which docker machine you want to use.
+Sometimes the test environment does not support one of the above-mentioned targets, and some tests may fail, e.g. a
+test requiring OpenShift 4 would fail if the test environment cannot provide an OpenShift 4 cluster.
 
-The problem is that you don't know if the environment were test is running will have this machine or not.
-If it doesn't have then an exception will be thrown regarding cannot connect to given docker machine and test will fail.
-
-Obviously this is not a failure, but test should be ignored.
-For this use case Arquillian Cube and other ones, Cube implements the *requirements* module.
-This module makes skip tests instead of failing in case of not meeting some environment expectations.
+Obviously this is not a test failure, and the test should rather be ignored.
+For this use case and similar ones, Arquillian Cube provides the *requirements* module.
+This module makes it easy to skip tests, instead of failing, in case any environment expectations are not met.
 
 === Example of environment requirements
 
-With Requirements you can set if you want to skip tests if some environment or property variables are not set.
+By using _Requirements_, you can choose to skip tests based on various criteria, like missing environment
+variables or system properties.
+This is useful for example if you require that the `DOCKER_HOST`, `DOCKER_TLS_VERIFY` or `DOCKER_CERT_PATH` system or
+environment variable is set.
+Similarly, for example in the Kubernetes use case, a requirement can prevent a test from being executed if the cluster
+is not available.
 
-Notice that Cube takes precedence these variables over configured in `arquillian.xml`.
+Notice that Cube gives precedence to these variables over those configured in `arquillian.xml`.
 
 To use it you need to add requirements dependency.
 
@@ -33,10 +32,11 @@ To use it you need to add requirements dependency.
 
 ----
 
-Then you can use an special runner or a JUnit rule.
-If you use Rule, the scope of annotations are only a test class, if you use the runner then annotations can be used in a suite level.
+Then you can use a special runner or a JUnit _Rule_.
+If you use a _Rule_, the scope of the annotations can only be a test class, while annotations can be used at the suite level
+when using an _ArquillianConditionalRunner_.
 
-Let's see how to use with Rule.
+Let's see how to use it with a JUnit _Rule_.
 
 [source, java]
 ----
@@ -54,10 +54,10 @@ public class HelloWorldServletTest {
 }
 ----
 
-<1> Checks if it is set a system property and if not environment variable with name `DOCKER_HOST`
+<1> Checks if either a system property or an environment variable with name `DOCKER_HOST` is set
 <2> Rule definition
 
-But you can use the runner approach to use it suite level.
+Alternatively, you can use the _ArquillianConditionalRunner_ approach to use the requirement at the suite level.
 
 [source, java]
 ----
@@ -73,38 +73,40 @@ public class HelloWorldServletTest {
 
 }
 ----
 <1> Runner for requirements check
-<2> Checks if it is set a system property and if not environment variable with name `DOCKER_HOST`
+<2> Checks if either a system property or an environment variable with name `DOCKER_HOST` is set
 
-Other annotations you can use are: `RequiresEnvironmentVariable` and `RequiresSystemProperty` which basically instead of checking as system property or environment variable, they only checks ones.
+The `RequiresEnvironmentVariable` and `RequiresSystemProperty` annotations can be used too, which respectively check
+whether _just_ an environment variable or a system property is set.
 
-=== Example with Docker
+=== Example with OpenShift 4
 
-But also there is an annotation in docker module for checking the environment against docker machine.
+The `openshift` module also defines an annotation for checking the environment against the availability of an OpenShift 4 cluster.
 
 [source, java]
 ----
 import org.arquillian.cube.requirement.ArquillianConditionalRunner;
-import org.arquillian.cube.docker.impl.requirement.RequiresDockerMachine;
+import org.arquillian.cube.openshift.impl.requirement.RequiresOpenshift4;
 
 @RunWith(ArquillianConditionalRunner.class)
-@RequiresDockerMachine(name = "dev") // <1>
-public class HelloWorldServletTest {
+@RequiresOpenshift4 // <1>
+public class RouteInOtherNamespaceIT {
     //....
 }
 ----
-<1> Docker machine requirement
+<1> OpenShift 4 is required for the test to be executed
 
-Previous test will only be executed if in the environment where test is running has docker machine installed with a machine named _dev_.
+The test will only be executed if an OpenShift 4 cluster is available in the target environment.
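+
+Requirement annotations can also be combined on the same test class. The following sketch assumes that the conditional
+runner evaluates each requirement annotation independently and that `RequiresEnvironmentVariable` accepts the variable
+name as its value; the class name and the `DOCKER_REGISTRY` variable are only illustrative:
+
+[source, java]
+----
+import org.arquillian.cube.openshift.impl.requirement.RequiresOpenshift4;
+import org.arquillian.cube.requirement.ArquillianConditionalRunner;
+import org.arquillian.cube.requirement.RequiresEnvironmentVariable; // package assumed
+import org.junit.runner.RunWith;
+
+@RunWith(ArquillianConditionalRunner.class)
+@RequiresOpenshift4 // skipped when no OpenShift 4 cluster is reachable
+@RequiresEnvironmentVariable("DOCKER_REGISTRY") // skipped when the variable is not set (illustrative name)
+public class RegistryRouteIT {
+    //....
+}
+----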
 
 === Customizing Requirements
 
 You can also implement your own requirement annotations. To do it you only need to do two things.
 
-* An annotation annotated with `org.arquillian.cube.spi.requirement.Requires` pointing to a class which implements `org.arquillian.cube.spi.requirement.Requirement`.
-* An implementation of `org.arquillian.cube.spi.requirement.Requirement` interface.
+* Add an annotation interface, decorated with `org.arquillian.cube.spi.requirement.Requires`, pointing to a class
+which implements `org.arquillian.cube.spi.requirement.Requirement`.
+* Add an implementation of the `org.arquillian.cube.spi.requirement.Requirement` interface.
 
 Let's see an example of how to implement a requirement against docker version.
 
@@ -141,6 +143,7 @@ public class DockerRequirement implements Requirement {
 }
 }
 ----
-<1> In case of not meeting an expectation, `org.arquillian.cube.spi.requirement.UnsatisfiedRequirementException` should be thrown with a message.
+<1> In case of not meeting an expectation, `org.arquillian.cube.spi.requirement.UnsatisfiedRequirementException` should be
+thrown with a message.
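+
+The annotation side of the recipe - the first of the two steps above - could look like the following sketch, where the
+`RequiresDocker` name is only illustrative:
+
+[source, java]
+----
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+import org.arquillian.cube.spi.requirement.Requires;
+
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.TYPE)
+@Requires(DockerRequirement.class) // points to the Requirement implementation shown above
+public @interface RequiresDocker {
+}
+----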
 
-After that you can use this annotation as any other requirements provided by Cube.
\ No newline at end of file
+After that you can use this annotation as any other requirements provided by Cube.
diff --git a/docs/restassured.adoc b/docs/restassured.adoc
index b4b559fce..3c5081055 100644
--- a/docs/restassured.adoc
+++ b/docs/restassured.adoc
@@ -9,8 +9,8 @@ An example of how to make a GET request and validate the JSON or XML response mi
 
 The problem with Rest Assured is that by default it assumes that host is _localhost_ and port _8080_.
 This might be perfect when not using Docker but when using docker this assumptions might not be the most typical.
-For example you may use another bind port rather than _8080_, you can expose 8080 port but bind to another port.
-Also you might run docker host in different ip rather than _localhost_, maybe because you are using docker machine or maybe because you are using an external docker host.
+For example, you may use another bind port rather than _8080_: you can expose port 8080 but bind it to another port.
+Also, you might run the docker host on a different ip rather than _localhost_, maybe because you are using an external docker host.
 
 So in these cases you need to set in every request:
 
@@ -40,7 +40,8 @@ Previous approach works but it has some problems:
 
 * Requires some development interference of the developer, if it is running in docker machine needs to set one _ip_ which might change in the future, or if running on native linux must be changed to _localhost_.
 * Any change on Rest-Assured configuration properties would make all tests fails.
 
-To fix these problems, you can use Arquillian Cube Docker RestAssured integration which creates a `RequestSpecBuilder` with correct values set.
+To fix these problems, you can use the Arquillian Cube Docker RestAssured integration, which creates a `RequestSpecBuilder`
+with the correct values set.
 
 === Configuration
 
@@ -70,19 +71,24 @@ By default, if your scenario is not complex you don't need to configure anything
 
 |baseUri
 |://
-|It is the base uri used in RestAssured. You can set an specific value or not set and let extension to configure it by default using auto-resolution system.
+|It is the base uri used in RestAssured. You can set a specific value, or leave it unset and let the extension configure
+it by default using the auto-resolution system.
 
 |schema
 |http
 |Schema used in case of auto-resolution of baseUri
 
 |port
-|If from all running containers there is only one binding port (notice that exposed ports are not bound if not specified), then this is the value used. If there are more than one binding port then an exception is thrown.
-|Port to be used for communicating with docker host. By default this port must be the exposed port used in port binding. The extension will resolve for given exposed port which is the binding port. If it is not found then exposed port will be assumed as binding port too. For example using -p 8080:80 you need to set this property to 80 and extension will resolve to 8080.
+|If from all running containers there is only one binding port (notice that exposed ports are not bound if not specified),
+then this is the value used. If there is more than one binding port, then an exception is thrown.
+|Port to be used for communicating with the docker host. By default, this port must be the exposed port used in port binding.
+The extension will resolve, for the given exposed port, which port it is bound to. If no binding is found, then the exposed port
+will be assumed as the binding port too. For example, using -p 8080:80 you need to set this property to 80 and the extension will resolve it to 8080.
 
 |exclusionContainers
 |
-|If you want to use auto-resolution of the port attribute you might want to exclude that extension searches for binding ports in some containers (for example monitoring containers). This is a CSV property to set container names of al of them.
+|If you want to use auto-resolution of the port attribute, you might want to exclude some containers (for example
+monitoring containers) from the search for binding ports. This is a CSV property listing the names of all such containers.
 
 |basePath
 |
 
 |useRelaxedHttpsValidation
 |
-|Configures RestAssured to use relaxed https validation. If attribute is present but with no value then it is applied to all protocols. If you put an string, only this protocol will be applied the relaxed rules.
+|Configures RestAssured to use relaxed https validation. If the attribute is present but with no value, then it is applied
+to all protocols. If you put a string, the relaxed rules will be applied only to that protocol.
 
 |authenticationScheme
 |
 
@@ -127,9 +134,10 @@ RequestSpecBuilder requestSpecBuilder;
 
 ==== Example
 
-After setting the dependency and configuring the extension if required you can write your Arquillian Cube test as usually and use RestAssured without configuring it:
+After setting the dependency and configuring the extension (if required), you can write your Arquillian Cube test as
+usual and use RestAssured without configuring it.
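+
+If a single request needs extra customization, the injected `RequestSpecBuilder` can also be applied explicitly - a
+minimal sketch, assuming the injection shown above; the class and method names are only illustrative:
+
+[source, java]
+----
+import io.restassured.RestAssured;                 // adjust the Rest-Assured package to the version in use
+import io.restassured.builder.RequestSpecBuilder;
+
+import org.jboss.arquillian.junit.Arquillian;
+import org.jboss.arquillian.test.api.ArquillianResource;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+@RunWith(Arquillian.class)
+public class ExplicitSpecTest {
+
+    @ArquillianResource
+    RequestSpecBuilder requestSpecBuilder; // already populated with the resolved docker host and port
+
+    @Test
+    public void should_ping_the_server() {
+        RestAssured.given()
+                .spec(requestSpecBuilder.build()) // reuse the auto-resolved base uri and port
+            .when()
+                .get("/")
+            .then()
+                .statusCode(200);
+    }
+}
+----
+
+Most of the time, though, no explicit configuration is needed at all.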
-With next docker compose file which starts a ping pong server listening at root context:
+For example, with the following docker compose file which starts a `ping-pong` server listening at the root context:
 
 [source, yml]
 .docker-compose.yml
@@ -137,10 +145,10 @@ With next docker compose file which starts a ping pong server listening at root
 helloworld:
   image: tsongpon/pingpong
   ports:
-    - "5432:8080"
+    - "8080:8080"
 ----
 
-You only need to do:
+you only need to do:
 
 [source, java]
 .PingPongTest.java
diff --git a/docs/what-is-this.adoc b/docs/what-is-this.adoc
index 22ef8531d..456639f67 100644
--- a/docs/what-is-this.adoc
+++ b/docs/what-is-this.adoc
@@ -9,14 +9,22 @@ Extension is named *Cube* for two reasons:
 
 With this extension you can start a _Docker_ container with a server installed, deploy the required deployable file within it and execute _Arquillian_ tests.
 
-The key point here is that if _Docker_ is used as deployable platform in production, your tests are executed in a the same container as it will be in production, so your tests are even more real than before.
+The key point here is that if _Docker_ is used as deployable platform in production, your tests are executed in the same container as it will be in production, so your tests are even more real than before.
 
 But it also lets you start a container with every required service like database, mail server, ... and instead of stubbing or using fake objects your tests can use real servers.
 
 [WARNING]
 ====
 This extension has been developed and tested on a Linux machine with the _Docker_ server already installed.
-It works with *Boot2Docker* as well in _Windows_ and _MacOS_ machines, but some parameters like _host ip_ must be the _Boot2Docker_ server instead of _localhost_ (in case you have _Docker_ server installed inside your own machine).
-One of the best resources to learn about why using _Boot2Docker_ is different from using _Docker_ in Linux can be read here http://viget.com/extend/how-to-use-docker-on-os-x-the-missing-guide
+The current version is meant to fill a gap in the project maintenance, and to support execution against the latest _Docker_,
+_Kubernetes_ and _OpenShift_ versions, but we had to take some decisions about deprecated integrations, mainly due to
+capacity reasons:
+
+- *Arquillian Cube 2.0.0 does not support execution on _Windows_ and _macOS_ machines anymore*, since the *Boot2Docker*
+and *Docker machine* integrations have been removed due to deprecation.
+
+We're aware this might be an issue in some cases, and we're open to community discussion about any options that could
+help mitigate the consequences, like temporarily resuming the integration in a sustaining branch. Feel free to
+log issues or start discussions about this topic.
 ====