v1.x How To
- How to use human-friendly CLI
- How to process run report
- How to start Web UI
- How to create data items on storage
- How to create data items of fixed size
- How to create data items of random size
- How to create compressible data items
- How to read data items back
- How to verify data items during read
- How to read data items randomly
- How to do infinite load with data item list of finite size
- How to limit run duration
- How to limit number of data items to be created/read/etc
- How to create, read, update, append, and delete data items using Amazon S3 API
- How to create, read, update, append, and delete data items using Atmos API
- How to create, read, update, append, and delete data items using OpenStack Swift API
- How to run Mongoose in parallel mode
- How to run Mongoose in distributed mode
- How to assign Mongoose load servers to storage nodes in distributed mode
- How to make Mongoose sleep between operations
- How to limit Mongoose rate
- How to resume terminated Mongoose run
- How to run chain scenario
- How to run ramp up scenario
- How to deal with data item size distribution
- How to use Mongoose as a library
- How to use storage mock
- How to write new data items filled with zero bytes
- How to write new data items filled with equal bytes
- How to write new data items filled with text from Rikki-Tikki-Tavi tale by R. Kipling
- How to write new data items filled with custom data from an external file
- How to create a lot of buckets concurrently
- How to read a lot of buckets concurrently
- How to delete a lot of buckets concurrently
- How to perform a load over the Swift containers instead of buckets
- How to create the objects in the specific subdirectory on the storage side
- How to write N files to the specified directory
- How to create N subdirectories into the specified directory
- How to add custom HTTP headers to the requests generated
- How to disable the console output coloring
- How to write the items with names in the sequential ascending order
- How to write the items with names in the sequential descending order
- How to write the items with decimal names starting from 1000000 to 9999999
- How to write the items with names having a prefix and a binary random number
- How to generate custom HTTP headers with dynamic values
- How to write the files using the variable path
How to use human-friendly CLI
Mongoose is a complex tool. Users who don't need its advanced functions can use very simple human-friendly CLI.
Execute the following command to list all the supported options:
$ java -jar /mongoose.jar -h
Currently the command output looks like the following:
usage: Mongoose
 -b,--bucket                  Bucket to write data to
 -c,--count                   Count of objects to write
 -d,--delete                  Perform object delete
 -h,--help                    Displays this message
 -i,--ip                      Comma-separated list of ip addresses to write to
 -l,--length                  Size of the object to write
 -o,--use-deployment-output   Use deployment output
 -r,--read                    Perform object read
 -s,--secret                  Secret
 -t,--threads                 Number of parallel threads
 -u,--user                    User
 -w,--write                   Perform object write
 -z,--run-id                  Sets run id
Number of parallel threads is the number of "green" threads or de facto the number of active connections.
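For instance, the options listed above can be combined into a single write run. The addresses, credentials, and values below are placeholders for illustration, not values this guide prescribes:

```shell
# Hypothetical example combining the documented options: write 100 objects
# of 1MB each to the bucket "my-bucket" using 10 parallel threads.
$ java -jar /mongoose.jar -w -c 100 -l 1MB -b my-bucket \
    -i 10.64.84.xxx,10.64.84.yyy -u [email protected] -s <secret> -t 10
```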
How to process run report
Run report is a set of files Mongoose produces in a directory /log/. Starting with Mongoose 0.8, all the key files (data.items.csv, perf.avg.csv, perf.trace.csv, and perf.sum.csv) are produced in pure CSV format. You can use any mature tool that supports CSV format to open and process report components.
As an example, suppose a Mongoose run produced 10 data items of random size and we would like to calculate the total size of the generated content. You can easily get the result by opening data.items.csv in MS Excel and selecting the third column, which contains the data item sizes. The total size appears on the status bar as the Sum value.
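Outside of Excel, the same sum can be computed with a one-liner. This assumes the comma-separated layout with the size in the third column, as described above:

```shell
# Sum the third (size) column of the data items report
awk -F, '{ total += $3 } END { print total }' data.items.csv
```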
How to start Web UI
Start Mongoose in Web UI mode:
$ java -jar <path to jar>/mongoose.jar webui
Open a browser on your PC and navigate to the Mongoose Web UI page. The Web UI address is <HW client IP>:8080. For instance, for a Mongoose started locally the address would be http://localhost:8080/.
How to create data items on storage
In order to write to a storage you need to:
- Specify the list of storage's front-end nodes via the storage.addrs configuration parameter
- Specify the S3 bucket via the api.type.s3.bucket configuration parameter
- Specify the user and password via the auth.id and auth.secret configuration parameters
The following commands start a new Mongoose run:
$ export JAVA_TOOL_OPTIONS="-Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dapi.type.s3.bucket=my-bucket -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose starts writing to the storage specified (create is the default operation).
The list of data items produced can be found in data.items.csv in the run's log directory.
How to create data items of fixed size
Use the data.size configuration parameter to specify the data item size:
$ export JAVA_TOOL_OPTIONS="-Ddata.size=10MB -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose starts writing 10MB data items. Create is the default operation in Mongoose. Amazon S3 is the default storage API.
All the data items will be created in the bucket my-bucket. If there is no bucket with this name, Mongoose creates one.
The list of data items produced can be found in data.items.csv.
The size units supported are B (default), KB, MB, GB, TB, and EB.
You can find some additional information in the comments section below.
How to create data items of random size
Use the data.size.min configuration parameter to specify the minimal data item size and the data.size.max configuration parameter to specify the maximal data item size:
$ export JAVA_TOOL_OPTIONS="-Ddata.size.min=1MB -Ddata.size.max=10MB -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose starts writing data items with size between 1MB and 10MB. create is the default operation in Mongoose. Amazon S3 is the default storage API.
All the data items will be created in the bucket my-bucket. If there is no bucket with this name, Mongoose creates one.
The list of data items produced can be found in data.items.csv.
You can find some additional information in the comments section below.
How to create compressible data items
Use a combination of the data.size and data.buffer.ring.size configuration parameters to make Mongoose create compressible data items:
$ export JAVA_TOOL_OPTIONS="-Ddata.size=4MB -Ddata.buffer.ring.size=32506 -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose starts writing data items that can be compressed by GZip utility with the ratio 45:1 (the ratio between uncompressed size and the size after compression).
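The effect can be sanity-checked locally without Mongoose: a file built from a repeating random block shorter than gzip's 32KB window compresses far better than pure random data. The file names and the 130-repeat count below are arbitrary, and the exact ratio may differ from Mongoose's output:

```shell
# Build a 4MB sample from a repeating 32506-byte random block and compress it.
head -c 32506 /dev/urandom > ring.bin
for i in $(seq 1 130); do cat ring.bin; done | head -c 4194304 > sample.bin
# Compare the original and compressed sizes to estimate the ratio.
gzip -c sample.bin | wc -c
```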
Compression ratio heavily depends on the compression algorithm used. All the ratio information above is valid for the GZip utility only.
How to read data items back
Use the scenario.type.single.load configuration parameter to make Mongoose read, and the data.src.fpath configuration parameter to point to the list of data items to be read:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=read -Ddata.src.fpath=/data.items.csv -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose starts reading data items back.
Do not forget to specify the correct bucket to read from.
How to verify data items during read
Data item verification is on by default. See the How to read data items back section.
To disable verification, set the load.type.read.verifyContent configuration parameter to false. For example:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=read -Ddata.src.fpath=/data.items.csv -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
How to read data items randomly
Use the data.src.random configuration parameter to make Mongoose randomly read the data items listed in the file specified via the data.src.fpath configuration parameter:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=read -Ddata.src.fpath=/data.items.csv -Ddata.src.random=true -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose starts random reading of data items from the file specified.
Note: the data.src.random configuration parameter can be used for all operation types (update, delete, etc.).
How to do infinite load with data item list of finite size
Use the load.circular configuration parameter to make Mongoose iterate over the listed data items in an endless cycle:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=<create|read|update|append> -Ditem.src.file=/items.csv -Dload.circular=true -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose starts infinite items processing from the file specified.
The load.circular configuration parameter can also be used for update and append operations.
Circular update and append are available since v1.2.0.
How to limit run duration
Use the load.limit.time configuration parameter to make Mongoose stop after the specified time interval:
$ export JAVA_TOOL_OPTIONS="-Dload.limit.time=30.seconds -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose starts creating data items. The run will end in 30 seconds. The time units supported are seconds (default), minutes, hours, and days.
Starting from Mongoose 0.7.0 you can also use a short notation: 600s, 180m, 10h, or 1d.
You can find some additional information in the comments section below.
How to limit number of data items to be created/read/etc
Use the load.limit.count configuration parameter to make Mongoose stop after the specified number of data items is processed:
$ export JAVA_TOOL_OPTIONS="-Dload.limit.count=10 -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose starts creating data items. The run will end after 10 data items are created.
You can find some additional information in the comments section below.
How to create, read, update, append, and delete data items using Amazon S3 API
Amazon S3 is the default API for Mongoose. The commands below create 10 data items in the S3 bucket my-bucket (create is the default operation):
$ export JAVA_TOOL_OPTIONS="-Dload.limit.count=10 -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The following lines read 10 data items back:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=read -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
Please note that there are no load.limit.count and data.src.fpath configuration parameters. Mongoose reads the list of data items from the S3 bucket specified.
The following lines update 10 data items:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=update -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The following lines append 10 data items:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=append -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The following lines delete 10 data items from the storage:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=delete -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
How to create, read, update, append, and delete data items using Atmos API
For Atmos you need to specify the API to use via the api.name configuration parameter. The commands below create 10 data items in the Atmos subtenant my-subtenant (create is the default operation):
$ export JAVA_TOOL_OPTIONS="-Dload.limit.count=10 -Dapi.name=atmos -Dapi.type.atmos.subtenant=my-subtenant -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The following lines read 10 data items back:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=read -Ddata.src.fpath=/data.items.csv -Dapi.name=atmos -Dapi.type.atmos.subtenant=my-subtenant -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
Please note that the data.src.fpath configuration parameter is specified here. Mongoose cannot list the content of Atmos subtenants.
The following lines update 10 data items:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=update -Ddata.src.fpath=/data.items.csv -Dapi.name=atmos -Dapi.type.atmos.subtenant=my-subtenant -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The following lines append 10 data items:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=append -Ddata.src.fpath=/data.items.csv -Dapi.name=atmos -Dapi.type.atmos.subtenant=my-subtenant -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The following lines delete 10 data items from the storage:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=delete -Ddata.src.fpath=/data.items.csv -Dapi.name=atmos -Dapi.type.atmos.subtenant=my-subtenant -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
How to create, read, update, append, and delete data items using OpenStack Swift API
OpenStack Swift API support is available with Mongoose 0.7.0. For OpenStack Swift you need to specify the API to use via the api.name configuration parameter. The commands below create 10 data items in the Swift container my-container (create is the default operation):
$ export JAVA_TOOL_OPTIONS="-Dload.limit.count=10 -Dapi.name=swift -Dapi.type.swift.container=my-container -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The following lines read 10 data items back:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=read -Dapi.name=swift -Dapi.type.swift.container=my-container -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
Please note that there are no load.limit.count and data.src.fpath configuration parameters. Mongoose reads the list of data items from the Swift container specified.
The following lines update 10 data items:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=update -Dapi.name=swift -Dapi.type.swift.container=my-container -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The following lines append 10 data items:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=append -Dapi.name=swift -Dapi.type.swift.container=my-container -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The following lines delete 10 data items from the storage:
$ export JAVA_TOOL_OPTIONS="-Dscenario.type.single.load=delete -Dapi.name=swift -Dapi.type.swift.container=my-container -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
How to run Mongoose in parallel mode
By default, Mongoose opens only one connection per storage node. Use the load.connections configuration parameter to make Mongoose manage several active connections per storage node:
$ export JAVA_TOOL_OPTIONS="-Dload.connections=10 -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose opens 10 connections per storage node, 30 connections in total.
Note: the old parameter load.threads is still supported. Starting with Mongoose 1.0 it means the number of active connections, not the number of native threads.
You can find some additional information in the comments section below.
How to run Mongoose in distributed mode
By default, Mongoose runs in standalone mode, i.e. a single Mongoose instance independently loads a storage. When massive load needs to be produced, start Mongoose in distributed mode. In distributed mode there must be 2 or more load servers (Mongoose instances that produce the load) and one client that coordinates the servers. When Mongoose is used for performance testing, each Mongoose instance must run on its own HW client; in other words, it is not recommended to have a server and a client share one HW client. It is also important to have a "good" network between the client and the servers. Note that a server can be used by one client only.
Execute the command below to start a server:
$ java -jar /mongoose.jar server
As a result, the server starts. It will be idle until a client starts. Note that a server starts without any parameters: a server gets all its configuration from its client.
If a host has several network interfaces configured, the Java RMI server implementation may bind the Mongoose service to a wrong interface. You can explicitly specify the network interface to use via Java's native java.rmi.server.hostname configuration parameter. For example:
$ java -Djava.rmi.server.hostname=10.77.4.xx -jar mongoose.jar server
Start more servers.
Now execute the following commands to start a client:
$ export JAVA_TOOL_OPTIONS="-Dload.server.addrs=10.64.84.aaa,10.64.84.bbb,10.64.84.ccc -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar client
As a result, the entire farm of Mongoose servers starts producing load (create). The configuration parameters of a client are the parameters of a standalone Mongoose with one exception: the load.server.addrs configuration parameter, which lists the IP addresses of the load servers to use.
When a client stops, the servers stop as well and become idle again. They can be reused by another client.
How to assign Mongoose load servers to storage nodes in distributed mode
By default, Mongoose in distributed mode uses each of its servers to load all storage nodes. Use the load.server.assignTo.node configuration parameter to make Mongoose automatically assign each server to a subset of the storage nodes:
$ export JAVA_TOOL_OPTIONS="-Dload.server.addrs=10.64.84.aaa,10.64.84.bbb,10.64.84.ccc -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dload.server.assignTo.node=true -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar client
In the example above there are 3 servers and 3 storage nodes, so Mongoose assigns each server to a single storage node.
In case there are 3 servers and 9 storage nodes, Mongoose assigns each server to 3 storage nodes (9 / 3 = 3).
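The 3-servers / 9-nodes arithmetic amounts to a simple round-robin; a sketch of the idea (the names s1..s3 and n1..n9 are made up for illustration, not Mongoose output):

```shell
# Round-robin: node i goes to server ((i - 1) mod 3) + 1, so each of the
# 3 servers ends up with 9 / 3 = 3 storage nodes.
awk 'BEGIN {
  split("s1 s2 s3", srv, " ");
  for (i = 1; i <= 9; i++)
    print srv[(i - 1) % 3 + 1], "-> n" i;
}'
```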
In case there are 9 servers and 3 storage nodes, Mongoose creates 3 groups of servers (9 / 3 = 3). All servers from one group load a single storage node.
How to make Mongoose sleep between operations
By default, Mongoose performs operations at the maximum speed possible. Use the load.limit.reqSleepMilliSec configuration parameter to make Mongoose sleep between two successive operations:
$ export JAVA_TOOL_OPTIONS="-Dload.limit.reqSleepMilliSec=1000 -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, Mongoose will sleep 1 second (1000 millis) after each create operation within each active connection. Please note that when load.limit.reqSleepMilliSec is used, the total load produced by Mongoose depends on the number of active connections: three in the example above, because three storage addresses are listed.
See the next section for another way to slow down Mongoose.
How to limit Mongoose rate
By default, Mongoose performs operations at the maximum speed possible. Use the load.limit.rate configuration parameter to make Mongoose run at the speed you need:
$ export JAVA_TOOL_OPTIONS="-Dload.limit.rate=10 -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
As a result, all Mongoose connections together will generate about 10 data item create requests per second.
Note that this configuration parameter is applied to each active load. For instance, if Mongoose does an asynchronous chain of operations (see the section about chain scenario), then the rate limit is applied to each group of connections associated with a particular operation.
Please note that Mongoose uses an adaptive algorithm to control the rate, so there is a learning phase at the beginning of each run. After the learning is over, the actual rate should be close to the limit set, though minor deviations are possible.
See the previous section for another way to slow down Mongoose.
How to resume terminated Mongoose run
After a run has been terminated it can be resumed. To do so, specify the same run.id and enable resume by setting the run.resume.enabled configuration parameter to true:
$ export JAVA_TOOL_OPTIONS="-Drun.id=2015.08.24.11.23.03.158 -Drun.resume.enabled=true -Dapi.type.s3.bucket=my-bucket -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
After that Mongoose will report that "Run "2015.08.24.11.23.03.158" was resumed" and resume the terminated run.
The resumed run provides correct reporting WRT average performance metrics and counting.
The resumed run picks up all the limitations of the terminated run. For instance, if some run has been terminated after 5 minutes and you specify 30 minutes time limit for the resumed run, the resumed run will stop after 25 minutes (30 - 5 = 25).
The resumed run can adopt new configuration parameters’ values. Specify configuration parameters of the terminated run if you do not want to change anything.
Note that only a single-scenario run in standalone mode can be resumed. Chain and ramp-up scenarios and distributed mode are not supported.
How to run chain scenario
Mongoose supports three scenarios: single, chain, and ramp up. Single is the default scenario: Mongoose performs one operation during the entire run (create by default). In the chain scenario, Mongoose performs a user-specified chain of operations. The simplest way to run the chain scenario is to execute the following commands:
$ export JAVA_TOOL_OPTIONS="-Dscenario.name=chain -Dload.limit.count=10 -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dapi.type.s3.bucket=my-bucket -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The scenario.name configuration parameter defines the scenario to run. By default the chain scenario runs the <create,update,append,read,delete> chain of operations for each data item.
Please note that you need to limit the duration of each operation using the load.limit.count or load.limit.time configuration parameters. Otherwise, with default parameters, each operation will never end.
Use the scenario.type.chain.load configuration parameter to override the operations list:
$ export JAVA_TOOL_OPTIONS="-Dscenario.name=chain -Dscenario.type.chain.load=create,update,update,update,update,read -Dload.limit.count=10 -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dapi.type.s3.bucket=my-bucket -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
During this run Mongoose creates objects, updates them four times, and reads them back.
How to run ramp up scenario
Mongoose supports three scenarios: single, chain, and ramp up. Single is the default scenario: Mongoose performs one operation during the entire run (create by default). In the ramp up scenario, the user specifies two sets, one of data item sizes (N values) and one of numbers of loaders to run in parallel (M values). Mongoose loads the data storage using each possible pair of values [item size, loaders number]; the total number of pairs is N × M. The simplest way to run the ramp up scenario is to execute the following commands:
$ export JAVA_TOOL_OPTIONS="-Dscenario.name=rampup -Dload.limit.count=10 -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dapi.type.s3.bucket=my-bucket -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
The scenario.name configuration parameter defines the scenario to run. The default ramp up parameters are: numbers of connections 1, 10, 100; data item sizes 1KB, 10KB, 100KB.
Please note that you need to limit the duration of each operation using the load.limit.count or load.limit.time configuration parameters. Otherwise, with default parameters, each pair will be tested forever.
Use the scenario.type.rampup.connCounts and scenario.type.rampup.sizes configuration parameters to override the default ramp up parameters:
$ export JAVA_TOOL_OPTIONS="-Dscenario.name=rampup -Dscenario.type.rampup.connCounts=1,10 -Dscenario.type.rampup.sizes=1,10 -Dscenario.type.chain.load=create,read -Dload.limit.count=100 -Dstorage.addrs=10.64.84.xxx,10.64.84.yyy,10.64.84.zzz -Dapi.type.s3.bucket=my-bucket -Dauth.id=[email protected] -Dauth.secret=A5JKVKuSHp5Kme2qcMFlvMqEKbN+QBNF0tRuFleT"
$ java -jar /mongoose.jar
Note that at each point [item size, loaders number] Mongoose runs the chain of operations specified via the scenario.type.chain.load configuration parameter. In the example above, Mongoose will create data items and read them back at each of the four points: [1, 1], [1, 10], [10, 1], and [10, 10].
Note: the old name of scenario.type.rampup.connCounts is scenario.type.rampup.threadCounts.
How to deal with data item size distribution
It is possible to specify the size limits for new data items. The configuration parameters are:
data.size.min
data.size.max
The data.size configuration parameter value overrides both the data.size.min and data.size.max values, making all the written data items have the same (fixed) size.
There is also the parameter:
data.size.bias
which may have any non-negative floating point value:
- 0 < data.size.bias < 1: data item sizes are expected to be biased toward the data.size.max value
- data.size.bias = 1 (the default): no effect, a uniform distribution of data item sizes between data.size.min and data.size.max is expected
- data.size.bias > 1: data item sizes are expected to be biased toward the data.size.min value
The biasing effect may be seen on the distribution chart below for bias values of 0.2, 0.5, 1, 2, and 5. The power of each object set is 10000. There is a higher probability density for larger data objects if the bias is less than 1, an almost uniform size distribution if the bias is equal to 1, and a higher probability density for smaller data objects if the bias is more than 1.
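One way to picture the bias parameter (a hedged sketch of the idea, not necessarily Mongoose's exact formula): map a uniform value u from (0, 1) through u^bias before scaling it into the [min, max] range.

```shell
# For a fixed u = 0.5, a bias below 1 pushes the size toward max,
# bias = 1 leaves it in the middle, and a bias above 1 pushes it toward min.
awk 'BEGIN {
  min = 1; max = 10; u = 0.5;
  n = split("0.2 1 5", biases, " ");
  for (i = 1; i <= n; i++)
    printf "bias=%s -> size=%.2f\n", biases[i], min + (max - min) * u ^ biases[i];
}'
```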
How to use Mongoose as a library
It is possible to use Mongoose as a library that generates requests to a data storage. The Mongoose programmatic API is in place to facilitate embedding. See the Mongoose Embedding Manual for more detail.
How to use storage mock
The storage mock may be used for checking the performance of Mongoose itself, which may be helpful to rule out or confirm a performance degradation on either the client or the storage side.
Currently supported load types:
create
read
update
append
delete
To run Mongoose in storage mock mode, execute the following command:
$ java -jar /mongoose.jar wsmock
The storage mock listens on only one socket by default. The count of listening "heads" may be specified explicitly with the storage.mock.headCount argument. For instance:
$ java -Dstorage.mock.headCount=2 -jar /mongoose.jar wsmock
As a result, Cinderella will start with 2 heads:
2015-04-23T15:39:19,831 I main Configuration parameters:
+--------------------------------+-------------------------+
| Key                            | Value                   |
+--------------------------------+-------------------------+
| api.name                       | s3                      |
| data.buffer.ring.seed          | 7a42d9c483244167        |
| data.buffer.ring.size          | 1MB                     |
| load.limit.count               | 0                       |
| load.limit.time                | 0s                      |
| scenario.name                  | single                  |
| storage.addrs                  | 127.0.0.1               |
| run.version                    | 0.7.0                   |
| run.id                         | 2015.04.23.15.39.18.661 |
| run.mode                       | cinderella              |
+--------------------------------+-------------------------+
2015-04-23T15:39:19,987 I Cinderella main Starting with 2 heads
2015-04-23T15:39:20,236 I Cinderella main Listening the ports 9020 .. 9021
When working with Cinderella (the storage mock), make sure it listens on the port Mongoose uses for data access.
If you want Cinderella to start with a predefined data item set, use the data.src.fpath configuration parameter to point to a file that contains the list of data items to pre-create:
$ java -Ditem.src.file=/items.csv -jar /mongoose.jar cinderella
How to write new data items filled with zero bytes
$ java -Ddata.content.fpath=<mongoose_root_dir>/conf/content/zerobytes [OTHER_ARGS] -jar <mongoose_root_dir>/mongoose.jar
How to write new data items filled with equal bytes
$ java -Ddata.content.fpath=<mongoose_root_dir>/conf/content/equalbytes [OTHER_ARGS] -jar <mongoose_root_dir>/mongoose.jar
How to write new data items filled with text from Rikki-Tikki-Tavi tale by R. Kipling
$ java -Ddata.content.fpath=<mongoose_root_dir>/conf/content/textexample -jar <mongoose_root_dir>/mongoose.jar
How to write new data items filled with custom data from an external file
$ java -Ddata.content.fpath=<path_to_user_file_with_data> -jar <mongoose_root_dir>/mongoose.jar
How to create a lot of buckets concurrently
$ java -Ditem.class=container -Dload.connections=1000 -jar <mongoose_root_dir>/mongoose.jar
How to read a lot of buckets concurrently
$ java -Ditem.class=container -Dload.connections=1000 -Dscenario.type.single.load=read -Ditem.src.file=<PATH_TO_ITEMS_CSV_OUTPUT_FILE> -jar <mongoose_root_dir>/mongoose.jar
How to delete a lot of buckets concurrently
$ java -Ditem.class=container -Dload.connections=1000 -Dscenario.type.single.load=delete -Ditem.src.file=<PATH_TO_ITEMS_CSV_OUTPUT_FILE> -jar <mongoose_root_dir>/mongoose.jar
How to perform a load over the Swift containers instead of buckets
$ java -Ditem.class=container -Dload.connections=1000 -Dapi.name=swift -Dscenario.type.single.load=<create|read|delete> [-Ditem.src.file=<PATH_TO_ITEMS_CSV_OUTPUT_FILE>] -jar <mongoose_root_dir>/mongoose.jar
How to create the objects in the specific subdirectory on the storage side
The example below performs the load (create/read/delete) job only in the specified directory "/<BUCKET_OR_CONTAINER>/a/bb/ccc" on the storage side:
$ java -Ditem.fsAccess=true -Ditem.prefix=a/bb/ccc -Dscenario.type.single.load=<create|read|delete> [-Ditem.src.file=<PATH_TO_ITEMS_CSV_OUTPUT_FILE>] -jar <mongoose_root_dir>/mongoose.jar

How to write N files to the specified directory
$ java -Ditem.class=file -Ditem.prefix=/var/dstdir -Dload.limit.count=<N> [-Ddata.size=123KB] -jar <mongoose_root_dir>/mongoose.jar
The functionality is available since v1.2.0
It is also possible to read, update, append, and delete the files.
The files created are named randomly.
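After the run, a quick sanity check is to count the files in the destination directory and compare against the configured load.limit.count. A minimal stand-in sketch (using a throwaway temporary directory with 3 empty files instead of a real /var/dstdir run):

```shell
# Stand-in demonstration (not Mongoose code): the destination directory
# should contain as many files as -Dload.limit.count=<N> requested.
dstdir=$(mktemp -d)
touch "$dstdir/a" "$dstdir/b" "$dstdir/c"   # stand-ins for 3 written files
count=$(ls -1 "$dstdir" | wc -l)
echo "$count"                               # prints 3
rm -r "$dstdir"
```

For a real run, point the check at /var/dstdir instead of the temporary directory.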
How to create N subdirectories into the specified directory
$ java -Ditem.class=directory -Ditem.prefix=/var/dstdir -Dload.limit.count=<N> -jar <mongoose_root_dir>/mongoose.jar
The functionality is available since v1.2.0
It is also possible to read and delete the directories.
Specifying the size has no effect when working with directories.
How to add custom HTTP headers to the requests generated
$ java -Dhttp.customHeaders.<HEADER_NAME>=<HEADER_VALUE> -jar <mongoose_root_dir>/mongoose.jar
The functionality is available since v1.2.0
How to disable the console output coloring
Open the file conf/logging.json in a text editor and go to line ~45.
In the "pattern" attribute value, remove the leading "%highlight{" and the trailing "}" characters.
The functionality is available since v1.2.0
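For example, if the "pattern" attribute value looks like the illustrative line below (the exact conversion pattern in your conf/logging.json will differ; only the %highlight{...} wrapper matters):

```json
"pattern": "%highlight{%d{ISO8601} %p %t %m%n}"
```

then after the edit it should read:

```json
"pattern": "%d{ISO8601} %p %t %m%n"
```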
How to write the items with names in the sequential ascending order
$ java -Ditem.naming=asc -jar <mongoose_root_dir>/mongoose.jar
The basic functionality is available since v1.2.0
How to write the items with names in the sequential descending order
$ java -Ditem.naming=desc -jar <mongoose_root_dir>/mongoose.jar
The basic functionality is available since v1.2.0
How to write the items with decimal names starting from 1000000 to 9999999
$ java -Ditem.naming.type=asc -Ditem.naming.length=7 -Ditem.naming.radix=10 -Ditem.naming.offset=1000000 -Dload.limit.count=8999999 -jar <mongoose_root_dir>/mongoose.jar
The functionality is available since v1.3.0
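As a rough illustration (not Mongoose code) of what the resulting names look like: 7-character decimal names generated in the ascending order from the offset 1000000 are consecutive zero-padded numbers:

```shell
# Rough illustration: the first few 7-digit decimal names produced
# in the ascending order from the offset 1000000.
first=$(printf '%07d' 1000000)
for off in 1000000 1000001 1000002; do
  printf '%07d\n' "$off"        # prints 1000000, 1000001, 1000002
done
```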
How to write the items with names having a prefix and a binary random number
$ java -Ditem.naming.type=desc -Ditem.naming.length=70 -Ditem.naming.prefix=yohoho -Ditem.naming.radix=2 -jar <mongoose_root_dir>/mongoose.jar
The functionality is available since v1.3.0
How to generate custom HTTP headers with dynamic values
$ java -Dhttp.customHeaders.myOwnHeaderName=MyOwnHeaderValue\ %d[0-1000]\ %f{###.##}[-0.1-0.01]\ %D{yyyy-MM-dd'T'HH:mm:ssZ}[1970/01/01-2016/01/01] -jar mongoose-1.4.0/mongoose.jar
The functionality is available since v1.3.0 without the definition of a format pattern, since v1.4.0 with the definition of a format pattern.
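The substitution itself is performed by Mongoose; as a rough shell emulation of the assumed semantics (not Mongoose code), the %d[0-1000] pattern yields a random integer within the inclusive range:

```shell
# Emulate %d[0-1000]: a random integer in 0..1000 (bash's RANDOM).
v=$((RANDOM % 1001))
if [ "$v" -ge 0 ] && [ "$v" -le 1000 ]; then
  echo "in range"
fi
```

The %f and %D patterns are assumed to work analogously for floating-point numbers and dates.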
How to write the files using the variable path
$ java -Ditem.class=file -Ditem.prefix=/var/%p{D;W} -Dload.limit.count=<N> [-Ddata.size=123KB] -jar <mongoose_root_dir>/mongoose.jar
It may be necessary to escape the semicolon (;) with a backslash.
D is the maximum directory hierarchy depth (a positive integer)
W is the "width", i.e. the maximum number of subdirectories on each level (a positive integer)
The functionality is available since v1.4.0
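As a back-of-the-envelope for choosing D and W: a full W-ary tree of depth D contains W + W² + … + W^D directories (whether Mongoose actually fills every branch is an assumption here, not stated in the docs). For D=2, W=3:

```shell
# Directories in a full W-ary tree of depth D: W + W^2 + ... + W^D.
D=2; W=3
total=0
pow=1
i=1
while [ "$i" -le "$D" ]; do
  pow=$((pow * W))
  total=$((total + pow))
  i=$((i + 1))
done
echo "$total"   # 3 + 9 = 12
```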