Merge remote-tracking branch 'upstream/master'
# Conflicts:
#	README.md
niccottrell committed Jun 27, 2018
2 parents c5203b6 + 57b4fa4 commit 87ea0fb
Showing 4 changed files with 54 additions and 47 deletions.
83 changes: 45 additions & 38 deletions README.md
@@ -1,24 +1,26 @@
***NOTE***
Recently upgraded to MongoDB 3.6.x Driver.
Recently upgraded to [MongoDB 3.6.x Java Driver](http://mongodb.github.io/mongo-java-driver/3.6/).

Introduction
------------
This is open source, immature, and undoubtedly buggy code - if you find bugs fix them and send me a pull request or let me know ([email protected])
This tool is to make it easy to answer many of the questions people have during a MongoDB 'Proof of Concept'
Disclaimer: POCDriver is NOT in any way an official MongoDB product or project.

This is open source, immature, and undoubtedly buggy code. If you find bugs please fix them and [send a pull request](https://github.com/johnlpage/POCDriver/pulls) or report in the [GitHub issue queue](https://github.com/johnlpage/POCDriver/issues).

* How fast will it be on my hardware.
* How could it handle my workload.
* How does MongoDB scale.
* How does the High Availability Work / how do I handle a failover.
This tool is designed to make it easy to answer many of the questions people have during a MongoDB 'Proof of Concept':

* How fast will MongoDB be on my hardware?
* How could MongoDB handle my workload?
* How does MongoDB scale?
* How does High Availability work (aka How do I handle a failover)?

POCDriver a single JAR file which allows you to specify and run a number of different workloads easily from the command line. It is intended to show how MongoDB should be used for various tasks and avoid's testing your own client code versus MongoDB's capabilities. POCDriver is an alternative to using generic tools like YCSB. Unlike these tools POCDriver:
* Only works with MongoDB - showing what MongoDB can do rather than comparing lowest common denominator between systems that aren't directly comparable.
POCDriver is a single JAR file which allows you to specify and run a number of different workloads easily from the command line. It is intended to show how MongoDB should be used for various tasks and avoids testing your own client code versus MongoDB's capabilities.

* Includes much more sophisticated workloads - using the appropriate MongoDB feature.
POCDriver is an alternative to using generic tools like YCSB. Unlike these tools, POCDriver:

This is NOT in any way an official MongoDB product or project.
* Only works with MongoDB. This shows what MongoDB can do rather than comparing lowest common denominator between systems that aren't directly comparable.

* Includes much more sophisticated workloads using the appropriate MongoDB feature.

Build
-----
@@ -36,50 +38,55 @@ and you will find `POCDriver.jar` in `target` folder.
Basic usage
-----------

If run with no arguments POCDriver will insert records onto a mongoDB running on localhost as quickly as possible.
There will be only the _id index and records will have 10 fields.
If run with no arguments, POCDriver will try to insert documents into a MongoDB deployment running on localhost as quickly as possible.

There will be only the `_id` index and documents will have 10 fields.

Use `--print` to see what the records look like.
Use `--print` to see what the documents look like.
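For example, the two simplest runs are the bare JAR (which starts the default insert load) and the same command with the preview flag:

```
$ java -jar POCDriver.jar
$ java -jar POCDriver.jar --print
```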

Client options
-------------
```
-h show help
-p show what the records look like in the test
-p show what the documents look like in the test
-t how many threads to run on the client and thus how many connections.
-s what threshold to consider slow when reporting latency percentages in ms
-o output stats to a file rather than the screen
-n use a namespace 'schema.collection' of your choice
-d how long to run the loader for.
-q *try* to limit rate to specified ops per second.
-c a mongodb connection string, you can include write concerns and thread pool size info in this
-c a MongoDB connection string (note: you can include write concerns and thread pool size info in this)
```
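For example, a run that combines several of these client options might look like the following sketch (the thread count, duration, slow threshold and connection string are illustrative values, not recommendations):

```
$ java -jar POCDriver.jar -t 8 -d 60 -s 100 -c "mongodb://localhost:27017"
```

This runs 8 client threads for 60 seconds against a local deployment and reports the percentage of operations completing in under 100 ms.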


Basic operations.
-----------------
```
-k Fetch a single record using it's primary key
-r fetch a range of 10 records
-u increment an integer field in a random record
-i add a new record
-k Fetch a single document using its primary key
-r fetch a range of 10 documents
-u increment an integer field in a random document
-i add a new document
```
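These flags take ratio values (as in the example runs further down), so a mixed workload can be sketched by weighting them; the numbers below are arbitrary illustrations:

```
$ java -jar POCDriver.jar -i 25 -k 50 -u 20 -r 5
```

This weights inserts, key queries, updates and range queries roughly 25:50:20:5.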

Complex operations
------------------
```
-g update a random value in the array (must have arrays enabled)
-v perform sets of operations on a stack so -v iuu will insert then update that record twice -v kui will find a record then update it then insert a new one. the last record is placed on a stack and p pops it off so
-v kiippu Finds a record, adds two, then pops them off and updates the original one found.
```
-v perform sets of operations on a stack:
-v iuu will insert then update that document twice
-v kui will find a document, update it, then insert a new document
The last document is placed on a stack and p pops it off so:
-v kiippu Finds a document, adds two, then pops them off and updates the original document found.
```

Note: If you specify a workflow via the `-v` flag, the basic operations above will be ignored and the operations listed will be performed instead.
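For example, the stack workflow described above could be started like this:

```
$ java -jar POCDriver.jar -v kiippu
```

Each pass finds a document, inserts two more, pops both off the stack, and then updates the document that was originally found.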

Control options
---------------
```
-m when updating a record use findAndModify to fetch a copy of the new incremented value
-j when updating or querying limit the set to the last N% of records added
-m when updating a document use findAndModify to fetch a copy of the new incremented value
-j when updating or querying limit the set to the last N% of documents added
-b what size to use for operation batches.
--rangedocs number of documents to fetch for range queries (default 10)
--updatefields number of fields to update (default 1)
@@ -88,15 +95,15 @@ Control options
Collection options
-------------------
```
-x How many fields to index aside from _id
-w Do not shard this collection on a sharded system
-x how many fields to index aside from _id
-w do not shard this collection on a sharded system
-e empty this collection at the start of the run.
```
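As an illustration, a run that empties the collection at startup and maintains two secondary indexes might be launched like this (the index count is an arbitrary example):

```
$ java -jar POCDriver.jar -e -x 2
```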
Record shape options
Document shape options
--------------------
```
-a add an X by Y array of integers to each record using -a X:Y
-f aside from arrays and _id add f fields to the record, after the first 3 every third is an integer, every fifth a date, the rest are text.
-a add an X by Y array of integers to each document using -a X:Y
-f aside from arrays and _id add f fields to the document, after the first 3 every third is an integer, every fifth a date, the rest are text.
-l how many characters to have in the text fields
--depth The depth of the document to create.
```
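Combining these shape flags, a sketch that just previews a larger document (the field counts and sizes below are arbitrary) could be:

```
$ java -jar POCDriver.jar -p -f 20 -a 5:10 -l 64 --depth 2
```

This prints one sample document with 20 top-level fields, a 5 by 10 integer array, 64-character text fields and a nesting depth of 2, then exits.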
@@ -105,7 +112,7 @@ Example
-------

```
MacPro:POCDriver jlp$ java -jar POCDriver.jar -p -a 3:4
$ java -jar POCDriver.jar -p -a 3:4
MongoDB Proof Of Concept - Load Generator
{
"_id": {
@@ -134,24 +141,24 @@ MongoDB Proof Of Concept - Load Generator
}
MacPro:POCDriver jlp$ java -jar POCDriver.jar -k 20 -i 10 -u 10 -b 20
$ java -jar POCDriver.jar -k 20 -i 10 -u 10 -b 20
MongoDB Proof Of Concept - Load Generator
------------------------
After 10 seconds, 20016 new records inserted - collection has 89733 in total
After 10 seconds, 20016 new documents inserted - collection has 89733 in total
1925 inserts per second since last report 99.75 % in under 50 milliseconds
3852 keyqueries per second since last report 99.99 % in under 50 milliseconds
1949 updates per second since last report 99.84 % in under 50 milliseconds
0 rangequeries per second since last report 100.00 % in under 50 milliseconds
------------------------
After 20 seconds, 53785 new records inserted - collection has 123502 in total
After 20 seconds, 53785 new documents inserted - collection has 123502 in total
3377 inserts per second since last report 99.91 % in under 50 milliseconds
6681 keyqueries per second since last report 99.99 % in under 50 milliseconds
3322 updates per second since last report 99.94 % in under 50 milliseconds
0 rangequeries per second since last report 100.00 % in under 50 milliseconds
------------------------
After 30 seconds, 69511 new records inserted - collection has 139228 in total
After 30 seconds, 69511 new documents inserted - collection has 139228 in total
1571 inserts per second since last report 99.92 % in under 50 milliseconds
3139 keyqueries per second since last report 99.99 % in under 50 milliseconds
1595 updates per second since last report 99.94 % in under 50 milliseconds
@@ -173,4 +180,4 @@ Requirements to Build
Troubleshooting
---------------

If you are running a mongod with `--auth` enabled, you must pass a user and password with read/write and replSetGetStatus privileges, e.g. `readWriteAnyDatabase` and `clusterMonitor` roles.
If you are running a mongod with `--auth` enabled, you must pass a user and password with read/write and replSetGetStatus privileges (e.g. `readWriteAnyDatabase` and `clusterMonitor` roles).
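A minimal sketch of such an invocation, assuming a user `pocuser` with password `pocpass` that has been granted those roles in the `admin` database (all three names are placeholders):

```
$ java -jar POCDriver.jar -c "mongodb://pocuser:pocpass@localhost:27017/?authSource=admin"
```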
2 changes: 1 addition & 1 deletion src/main/java/com/johnlpage/pocdriver/POCDriver.java
@@ -94,7 +94,7 @@ private static void printTestDocument(final POCTestOptions testOpts) {
new DocumentCodec().encode(binaryWriter, tr.internalDoc, EncoderContext.builder().build());
int length = binaryWriter.getBsonOutput().getSize();

System.out.println(String.format("Records are %.2f KB each as BSON", (float) length / 1024));
System.out.println(String.format("Documents are %.2f KB each as BSON", (float) length / 1024));
}

}
12 changes: 6 additions & 6 deletions src/main/java/com/johnlpage/pocdriver/POCTestOptions.java
@@ -70,23 +70,23 @@ public class POCTestOptions {

Options cliopt;
cliopt = new Options();
cliopt.addOption("a","arrays",true,"Shape of any arrays in new sample records x:y so -a 12:60 adds an array of 12 length 60 arrays of integers");
cliopt.addOption("a","arrays",true,"Shape of any arrays in new sample documents x:y so -a 12:60 adds an array of 12 length 60 arrays of integers");
cliopt.addOption("b","bulksize",true,"Bulk op size (default 512)");
cliopt.addOption("c","host",true,"Mongodb connection details (default 'mongodb://localhost:27017' )");
cliopt.addOption("c","host",true,"MongoDB connection details (default 'mongodb://localhost:27017' )");
cliopt.addOption("d","duration",true,"Test duration in seconds, default 18,000");
cliopt.addOption("e","empty",false,"Remove data from collection on startup");
cliopt.addOption("f","numfields",true,"Number of top level fields in test records (default 10)");
cliopt.addOption("f","numfields",true,"Number of top level fields in test documents (default 10)");
cliopt.addOption(null,"depth",true,"The depth of the document created (default 0)");
cliopt.addOption("g","arrayupdates",true,"Ratio of array increment ops requires option 'a' (default 0)");
cliopt.addOption("h","help",false,"Show Help");
cliopt.addOption("i","inserts",true,"Ratio of insert operations (default 100)");
cliopt.addOption("j","workingset",true,"Percentage of database to be the working set (default 100)");
cliopt.addOption("k","keyqueries",true,"Ratio of key query operations (default 0)");
cliopt.addOption("l","textfieldsize",true,"Length of text fields in bytes (default 30)");
cliopt.addOption("m","findandmodify",false,"Use findAndModify instead of update and retrieve record (with -u or -v only)");
cliopt.addOption("m","findandmodify",false,"Use findAndModify instead of update and retrieve document (with -u or -v only)");
cliopt.addOption("n","namespace",true,"Namespace to use , for example myDatabase.myCollection");
cliopt.addOption("o","logfile",true,"Output stats to <file> ");
cliopt.addOption("p","print",false,"Print out a sample record according to the other parameters then quit");
cliopt.addOption("p","print",false,"Print out a sample document according to the other parameters then quit");
cliopt.addOption("q","opsPerSecond",true,"Try to rate limit the total ops/s to the specified amount");
cliopt.addOption("r","rangequeries",true,"Ratio of range query operations (default 0)");
cliopt.addOption("s","slowthreshold",true,"Slow operation threshold in ms(default 50)");
@@ -99,7 +99,7 @@ public class POCTestOptions {
cliopt.addOption("z","zipfian",true,"Enable zipfian distribution over X number of documents (default 0)");
cliopt.addOption(null,"threadIdStart",true,"Start 'workerId' for each thread. 'w' value in _id. (default 0)");
cliopt.addOption(null,"fulltext",false,"Create fulltext index (default false)");
cliopt.addOption(null,"binary",true,"add a binary blob of size KB");
cliopt.addOption(null,"binary",true,"Add a binary blob of size KB");
cliopt.addOption(null,"rangedocs",true,"Number of documents to fetch for range queries (default 10)");
cliopt.addOption(null,"updatefields",true,"Number of fields to update (default 1)");
cliopt.addOption(null,"projectfields",true,"Number of fields to project in finds (default 0, which is no projection)");
4 changes: 2 additions & 2 deletions src/main/java/com/johnlpage/pocdriver/POCTestReporter.java
@@ -54,7 +54,7 @@ private void logData() {
testOpts.numShards = (int) shards.count();
}
Date todaysdate = new Date();
System.out.format("After %d seconds (%s), %,d new records inserted - collection has %,d in total \n",
System.out.format("After %d seconds (%s), %,d new documents inserted - collection has %,d in total \n",
testResults.GetSecondsElapsed(), DF_TIME.format(todaysdate), insertsDone, testResults.initialCount + insertsDone);

if (outfile != null) {
@@ -116,7 +116,7 @@ public void finalReport() {
Long secondsElapsed = testResults.GetSecondsElapsed();

System.out.println("------------------------");
System.out.format("After %d seconds, %d new records inserted - collection has %d in total \n",
System.out.format("After %d seconds, %d new documents inserted - collection has %d in total \n",
secondsElapsed, insertsDone, testResults.initialCount + insertsDone);

String[] opTypes = POCTestResults.opTypes;
