Commit

Replaced old Query/Scan/Recordset API documentation with the newer, more elegant one
khaf committed Jun 11, 2015
1 parent ee0dc4e commit 8caf00e
Showing 4 changed files with 56 additions and 71 deletions.
2 changes: 1 addition & 1 deletion docs/aerospike.md
@@ -30,7 +30,7 @@ Policies

### Policies

Policies contain the allowed values for operation conditions for each of the [client](client.md) operations.

For details, see [Policies Object](policies.md)

77 changes: 34 additions & 43 deletions docs/client.md
@@ -13,14 +13,15 @@ To customize a Client with a ClientPolicy:
```go
clientPolicy := as.NewClientPolicy()
clientPolicy.ConnectionQueueSize = 64
clientPolicy.LimitConnectionsToQueueSize = true
clientPolicy.Timeout = 50 * time.Millisecond

client, err := as.NewClientWithPolicy(clientPolicy, "127.0.0.1", 3000)
```

*Notice*: Examples in this section are only intended to illuminate simple use cases without too much distraction. Always follow good coding practices in production, and always check for errors.

With a new client, you can use any of the methods specified below. You need only *ONE* client object. This object is goroutine-friendly, and pools its resources internally.
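As an illustration of that single-client pattern, the sketch below shares one client across several goroutines. The key and bin names are made up, and a reachable cluster on localhost is assumed:

```go
package main

import (
	"fmt"
	"sync"

	as "github.com/aerospike/aerospike-client-go"
)

func main() {
	// create ONE client and share it everywhere
	client, err := as.NewClient("127.0.0.1", 3000)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			key, err := as.NewKey("test", "demo", i)
			if err != nil {
				fmt.Println(err)
				return
			}
			// the client pools connections internally, so no extra
			// synchronization is needed around these calls
			if err := client.Put(nil, key, as.BinMap{"n": i}); err != nil {
				fmt.Println(err)
			}
		}(i)
	}
	wg.Wait()
}
```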

- [Methods](#methods)
- [Add()](#add)
@@ -63,7 +64,7 @@ add()
### Add(policy *WritePolicy, key *Key, bins BinMap) error

Using the provided key, adds values to the mentioned bins.
Bin value types should be of type `integer` for the command to have any effect.
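For example, to increment a counter bin (a sketch only; the key and bin names are illustrative, and a connected `client` is assumed):

```go
key, err := as.NewKey("test", "demo", "user1")
if err != nil {
	panic(err)
}

// atomically add 1 to the integer bin "visits";
// the bin is created if it does not already exist
err = client.Add(nil, key, as.BinMap{"visits": 1})
if err != nil {
	panic(err)
}
```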

Parameters:

@@ -94,7 +95,7 @@ append()
### Append(policy *WritePolicy, key *Key, bins BinMap) error

Using the provided key, appends provided values to the mentioned bins.
Bin value types should be of type `string` or `[]byte` for the command to have any effect.
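A minimal sketch (the key and bin names are illustrative, and a connected `client` is assumed):

```go
key, err := as.NewKey("test", "demo", "user1")
if err != nil {
	panic(err)
}

// append to the string bin "log"; bins of other types are unaffected
err = client.Append(nil, key, as.BinMap{"log": " -- more text"})
if err != nil {
	panic(err)
}
```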

Parameters:

@@ -249,7 +250,7 @@ getheader()

### GetHeader(policy *BasePolicy, key *Key) (*Record, error)

Using the key provided, reads *ONLY* record metadata from the database cluster. Record metadata includes the record's generation and its Expiration (TTL from the moment of retrieval, in seconds).

```record.Bins``` will always be empty in resulting ```record```.
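For instance (assuming the `client` and `key` from the earlier examples):

```go
rec, err := client.GetHeader(nil, key)
if err != nil {
	panic(err)
}

// rec.Bins is empty; only the metadata fields are populated
fmt.Println("generation:", rec.Generation)
fmt.Println("TTL in seconds:", rec.Expiration)
```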

@@ -342,7 +343,7 @@ prepend()
### Prepend(policy *WritePolicy, key *Key, bins BinMap) error

Using the provided key, prepends provided values to the mentioned bins.
Bin value types should be of type `string` or `[]byte` for the command to have any effect.

Parameters:

@@ -480,23 +481,13 @@ Example:
// scan the whole cluster
recordset, err := client.ScanAll(nil, "test", "demo")

for res := range recordset.Results() {
	if res.Err != nil {
		// handle error; or close the recordset and break
		continue
	}

	// process record
	fmt.Println(res.Record)
}
```

@@ -538,15 +529,13 @@ Example:

```go
idxTask, err := client.CreateIndex(nil, "test", "demo", "indexName", "binName", NUMERIC)
panicOnErr(err)

// wait until index is created.
// OnComplete() channel will return nil on success and an error on errors
err = <-idxTask.OnComplete()
if err != nil {
	panic(err)
}
```

@@ -609,9 +598,11 @@ Example:
end`

regTask, err := client.RegisterUDF(nil, []byte(udfBody), "udf1.lua", LUA)
panicOnErr(err)

// wait until UDF is created
err = <-regTask.OnComplete()
if err != nil {
panic(err)
}
```
@@ -640,9 +631,11 @@ Example:

```go
regTask, err := client.RegisterUDFFromFile(nil, "/path/udf.lua", "udf1.lua", LUA)
panicOnErr(err)

// wait until UDF is created
err = <-regTask.OnComplete()
if err != nil {
panic(err)
}
```
@@ -702,9 +695,11 @@ Considering the UDF registered in RegisterUDF example above:
```go
statement := NewStatement("namespace", "set")
exTask, err := client.ExecuteUDF(nil, statement, "udf1", "testFunc1")
panicOnErr(err)

// wait until UDF is run on all records
err = <-exTask.OnComplete()
if err != nil {
panic(err)
}
```
@@ -739,16 +734,12 @@ Example:
recordset, err := client.Query(nil, stm)

// consume recordset and check errors
for res := range recordset.Results() {
	if res.Err != nil {
		// handle error, or close the recordset and break
		continue
	}

	// process record
	fmt.Println(res.Record)
}
```
46 changes: 19 additions & 27 deletions docs/datamodel.md
@@ -34,7 +34,6 @@ Fields are:
- `Bins` — Bins and their values are represented as a BinMap (map[string]interface{})
- `Key` — Associated Key pointer
- `Node` — Database node from which the record was retrieved.
- `Expiration` — TimeToLive of the record, in seconds: how many seconds remain before the data is erased if it is not updated.
- `Generation` — Record generation (number of times the record has been updated).
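The fields above can be read directly off a retrieved record (a sketch, assuming a `client` and `key` already exist):

```go
rec, err := client.Get(nil, key)
if err != nil {
	panic(err)
}

fmt.Println(rec.Bins)       // all bins and their values as a BinMap
fmt.Println(rec.Generation) // number of times the record has been updated
fmt.Println(rec.Expiration) // seconds until the record expires
fmt.Println(rec.Node)       // node the record was retrieved from
```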

@@ -72,7 +71,9 @@ Simple example of a Read, Change, Update operation:
panicOnError(err)

// change data
v := rec.Bins["bin1"].(int)
v += 1
rec.Bins["bin1"] = v

// update
err = client.Put(nil, key, rec.Bins)
@@ -100,23 +101,17 @@ Recordsets can be closed at any time to cancel the operation.
// scan the whole cluster
recordset, err := client.ScanAll(nil, "test", "demo")

for res := range recordset.Results() {
	if res.Err != nil {
		// you may be able to find out on which node the error occurred
		if ne, ok := res.Err.(NodeError); ok {
			node := ne.Node
			// do something with node
			_ = node
		}
		continue
	}

	// process record
	fmt.Println(res.Record)
}
```

@@ -206,17 +201,14 @@ The following optional attributes can also be changed in the statement struct:
recordset, err := client.Query(nil, stm)

// consume recordset and check errors
for res := range recordset.Results() {
	if res.Err != nil {
		// handle error
		panic(res.Err)
	}

	// process record
	fmt.Println(res.Record)
}
```

2 changes: 2 additions & 0 deletions docs/performance.md
@@ -32,6 +32,8 @@ Please let us know if you can suggest an improvement anywhere in the library.

At its maximum of 256 connections per client, and with `proto-fd-max` set to 10000 in your server node configuration, you can safely run around 50 clients **per server node**; in practice, this approaches 150 high-performing clients. You can change this pool size in `ClientPolicy`, and then initialize your `Client` object using the `NewClientWithPolicy(policy *ClientPolicy, hostname string, port int)` initializer.

You can also guard against the number of new connections to each node using `ClientPolicy.LimitConnectionsToQueueSize = true`, so that if a connection is not available in the pool, the client will wait or time out instead of opening a new connection.
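Put together, a bounded connection pool might be configured like this (the numbers are illustrative, not recommendations):

```go
policy := as.NewClientPolicy()
policy.ConnectionQueueSize = 256
// block (or time out) instead of opening connections beyond the pool size
policy.LimitConnectionsToQueueSize = true
policy.Timeout = 50 * time.Millisecond

client, err := as.NewClientWithPolicy(policy, "127.0.0.1", 3000)
if err != nil {
	panic(err)
}
defer client.Close()
```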

2. **Client Buffer Pool**: The client library pools its buffers to reduce memory allocations. Considering that unbounded memory pools are bugs you haven't found yet, our pool implementation enforces two bounds on the pool:

2.1. Initial buffer sizes are big enough for most operations, so they won't need to increase (512 bytes by default)
Expand Down
