Adds manage multiple clusters recipe
Fix links in similarity search recipe
Westwooo committed Sep 5, 2024
1 parent 348144d commit e856a48
Showing 3 changed files with 178 additions and 12 deletions.
4 changes: 3 additions & 1 deletion docs/recipes.adoc
@@ -5,6 +5,8 @@
Welcome to the recipes section of the Couchbase Shell `cbsh` documentation.
Here you can find some examples of the more complicated tasks that can be performed using `cbsh`.

include::recipes/managing_multiple_clusters.adoc[]

include::recipes/similarity_search.adoc[]

include::recipes/useful-snippets.adoc[]
164 changes: 164 additions & 0 deletions docs/recipes/managing_multiple_clusters.adoc
@@ -0,0 +1,164 @@
== Managing multiple clusters

CBShell is a powerful tool for interacting with fleets made up of a mix of self-managed and Capella clusters.
Say we have the following four clusters registered with CBShell:

```
👤 Charlie 🏠 obligingfaronmoller in ☁️ default._default._default
> cb-env managed
╭───┬────────┬───────┬────────────┬───────────────┬──────────────────────┬─────────────────╮
│ # │ active │ tls   │ identifier │ username      │ capella_organization │ project         │
├───┼────────┼───────┼────────────┼───────────────┼──────────────────────┼─────────────────┤
│ 0 │ false  │ true  │ systemtest │ Administrator │ my-org               │ CBShell Testing │
│ 1 │ false  │ false │ localdev   │ Administrator │                      │                 │
│ 2 │ false  │ true  │ prod       │ Administrator │ my-org               │ CBShell Testing │
│ 3 │ true   │ true  │ ci         │ Administrator │ my-org               │ CBShell Testing │
╰───┴────────┴───────┴────────────┴───────────────┴──────────────────────┴─────────────────╯
```
There is one self-managed cluster (localdev) and three Capella clusters.
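The prompts in the rest of this recipe are run from the `localdev` cluster with `travel-sample` as the active bucket.
To follow along, the active cluster and bucket can be switched with the https://couchbase.sh/docs/#_cb_env_cluster[cb-env] command (a short sketch, assuming the standard `cb-env` subcommands):
```
> cb-env cluster localdev
> cb-env bucket travel-sample
```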
Imagine that we want to perform some general health checks on this set of clusters.
A good starting point is the https://couchbase.sh/docs/#_listing_nodes[nodes] command with the https://couchbase.sh/docs/#_working_with_clusters[clusters] flag.
[options="nowrap"]
```
👤 Charlie 🏠 localdev in 🗄 travel-sample._default._default
> nodes --clusters *
╭───┬────────────┬────────────────────────────────────────────────────────────┬─────────┬──────────────────────────┬───────────────────────┬───────────────────────────┬──────────────┬─────────────┬─────╮
│ # │ cluster    │ hostname                                                   │ status  │ services                 │ version               │ os                        │ memory_total │ memory_free │ ... │
├───┼────────────┼────────────────────────────────────────────────────────────┼─────────┼──────────────────────────┼───────────────────────┼───────────────────────────┼──────────────┼─────────────┼─────┤
│ 0 │ localdev   │ 192.168.107.128:8091                                       │ healthy │ search,indexing,kv,query │ 7.6.2-3505-enterprise │ aarch64-unknown-linux-gnu │ 6201221120   │ 2841657344  │ ... │
│ 1 │ localdev   │ 192.168.107.129:8091                                       │ healthy │ search,indexing,kv,query │ 7.6.2-3505-enterprise │ aarch64-unknown-linux-gnu │ 6201221120   │ 2842959872  │ ... │
│ 2 │ localdev   │ 192.168.107.130:8091                                       │ healthy │ search,indexing,kv,query │ 7.6.2-3505-enterprise │ aarch64-unknown-linux-gnu │ 6201221120   │ 2843160576  │ ... │
│ 3 │ prod       │ svc-dqi-node-001.lhb4l06lajhydwmk.cloud.couchbase.com:8091 │ healthy │ indexing,kv,query        │ 7.6.2-3721-enterprise │ x86_64-pc-linux-gnu       │ 16776548352  │ 15518982144 │ ... │
│ 4 │ prod       │ svc-dqi-node-002.lhb4l06lajhydwmk.cloud.couchbase.com:8091 │ healthy │ indexing,kv,query        │ 7.6.2-3721-enterprise │ x86_64-pc-linux-gnu       │ 16776548352  │ 15518420992 │ ... │
│ 5 │ prod       │ svc-dqi-node-003.lhb4l06lajhydwmk.cloud.couchbase.com:8091 │ healthy │ indexing,kv,query        │ 7.6.2-3721-enterprise │ x86_64-pc-linux-gnu       │ 16776544256  │ 15501099008 │ ... │
│ 6 │ ci         │ svc-dqi-node-001.fwplhqyopu9pgolq.cloud.couchbase.com:8091 │ healthy │ indexing,kv,query        │ 7.6.2-3721-enterprise │ x86_64-pc-linux-gnu       │ 16277504000  │ 14538944512 │ ... │
│ 7 │ ci         │ svc-dqi-node-002.fwplhqyopu9pgolq.cloud.couchbase.com:8091 │ healthy │ indexing,kv,query        │ 7.6.2-3721-enterprise │ x86_64-pc-linux-gnu       │ 16277504000  │ 14559510528 │ ... │
│ 8 │ ci         │ svc-dqi-node-003.fwplhqyopu9pgolq.cloud.couchbase.com:8091 │ healthy │ indexing,kv,query        │ 7.6.2-3721-enterprise │ x86_64-pc-linux-gnu       │ 16277504000  │ 14565412864 │ ... │
│ 9 │ systemtest │ svc-dqi-node-001.lyl8kbhzdovyqhv.cloud.couchbase.com:8091  │ healthy │ indexing,kv,query        │ 7.6.2-3721-enterprise │ x86_64-pc-linux-gnu       │ 16766582784  │ 15491842048 │ ... │
╰───┴────────────┴────────────────────────────────────────────────────────────┴─────────┴──────────────────────────┴───────────────────────┴───────────────────────────┴──────────────┴─────────────┴─────╯
```
This gives us plenty of information, but sometimes it can be a bit difficult to read.
We can make things much easier with some simple reformatting.
To focus on the free memory that each cluster has, we can https://www.nushell.sh/commands/docs/select.html[select] just the relevant columns:
```
👤 Charlie 🏠 localdev in 🗄 travel-sample._default._default
> nodes --clusters * | select cluster memory_free
╭───┬────────────┬─────────────╮
│ # │ cluster    │ memory_free │
├───┼────────────┼─────────────┤
│ 0 │ localdev   │ 2841657344  │
│ 1 │ localdev   │ 2842959872  │
│ 2 │ localdev   │ 2843160576  │
│ 3 │ prod       │ 15518982144 │
│ 4 │ prod       │ 15518420992 │
│ 5 │ prod       │ 15501099008 │
│ 6 │ ci         │ 14538944512 │
│ 7 │ ci         │ 14559510528 │
│ 8 │ ci         │ 14565412864 │
│ 9 │ systemtest │ 15491842048 │
╰───┴────────────┴─────────────╯
```
We could then convert the `memory_free` column from a raw byte count into a human-readable file size as follows:
[options="nowrap"]
```
👤 Charlie 🏠 localdev in 🗄 travel-sample._default._default
> nodes --clusters * | each {|n| $n | update memory_free ($n.memory_free * 1B)} | select cluster memory_free
╭───┬────────────┬─────────────╮
│ # │ cluster    │ memory_free │
├───┼────────────┼─────────────┤
│ 0 │ localdev   │ 2.6 GiB     │
│ 1 │ localdev   │ 2.6 GiB     │
│ 2 │ localdev   │ 2.6 GiB     │
│ 3 │ prod       │ 14.5 GiB    │
│ 4 │ prod       │ 14.5 GiB    │
│ 5 │ prod       │ 14.4 GiB    │
│ 6 │ ci         │ 13.5 GiB    │
│ 7 │ ci         │ 13.6 GiB    │
│ 8 │ ci         │ 13.6 GiB    │
│ 9 │ systemtest │ 14.4 GiB    │
╰───┴────────────┴─────────────╯
```
We do this by iterating over each node and https://www.nushell.sh/commands/docs/update.html[updating] the value in the `memory_free` column, multiplying the current value by `1B` so that it becomes nushell's inbuilt https://www.nushell.sh/book/types_of_data.html#file-sizes[File Size] datatype and is displayed in human-readable units.
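Depending on the version of nushell bundled with `cbsh`, the same conversion can also be written with https://www.nushell.sh/commands/docs/into_filesize.html[into filesize] instead of multiplying by `1B`; a quick sketch:
[options="nowrap"]
```
> nodes --clusters * | update memory_free {|n| $n.memory_free | into filesize} | select cluster memory_free
```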
While it is somewhat useful to know the free memory on each cluster, it would be more useful for our health check to know each cluster's memory utilization.
Unfortunately the info returned by `nodes` does not include memory utilization directly; however, there are two columns we can use to calculate it: `memory_free` and `memory_total`.
[options="nowrap"]
```
👤 Charlie 🏠 localdev in 🗄 travel-sample._default._default
> nodes --clusters * | each {|n| $n | insert utilization ((($n.memory_total - $n.memory_free) / $n.memory_total) * 100 ) } | select cluster utilization | sort-by utilization --reverse
╭───┬────────────┬─────────────╮
│ # │ cluster    │ utilization │
├───┼────────────┼─────────────┤
│ 0 │ localdev   │ 54.32       │
│ 1 │ localdev   │ 54.32       │
│ 2 │ localdev   │ 54.28       │
│ 3 │ ci         │ 10.71       │
│ 4 │ ci         │ 10.60       │
│ 5 │ ci         │ 10.50       │
│ 6 │ prod       │ 7.61        │
│ 7 │ systemtest │ 7.59        │
│ 8 │ prod       │ 7.52        │
│ 9 │ prod       │ 7.49        │
╰───┴────────────┴─────────────╯
```
For https://www.nushell.sh/commands/docs/each.html[each] of the nodes we add a new column called `utilization`, calculating the percentage of memory used with:
```
(($n.memory_total - $n.memory_free) / $n.memory_total) * 100
```
Finally we https://www.nushell.sh/commands/docs/sort-by.html[sort-by] utilization in descending order.
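The table above reports utilization per node. If we would rather see a single figure per cluster, one option is to group the nodes by cluster and average the values. The snippet below is a sketch built from standard nushell commands (`group-by`, `transpose` and `math avg`); the `avg_utilization` column name is our own choice:
[options="nowrap"]
```
> nodes --clusters * | each {|n| $n | insert utilization ((($n.memory_total - $n.memory_free) / $n.memory_total) * 100)} | group-by cluster | transpose cluster nodes | each {|r| {cluster: $r.cluster, avg_utilization: ($r.nodes | get utilization | math avg)}} | sort-by avg_utilization --reverse
```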
The results of such resource checks can be useful when deciding where to create new resources on our clusters.
For example, imagine that we want to create a 1GB bucket on any one of our clusters.
First, we could simply try to create it on the active cluster:
[options="nowrap"]
```
👤 Charlie 🏠 localdev in 🗄 travel-sample._default._default
> buckets create BigBucket 1000
Error: × Unexpected status code
   ╭─[entry #8:1:1]
 1 │ buckets create BigBucket 1000
   ·                ───────┬──────
   ·                       ╰──
   ╰────
help: Unexpected status code: 400, body: {"errors":{"ramQuota":"RAM quota specified is too large to be provisioned into this cluster."},"summaries":{"ramSummary":
{"total":1610612736,"otherBuckets":1610612736,"nodesCount":3,"perNodeMegs":1000,"thisAlloc":3145728000,"thisUsed":0,"free":-3145728000},"hddSummary":
{"total":183855980544,"otherData":27159966105,"otherBuckets":418430976,"thisUsed":0,"free":156277583463}}}
```
This failed since the https://couchbase.sh/docs/#_cb_env_cluster[active cluster] doesn't have enough memory to support such a large bucket.
We can use `nodes` to find the cluster with the most free memory and create the bucket there:
[options="nowrap"]
```
👤 Charlie 🏠 localdev in 🗄 travel-sample._default._default
> nodes --clusters * | sort-by memory_free --reverse | first | get cluster | buckets create BigBucket 1000 --clusters $in
```
Here we fetch the nodes for all of the registered clusters, sort them by free memory in descending order, take the first node and get the name of the cluster it belongs to.
We then pipe that cluster name into the `buckets create` command, using `$in` to access the piped value; since no error is returned, the bucket was created successfully.
To double-check and see where our bucket was created we can run:
[options="nowrap"]
```
👤 Charlie 🏠 localdev in 🗄 travel-sample._default._default
> buckets --clusters * | where name == "BigBucket"
╭───┬─────────┬───────────┬───────────┬──────────┬──────────────────────┬────────────┬───────────────┬───────┬────────────╮
│ # │ cluster │ name      │ type      │ replicas │ min_durability_level │ ram_quota  │ flush_enabled │ cloud │ max_expiry │
├───┼─────────┼───────────┼───────────┼──────────┼──────────────────────┼────────────┼───────────────┼───────┼────────────┤
│ 0 │ prod    │ BigBucket │ couchbase │ 1        │ none                 │ 1000.0 MiB │ false         │ true  │ 0          │
╰───┴─────────┴───────────┴───────────┴──────────┴──────────────────────┴────────────┴───────────────┴───────┴────────────╯
```
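If the bucket was only created as an experiment, it can be dropped from the same cluster without switching the active cluster. Assuming `buckets drop` accepts the `--clusters` flag in the same way as the other bucket subcommands, something like the following should work:
```
> buckets drop BigBucket --clusters prod
```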