add schedulerName (#2294)
abby-cyber authored Oct 12, 2023
1 parent 950ab90 commit 0ee1c06
Showing 3 changed files with 23 additions and 12 deletions.
30 changes: 19 additions & 11 deletions .github/workflows/deploy.yml
@@ -37,17 +37,25 @@ jobs:
           git checkout .
           git checkout gh-pages
       - name: Modify versions.json
-        run: |
-          import json
-          new_content = {'version': '3.5.0-sc', 'title': '3.5.0-sc', 'aliases': []}
-          with open('./versions.json', 'r') as f:
-              data = json.load(f)
-          for i, item in enumerate(data[:]):
-              if item.get('version') == new_content['version'] and item.get('title') == new_content['title']:
-                  del data[i]
-                  break
-          with open('./versions.json', 'w') as outfile:
-              json.dump(data, outfile, indent=2)
+        run: |
+          import json
+          new_content = {'version': '3.5.0-sc', 'title': '3.5.0-sc', 'aliases': []}
+          try:
+              with open('./versions.json', 'r') as f:
+                  data = json.load(f)
+              # Remove the version from the list
+              data = [item for item in data if item.get('version') != new_content['version']]
+              # If you want to add it back to the end, uncomment the next line
+              # data.append(new_content)
+              with open('./versions.json', 'w') as outfile:
+                  json.dump(data, outfile, indent=2)
+          except Exception as e:
+              print(f"An error occurred: {e}")
+              exit(1)
+        shell: python
       # not public this branch; but push to web service
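The rewritten step's filter logic can be exercised outside CI. A minimal sketch, assuming a hypothetical `versions.json` payload (the sample entries below are illustrative, not taken from the repository):

```python
import json

# Hypothetical versions.json contents; the real file lives on the gh-pages branch.
versions = [
    {"version": "3.4.0-sc", "title": "3.4.0-sc", "aliases": []},
    {"version": "3.5.0-sc", "title": "3.5.0-sc", "aliases": []},
]
new_content = {"version": "3.5.0-sc", "title": "3.5.0-sc", "aliases": []}

# Same filter as the workflow step: drop every entry for the version being
# rebuilt, whereas the old loop removed only the first match and then broke.
versions = [item for item in versions if item.get("version") != new_content["version"]]

print(json.dumps(versions, indent=2))
```

The list comprehension also sidesteps the old code's pattern of deleting from `data` while iterating over a copy of it, which only worked because the loop broke after the first deletion.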
@@ -196,6 +196,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl
 |`spec.metad.config.zone_list`|-|A list of zone names, separated by commas. For example: zone1,zone2,zone3. <br/>**Zone names CANNOT be modified once set.**|
 |`spec.graphd.config.prioritize_intra_zone_reading`|`false`|Specifies whether to prioritize sending queries to the storage pods in the same Zone.<br/>When set to `true`, the query is sent to the storage pods in the same Zone. If reading fails in that Zone, `stick_to_intra_zone_on_failure` determines whether to read the leader partition replica data from other Zones.|
 |`spec.graphd.config.stick_to_intra_zone_on_failure`|`false`|Specifies whether to stick to intra-zone routing if the requested partitions cannot be found in the same Zone. When set to `true`, if the partition replica cannot be found in that Zone, data is not read from other Zones.|
+|`spec.schedulerName`|`kube-scheduler`|To schedule the restarted Graph and Storage pods to the same Zone, the value must be set to `nebula-scheduler`.|
 |`spec.topologySpreadConstraints`|-|A Kubernetes field that controls the distribution of storage Pods, ensuring that they are evenly spread across Zones. <br/>**To use the Zone feature, you must set the value of `topologySpreadConstraints[0].topologyKey` to `topology.kubernetes.io/zone` and the value of `topologySpreadConstraints[0].whenUnsatisfiable` to `DoNotSchedule`**. Run `kubectl get node --show-labels` to check the key. For more information, see [TopologySpread](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#example-multiple-topologyspreadconstraints).|
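Read together, the parameters above correspond to a cluster spec shaped roughly like the following. This is an incomplete, illustrative sketch assuming the field paths in the table; the values are examples only:

```yaml
spec:
  # Required for the Zone feature: nebula-scheduler places restarted pods.
  schedulerName: nebula-scheduler
  metad:
    config:
      # Example zones; cannot be modified once set.
      zone_list: zone1,zone2,zone3
  graphd:
    config:
      prioritize_intra_zone_reading: "true"
      stick_to_intra_zone_on_failure: "false"
  topologySpreadConstraints:
    - topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
```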

???+ note "Learn more about Zones in NebulaGraph Operator"
@@ -130,6 +130,8 @@
       # Evenly distribute the Pods of the Storage service across Zones.
       --set nebula.topologySpreadConstraints[0].topologyKey=topology.kubernetes.io/zone \
       --set nebula.topologySpreadConstraints[0].whenUnsatisfiable=DoNotSchedule \
+      # Used to schedule restarted Graph or Storage Pods to the specified Zone.
+      --set nebula.schedulerName=nebula-scheduler \
       --namespace="${NEBULA_CLUSTER_NAMESPACE}" \
   ```

@@ -146,7 +148,7 @@
     Use the `--set` argument to set configuration parameters for the cluster. For example, `--set nebula.storaged.replicas=3` will set the number of replicas for the Storage service in the cluster to 3.
 
 
-1. Check the status of the NebulaGraph cluster you created.
+7. Check the status of the NebulaGraph cluster you created.
 
     ```bash
     kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get pod -l "app.kubernetes.io/cluster=${NEBULA_CLUSTER_NAME}"
