
Commit

Apply suggestions from code review
Co-authored-by: cn337131 <[email protected]>
tb06904 and cn337131 authored Oct 3, 2023
1 parent 03d84f5 commit b25a047
Showing 4 changed files with 8 additions and 8 deletions.
2 changes: 1 addition & 1 deletion docs/administration-guide/gaffer-config/graph-metadata.md
@@ -1,6 +1,6 @@
# Graph Metadata Configuration

-The graph configuration file is a JSON file that configures few bits of the
+The graph configuration file is a JSON file that configures a few bits of the
Gaffer graph. Primarily it is used to set the name and description along with
any additional hooks to run before an operation chain e.g. to impose limits on
max results etc. For example, a simple graph configuration file may look like:
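
A minimal sketch of such a file, assuming the usual `graphId`, `description` and `hooks` fields (field names and values here are illustrative, not taken from the documented example):

```bash
# Write out a hypothetical minimal graph configuration -- consult the full
# Gaffer documentation for the authoritative format and available hooks.
cat > graphConfig.json <<'EOF'
{
  "graphId": "exampleGraph",
  "description": "A short human readable description of the graph",
  "hooks": []
}
EOF
```
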
@@ -1,6 +1,6 @@
# Gaffer Images

-As demonstrated in the [quickstart](../quickstart.md) its very simple to start
+As demonstrated in the [quickstart](../quickstart.md) it is very simple to start
up a basic in memory gaffer graph using the available Open Container Initiative
(OCI) images.
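
As a rough illustration, a basic in-memory instance can be started with a single `docker run`; the image tag and port mapping shown here are assumptions, so check the published images for the version you need:

```bash
# Pull and run the Gaffer REST image (illustrative tag and port -- adjust to
# the image/version you actually use)
docker pull gchq/gaffer-rest:latest
docker run -p 8080:8080 gchq/gaffer-rest:latest
```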

@@ -2,13 +2,13 @@

After reading the [previous page](./gaffer-images.md) you should have a good
understanding of what images are available for Gaffer and how to configure them
-to you needs. However, before running a deployment backed by Accumulo you will
+to your needs. However, before running a deployment backed by Accumulo you will
need to know a bit of background on Hadoop to understand how the data will scale
and be distributed.

Usually when deploying a container image you simply run the image and everything
is contained locally to the container (hence the name). For larger scale graphs
-this less desireable as we will usually want to be able to scale and load
+this is less desirable as we will usually want to be able to scale and load
balance the storage based on the volume of data; this is where Hadoop comes in.

!!! tip
@@ -86,7 +86,7 @@ Hadoop cluster which we can run multiple times to extend into a multi-node
cluster.

To run a Hadoop cluster we first need the configuration files for Hadoop which
-we can then add into the running containers. As a start point you can use the
+we can then add into the running containers. As a starting point you can use the
files from the
[`gaffer-docker`](https://github.com/gchq/gaffer-docker/tree/develop/docker/hdfs/conf)
repository, but you may wish to edit these for your deployment and can read more
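
As a sketch, the example configuration can be pulled down and inspected before mounting it into the containers (the file names listed are indicative only):

```bash
# Grab the example HDFS configuration shipped in the gaffer-docker repository
git clone https://github.com/gchq/gaffer-docker.git
ls gaffer-docker/docker/hdfs/conf   # e.g. core-site.xml, hdfs-site.xml
```
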
@@ -235,11 +235,11 @@ following nodes/containers are needed:

The final container we need to start up is the REST API, this essentially gives
the front end so we can use containers together in a Gaffer cluster. The REST
-API container is also where the configuration for the graph is applied such as,
+API container is also where the configuration for the graph is applied, such as
the schema files and store properties.

To start up the REST API it is a similar process to the other containers;
-however, there is a few more bind-mounts that need defining to configure the
+however, there are a few more bind-mounts that need defining to configure the
graph (you can also build a custom image with files baked in).

```bash
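# Illustrative sketch only: the image name, ports and mount paths below are
# assumptions rather than the documented command. The key point is that the
# graph config, schema and store properties are bind-mounted into the REST
# API container.
docker run -d --name gaffer-rest -p 8080:8080 \
    -v "$(pwd)/graphConfig.json:/gaffer/graph/graphConfig.json" \
    -v "$(pwd)/schema:/gaffer/schema" \
    -v "$(pwd)/store.properties:/gaffer/store/store.properties" \
    gchq/gaffer-rest:latest
```
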
@@ -1,6 +1,6 @@
# Running Gaffer on Kubernetes

-Gaffers Open Container Initiative (OCI) images mean it is also possible to
+Gaffer's Open Container Initiative (OCI) images mean it is also possible to
deploy via kubernetes to give an alternative scalable deployment. This guide
will assume the reader is familiar with general usage of kubernetes, further
reading is available in the [official documentation](https://kubernetes.io/docs/home/).
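
As a minimal sketch (assuming the Helm charts published in the `gaffer-docker` repository; the chart path and release name are illustrative), a deployment could look like:

```bash
# Install the Gaffer Helm chart from a local checkout of gaffer-docker
git clone https://github.com/gchq/gaffer-docker.git
helm install my-gaffer ./gaffer-docker/kubernetes/gaffer
kubectl get pods   # check that the pods come up
```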
