- You will need a live and functioning Route53 DNS zone in the AWS account you will be installing the new cluster(s) into. For example, if you own example.com, you could create a hive.example.com subdomain in Route53, and ensure that you have made the appropriate NS entries under example.com to delegate to the Route53 zone. When creating a new cluster, the installer will make future DNS entries under hive.example.com as needed for the cluster(s). Note that there is an additional mode of DNS management where Hive can automatically create delegated zones for approved base domains: if hive.example.com already exists, you can specify a base domain of cluster1.hive.example.com on your ClusterDeployment, and Hive will create this zone for you, wait for it to resolve, and then proceed with installation. See below for additional info.
- Determine what OpenShift release image you wish to install.
- Create a Kubernetes secret containing a docker registry pull secret (typically obtained from try.openshift.com).
oc create secret generic mycluster-pull-secret --from-file=.dockerconfigjson=/path/to/pull-secret --type=kubernetes.io/dockerconfigjson
- Create a Kubernetes secret containing an SSH key pair (typically generated with ssh-keygen):
oc create secret generic mycluster-ssh-key --from-file=ssh-privatekey=/path/to/private/key --from-file=ssh-publickey=/path/to/public/key
NOTE: This step is optional. It will be done automatically if using hiveutil create-cluster with the --ssh-public-key-file and --ssh-private-key-file arguments.
- Create a Kubernetes secret containing your AWS credentials:
oc create secret generic mycluster-aws-creds --from-literal=aws_secret_access_key=$AWS_SECRET_ACCESS_KEY --from-literal=aws_access_key_id=$AWS_ACCESS_KEY_ID
NOTE: This will be done automatically if using hiveutil create-cluster.
- Create a PersistentVolume for your ClusterDeployment to store the installation logs (an example sketch follows this list). The accessModes must be ReadWriteOnce for your PersistentVolume. Note that if you do not want to capture must-gather logs, you can set .spec.failedProvisionConfig.skipGatherLogs to true in the HiveConfig.
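A minimal PersistentVolume sketch for the install logs is shown below. The name, capacity, and hostPath backend are illustrative assumptions only; use whatever ReadWriteOnce-capable storage your Hive cluster provides.
# Illustrative only: a minimal PersistentVolume for Hive install logs.
# The name, 1Gi capacity, and hostPath backend are assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hive-install-logs
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /var/lib/hive/install-logs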
GlobalPullSecret can be used to specify a pull secret that will be used globally by all of the cluster deployments created by Hive. When GlobalPullSecret is defined in Hive namespace and a cluster deployment specific pull secret is specified, the registry authentication in both secrets will be merged and used by the new OpenShift cluster. When a registry exists in both pull secrets, precedence will be given to the contents of the cluster specific pull secret.
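For illustration (the registry names below are made up): if the global pull secret authenticates private-registry.example.com and the cluster-specific pull secret authenticates quay.io plus a different credential for private-registry.example.com, the merged registry auth delivered to the new cluster would look roughly like:
{
  "auths": {
    "quay.io": { "auth": "<from the cluster-specific pull secret>" },
    "private-registry.example.com": { "auth": "<from the cluster-specific pull secret, overriding the global entry>" }
  }
}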
The global pull secret must live in the "hive" namespace. To create one:
oc create secret generic global-pull-secret --from-file=.dockerconfigjson=<path>/config.json --type=kubernetes.io/dockerconfigjson --namespace hive
Edit the HiveConfig to add the global pull secret:
oc edit hiveconfig hive
The global pull secret name must be configured in the HiveConfig CR.
spec:
globalPullSecret:
name: global-pull-secret
Hive supports two methods of specifying what version of OpenShift you wish to install. Most commonly, you can create a ClusterImageSet that references an OpenShift 4 release image.
An example ClusterImageSet:
apiVersion: hive.openshift.io/v1alpha1
kind: ClusterImageSet
metadata:
name: openshift-v4.2.0
spec:
releaseImage: quay.io/openshift-release-dev/ocp-release:4.2.0
Alternatively you can specify release image overrides directly on your ClusterDeployment when you create one.
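A rough sketch of such an override follows; the releaseImage field under spec.images is assumed from the v1alpha1 API, so verify the field names against your Hive version.
# Sketch only (v1alpha1 field names assumed): pin a release image directly
# on the ClusterDeployment instead of referencing a ClusterImageSet.
spec:
  images:
    releaseImage: quay.io/openshift-release-dev/ocp-release:4.2.0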
Cluster provisioning begins when a caller creates a ClusterDeployment CR, which is the core Hive resource to control the lifecycle of a cluster.
An example ClusterDeployment:
apiVersion: hive.openshift.io/v1alpha1
kind: ClusterDeployment
metadata:
name: mycluster
spec:
baseDomain: hive.example.com
clusterName: mycluster
compute:
- name: worker
platform:
aws:
rootVolume:
iops: 100
size: 22
type: gp2
type: m4.large
replicas: 3
controlPlane:
name: master
platform:
aws:
rootVolume:
iops: 100
size: 22
type: gp2
type: m4.large
replicas: 3
imageSet:
name: openshift-v4.2.0
images:
installerImagePullPolicy: Always
networking:
clusterNetworks:
- cidr: 10.128.0.0/14
hostSubnetLength: 23
machineCIDR: 10.0.0.0/16
serviceCIDR: 172.30.0.0/16
type: OpenShiftSDN
platform:
aws:
region: us-east-1
platformSecrets:
aws:
credentials:
name: mycluster-aws-creds
pullSecret:
name: mycluster-pull-secret
sshKey:
name: mycluster-ssh-key
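If you author the ClusterDeployment (and its referenced secrets) by hand rather than with hiveutil, save it to a file and create it with oc; the file name below is only an example.
oc apply -f mycluster-clusterdeployment.yaml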
The hiveutil CLI (see make hiveutil) offers a create-cluster command for generating a cluster deployment and submitting it to the Hive cluster using your current kubeconfig.
To view what create-cluster generates without submitting it to the API server, add -o yaml to the create-cluster command. If you need to make any changes not supported by create-cluster options, the output can be saved, edited, and then submitted with oc apply.
By default this command assumes the latest Hive master CI build and the latest OpenShift stable release. --release-image can be specified to control which OpenShift release image to install in the cluster.
Credentials will be read from your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. Alternatively you can specify an AWS credentials file with --creds-file.
bin/hiveutil create-cluster --base-domain=mydomain.example.com --cloud=aws mycluster
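For example, to review or tweak the generated resources before creating them (the output file name here is arbitrary):
# Render the resources without creating them, edit as needed, then apply.
bin/hiveutil create-cluster --base-domain=mydomain.example.com --cloud=aws mycluster -o yaml > mycluster.yaml
oc apply -f mycluster.yaml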
Credentials will be read from ~/.azure/osServicePrincipal.json, typically created via the az login command.
bin/hiveutil create-cluster --base-domain=mydomain.example.com --cloud=azure --azure-base-domain-resource-group-name=myresourcegroup --release-image=registry.svc.ci.openshift.org/origin/release:4.2 mycluster
--release-image is used above as Azure installer support is only present in 4.2 dev preview builds.
Credentials will be read from ~/.gcp/osServiceAccount.json, which can be created as follows:
- Log in to the GCP console at https://console.cloud.google.com/
- Create a service account with the owner role.
- Create a key for the service account.
- Select JSON for the key type.
- Download the resulting JSON file and save it to ~/.gcp/osServiceAccount.json.
bin/hiveutil create-cluster --base-domain=mydomain.example.com --cloud=gcp --gcp-project-id=myproject --release-image=registry.svc.ci.openshift.org/origin/release:4.2 mycluster
--release-image is used above as GCP installer support is only present in 4.2 dev preview builds.
- Get the namespace in which your cluster deployment was created
- Get the install pod name
oc get pods -o json --selector job-name==${CLUSTER_NAME}-install | jq -r '.items | .[].metadata.name'
- Run the following command to watch the cluster deployment:
oc logs -f <install-pod-name> -c hive
Alternatively, you can watch the summarized output of the installer using
oc exec -c hive <install-pod-name> -- tail -f /tmp/openshift-install-console.log
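Putting the steps above together, a small shell sketch (the cluster name and namespace values are placeholder assumptions):
# Placeholder values; substitute your own cluster name and namespace.
CLUSTER_NAME=mycluster
NAMESPACE=mycluster-namespace
INSTALL_POD=$(oc get pods -n ${NAMESPACE} -o json --selector job-name==${CLUSTER_NAME}-install | jq -r '.items | .[].metadata.name')
oc logs -n ${NAMESPACE} -f ${INSTALL_POD} -c hive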
In the event of installation failures, please see Troubleshooting.
Once the cluster is provisioned, you will see a CLUSTER_NAME-admin-kubeconfig secret. You can use this with:
oc get secret `oc get cd ${CLUSTER_NAME} -o jsonpath='{ .status.adminKubeconfigSecret.name }'` -o jsonpath='{ .data.kubeconfig }' | base64 --decode > ${CLUSTER_NAME}.kubeconfig
export KUBECONFIG=${CLUSTER_NAME}.kubeconfig
oc get nodes
- Get the webconsole URL:
oc get cd ${CLUSTER_NAME} -o jsonpath='{ .status.webConsoleURL }'
- Retrieve the password for the kubeadmin user:
oc get secret `oc get cd ${CLUSTER_NAME} -o jsonpath='{ .status.adminPasswordSecret.name }'` -o jsonpath='{ .data.password }' | base64 --decode
Hive can optionally create delegated Route53 DNS zones for each cluster.
NOTE: This feature is not yet available for GCP and Azure clusters.
To use this feature:
- Manually create a Route53 hosted zone for your "root" domain (i.e. hive.example.com in the example below) and ensure your DNS is operational.
- Create a secret in the "hive" namespace containing AWS credentials with permissions to manage the root hosted zone.
apiVersion: v1
data:
  aws_access_key_id: REDACTED
  aws_secret_access_key: REDACTED
kind: Secret
metadata:
  name: route53-aws-creds
type: Opaque
- Update your HiveConfig to enable externalDNS and set the list of managed domains:
apiVersion: hive.openshift.io/v1alpha1
kind: HiveConfig
metadata:
  name: hive
spec:
  managedDomains:
  - hive.example.com
  externalDNS:
    aws:
      credentials:
        name: route53-aws-creds
You can now create clusters with manageDNS enabled and a base domain of mydomain.hive.example.com.
bin/hiveutil create-cluster --base-domain=mydomain.hive.example.com mycluster --manage-dns
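On the ClusterDeployment itself this corresponds to a DNS-management flag alongside the delegated base domain; a sketch is below (manageDNS as a top-level spec boolean is assumed from the v1alpha1 API, so verify against your Hive version).
# Sketch only: the ClusterDeployment fields relevant to Hive-managed DNS.
spec:
  baseDomain: mydomain.hive.example.com
  manageDNS: true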
Hive will then:
- Create a mydomain.hive.example.com Route53 hosted zone.
- Create NS records in the hive.example.com zone to forward DNS to the new mydomain.hive.example.com hosted zone.
- Wait for the SOA record for the new domain to be resolvable, indicating that DNS is functioning.
- Launch the install, which will create DNS entries for the new cluster ("*.apps.mycluster.mydomain.hive.example.com", "api.mycluster.mydomain.hive.example.com", etc) in the new mydomain.hive.example.com hosted zone.
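You can also confirm the delegation manually with standard DNS tooling, for example (assuming dig is available):
# Check that NS and SOA records for the delegated zone resolve.
dig +short NS mydomain.hive.example.com
dig +short SOA mydomain.hive.example.com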
Hive offers two CRDs for applying configuration in a cluster once it is installed: SyncSet for config destined for specific clusters in a specific namespace, and SelectorSyncSet for config destined for any cluster matching a label selector.
For more information please see the SyncSet documentation.
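As a rough illustration only (field names assumed from the v1alpha1 API; the SyncSet documentation is authoritative), a SyncSet that lands a ConfigMap in one target cluster might look like:
# Sketch only: creates a ConfigMap in the cluster provisioned from the
# "mycluster" ClusterDeployment. Field names assumed from the v1alpha1 API.
apiVersion: hive.openshift.io/v1alpha1
kind: SyncSet
metadata:
  name: mycluster-example-config
spec:
  clusterDeploymentRefs:
  - name: mycluster
  resources:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-config
      namespace: openshift-config
    data:
      example.key: example-value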
Hive offers explicit API support for configuring identity providers in the OpenShift clusters it provisions. This is technically powered by the SyncSet mechanism above, but is exposed directly in the API so that per-cluster identity providers can be merged with global identity providers, all of which must land in the same object in the cluster.
For more information please see the SyncIdentityProvider documentation.
oc delete clusterdeployment ${CLUSTER_NAME} --wait=false
Deleting a ClusterDeployment will create a ClusterDeprovisionRequest resource, which in turn will launch a pod to attempt to delete all cloud resources created for and by the cluster. This is done by scanning the cloud provider for resources tagged with the cluster's generated InfraID (e.g. kubernetes.io/cluster/mycluster-fcp4z=owned). Once all resources have been deleted the pod will terminate, finalizers will be removed, and the ClusterDeployment and dependent objects will be removed. The deprovision process is powered by vendoring the same code from the OpenShift installer used for openshift-install cluster destroy.
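To follow the teardown, you can tail the deprovision pod's logs in the ClusterDeployment's namespace; the pod name is generated by Hive, so look it up first:
# Find the deprovision pod, then stream its logs.
oc get pods
oc logs -f <deprovision-pod-name>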
The ClusterDeprovisionRequest resource can also be used to manually run a deprovision pod for clusters that no longer have a ClusterDeployment (e.g. when clusterDeployment.spec.preserveOnDelete=true was used).