Add multicluster example and bug fixes (#54)
Fixes #28
aattuluri authored Jan 24, 2020
1 parent 73888f7 commit 7b4e5fd
Showing 18 changed files with 231 additions and 77 deletions.
1 change: 1 addition & 0 deletions Makefile
@@ -112,6 +112,7 @@ gen-yaml:
kustomize build ./install/admiral/overlays/demosinglecluster/ > ./out/yaml/demosinglecluster.yaml
kustomize build ./install/admiralremote/base/ > ./out/yaml/remotecluster.yaml
kustomize build ./install/sample/base/ > ./out/yaml/sample.yaml
+ kustomize build ./install/sample/overlays/remote > ./out/yaml/remotecluster_sample.yaml
cp ./install/sample/sample_dep.yaml ./out/yaml/sample_dep.yaml
cp ./install/scripts/cluster-secret.sh ./out/scripts/cluster-secret.sh
cp ./install/scripts/redirect-dns.sh ./out/scripts/redirect-dns.sh
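
The `gen-yaml` target now also emits the remote-cluster sample used by the multicluster example below. A typical invocation, assuming `make` and `kustomize` are installed and the command is run from the repo root, might be:

```
#Regenerate the install YAMLs, including the new remote-cluster sample
make gen-yaml
ls ./out/yaml/remotecluster_sample.yaml
```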
150 changes: 114 additions & 36 deletions README.md
@@ -14,9 +14,9 @@ Istio has a very robust set of multi-cluster capabilities. Managing this config

### Prerequisite

- One or more k8s clusters.
+ One or more k8s clusters will need the following steps executed.

- **Example setup for a K8s cluster**
+ #### Install the following utilities

`Note`: If running on Windows, a bash shell is required (e.g., Cygwin)

@@ -25,24 +25,24 @@ One or more k8s clusters.
* Install [helm](https://github.com/helm/helm/blob/master/docs/install.md)
* Install [wget](https://www.gnu.org/software/wget/)

- ```
- #Download & extract Istio
+ #### Install Istio
+
+ ```
+ #Download
- wget https://github.com/istio/istio/releases/download/1.3.3/istio-1.3.3-osx.tar.gz
+ wget https://github.com/istio/istio/releases/download/1.4.3/istio-1.4.3-osx.tar.gz
OR
- wget https://github.com/istio/istio/releases/download/1.3.3/istio-1.3.3-linux.tar.gz
+ wget https://github.com/istio/istio/releases/download/1.4.3/istio-1.4.3-linux.tar.gz
OR
- wget https://github.com/istio/istio/releases/download/1.3.3/istio-1.3.3-win.tar.gz
+ wget https://github.com/istio/istio/releases/download/1.4.3/istio-1.4.3-win.tar.gz
#Extract
- tar -xf istio-1.3.3-osx.tar.gz
+ tar -xf istio-1.4.3-osx.tar.gz
OR
- tar -xf istio-1.3.3-linux.tar.gz
+ tar -xf istio-1.4.3-linux.tar.gz
OR
- tar -xf istio-1.3.3-win.tar.gz
+ tar -xf istio-1.4.3-win.tar.gz
```

```
@@ -54,15 +54,15 @@ kubectl create ns istio-system
#Create k8s secret to be used by Citadel for mTLS cert generation
kubectl create secret generic cacerts -n istio-system \
- --from-file=istio-1.3.3/samples/certs/ca-cert.pem \
- --from-file=istio-1.3.3/samples/certs/ca-key.pem \
- --from-file=istio-1.3.3/samples/certs/root-cert.pem \
- --from-file=istio-1.3.3/samples/certs/cert-chain.pem
+ --from-file=istio-1.4.3/samples/certs/ca-cert.pem \
+ --from-file=istio-1.4.3/samples/certs/ca-key.pem \
+ --from-file=istio-1.4.3/samples/certs/root-cert.pem \
+ --from-file=istio-1.4.3/samples/certs/cert-chain.pem
```
```
#Generate, install and verify Istio CRDs
- helm template istio-1.3.3/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
+ helm template istio-1.4.3/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
#Make sure Istio CRDs are installed
```
@@ -71,35 +71,54 @@ kubectl get crds | grep 'istio.io' | wc -l
```
#Generate & Install Istio
- helm template istio-1.3.3/install/kubernetes/helm/istio --name istio --namespace istio-system \
-   -f istio-1.3.3/install/kubernetes/helm/istio/example-values/values-istio-multicluster-gateways.yaml | kubectl apply -f -
+ helm template istio-1.4.3/install/kubernetes/helm/istio --name istio --namespace istio-system \
+   -f istio-1.4.3/install/kubernetes/helm/istio/example-values/values-istio-multicluster-gateways.yaml | kubectl apply -f -
#Verify that istio pods are up
kubectl get pods -n istio-system
```

#### DNS setup
In a k8s cluster, you will have a DNS component that resolves names. Admiral generates names ending in `global` (e.g. `stage.greeting.global`), which can be resolved by `istiocoredns` (installed as part of Istio), since it watches the Istio ServiceEntries that Admiral creates with those names.
So you have to point DNS resolution for names ending in `global` to the `ClusterIp` of the `istiocoredns` service. The step below points coredns in a k8s cluster to istiocoredns; if you are using kube-dns, you can tweak this script.

```Note: The below script wipes out the existing coredns config map; please manually edit it if you want to try this in a cluster with real services/traffic```

```
#Run the below script for having coredns point to istiocoredns for dns lookups of names ending in global
./admiral-install-v0.1-beta/scripts/redirect-dns.sh
```
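
For reference, the script rewrites the `coredns` ConfigMap so that lookups for the `global` zone are forwarded to istiocoredns. A minimal sketch of the resulting ConfigMap, assuming a hypothetical istiocoredns `ClusterIp` of `10.96.0.100` (look yours up with `kubectl get svc istiocoredns -n istio-system`):

```
#Illustrative coredns ConfigMap after redirect-dns.sh
#The istiocoredns ClusterIp (10.96.0.100) is an assumption - yours will differ
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
    global:53 {
        errors
        cache 30
        forward . 10.96.0.100
    }
```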

#### Remove envoy cluster rewrite filter
Delete Istio's envoy filter that translates `global` to `svc.cluster.local` at the istio-ingressgateway; it is not needed, since the ServiceEntries generated by Admiral make cross-cluster communication work on their own.
```
# Delete envoy filter for translating `global` to `svc.cluster.local`
kubectl delete envoyfilter istio-multicluster-ingressgateway -n istio-system
```
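
Optionally, verify the filter is gone; `istio-multicluster-ingressgateway` should no longer appear in the output:

```
kubectl get envoyfilter -n istio-system
```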

`Reference:` [K8s cluster installed with Istio replicated control planes](https://istio.io/docs/setup/install/multicluster/gateways/#deploy-the-istio-control-plane-in-each-cluster)


- ## Examples
+ ## Example Installations & Demos

### Single cluster

- #### Setup Admiral
+ #### Install/Run Admiral

```
#Download and extract admiral
- wget https://github.com/istio-ecosystem/admiral/releases/download/v0.1-alpha/admiral-install-v0.1-alpha.tar.gz
- tar xvf admiral-install-v0.1-alpha.tar.gz
+ wget https://github.com/istio-ecosystem/admiral/releases/download/v0.1-beta/admiral-install-v0.1-beta.tar.gz
+ tar xvf admiral-install-v0.1-beta.tar.gz
```

```
#Install admiral
- kubectl apply -f ./admiral-install-v0.1-alpha/yaml/remotecluster.yaml
- kubectl apply -f ./admiral-install-v0.1-alpha/yaml/demosinglecluster.yaml
+ kubectl apply -f ./admiral-install-v0.1-beta/yaml/remotecluster.yaml
+ kubectl apply -f ./admiral-install-v0.1-beta/yaml/demosinglecluster.yaml
#Verify admiral is running
@@ -110,42 +110,55 @@ kubectl get pods -n admiral
#Create the secret for admiral to monitor.
#Since this is for a single cluster demo the remote and local context are the same
- ./admiral-install-v0.1-alpha/scripts/cluster-secret.sh $KUBECONFIG $KUBECONFIG admiral
+ ./admiral-install-v0.1-beta/scripts/cluster-secret.sh $KUBECONFIG $KUBECONFIG admiral
```
```
#Verify the secret
kubectl get secrets -n admiral
```
- ```
- #Point hosts ending in global to be resolved by istio coredns
-
- ./admiral-install-v0.1-alpha/scripts/redirect-dns.sh
- ```
- #### Setup Sample Apps
+ #### Deploy Sample Services

```
#Install test services
- kubectl apply -f ./admiral-install-v0.1-alpha/yaml/sample.yaml
+ kubectl apply -f ./admiral-install-v0.1-beta/yaml/sample.yaml
```
```
- #Install the dependency CR
+ #Install the dependency CR (this is optional)
- kubectl apply -f ./admiral-install-v0.1-alpha/yaml/sample_dep.yaml
+ kubectl apply -f ./admiral-install-v0.1-beta/yaml/sample_dep.yaml
#Verify that admiral created service names for 'greeting' service
kubectl get serviceentry -n admiral-sync
```

- #### Test
+ #### Demo

Now, run the command below, which uses the CNAME generated by Admiral:
```
kubectl exec --namespace=sample -it $(kubectl get pod -l "app=webapp" --namespace=sample -o jsonpath='{.items[0].metadata.name}') -c webapp -- curl -v http://default.greeting.global
```


#### Generated configuration

Admiral-generated Istio configuration:
@@ -195,9 +210,72 @@ spec:
```
number: 80
protocol: http
resolution: DNS
```
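
The diff above is truncated; for illustration, a complete ServiceEntry of the kind Admiral generates for `default.greeting.global` might look like the sketch below (the address, endpoint, and locality values are assumptions and will differ in your cluster):

```
#Illustrative ServiceEntry; address, endpoint and locality values are assumed
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: default.greeting.global-se
  namespace: admiral-sync
spec:
  addresses:
  - 240.0.10.1
  endpoints:
  - address: greeting.sample.svc.cluster.local
    locality: us-west-2
    ports:
      http: 80
  hosts:
  - default.greeting.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  resolution: DNS
```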


### Multicluster

Finish the steps from the Single cluster section above to have Admiral running and ready to watch other clusters (let's call them remote clusters), which we will set up in the steps below.

Let's call the cluster used in the Single cluster setup `Cluster 1`. Now we will use the steps below to add `Cluster 2` to the mesh and have it monitored by Admiral.

Finish the steps from the `Prerequisite` section above for `Cluster 2`.

#### Add Cluster 2 to Admiral's watcher
```
# Set CLUSTER_1 env variable
export CLUSTER_1=<path_to_kubeconfig_for_cluster_1>
# Set CLUSTER_2 env variable
export CLUSTER_2=<path_to_kubeconfig_for_cluster_2>
```

```
# Switch kubectx to Cluster 2
export KUBECONFIG=$CLUSTER_2
# Create admiral role and bindings on Cluster 2
kubectl apply -f ./admiral-install-v0.1-beta/yaml/remotecluster.yaml
```

```
#Switch kubectx to Cluster 1
export KUBECONFIG=$CLUSTER_1
# Create the k8s secret for admiral to monitor Cluster 2.
./admiral-install-v0.1-beta/scripts/cluster-secret.sh $CLUSTER_1 $CLUSTER_2 admiral
```
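
As in the single-cluster setup, you can confirm that the secret for `Cluster 2` was created:

```
kubectl get secrets -n admiral
```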

At this point, Admiral is watching `Cluster 2`.

#### Deploy Sample Services in Cluster 2
```
#Switch kubectx to Cluster 2
export KUBECONFIG=$CLUSTER_2
#Install test services in Cluster 2
kubectl apply -f ./admiral-install-v0.1-beta/yaml/remotecluster_sample.yaml
```

#### Verify

```
#Switch kubectx to Cluster 1
export KUBECONFIG=$CLUSTER_1
# Verify that the ServiceEntry for the greeting service in Cluster 1 now has a second endpoint (Cluster 2's istio-ingressgateway address)
kubectl get serviceentry default.greeting.global-se -n admiral-sync -o yaml
```
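
With both clusters registered, the `endpoints` section of that ServiceEntry should resemble the sketch below (the addresses are assumptions; the second entry is typically the address of Cluster 2's istio-ingressgateway, reached on Istio's cross-cluster port 15443):

```
#Illustrative endpoints after Cluster 2 joins; addresses are assumed
endpoints:
- address: greeting.sample.svc.cluster.local
  locality: us-west-2
  ports:
    http: 80
- address: ab12cd34ef56.elb.us-east-2.amazonaws.com
  locality: us-east-2
  ports:
    http: 15443
```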

#### Demo

Now run the request below multiple times and watch the requests being load balanced between the local (Cluster 1) and remote (Cluster 2) instances of the greeting service. (The response payload changes depending on which instance of greeting served the request.)

```
kubectl exec --namespace=sample -it $(kubectl get pod -l "app=webapp" --namespace=sample -o jsonpath='{.items[0].metadata.name}') -c webapp -- curl -v http://default.greeting.global
```


## Admiral Architecture

![alt text](https://user-images.githubusercontent.com/35096265/65183155-b8244b00-da17-11e9-9f2d-cce5a96fe2e8.png "Admiral Architecture")
17 changes: 10 additions & 7 deletions admiral/pkg/clusters/registry.go
@@ -79,7 +79,7 @@ func handleDependencyRecord(identifier string, sourceIdentity string, admiralCac
}
//TODO pass deployment

- tmpSe := createServiceEntry(identifier, rc, config, admiralCache, deployment[0], serviceEntries)
+ tmpSe := createServiceEntry(rc, config, admiralCache, deployment[0], serviceEntries)

if tmpSe == nil {
continue
@@ -135,15 +135,18 @@ func getDestinationRule(host string) *networking.DestinationRule {
TrafficPolicy: &networking.TrafficPolicy{Tls: &networking.TLSSettings{Mode: networking.TLSSettings_ISTIO_MUTUAL}}}
}

- func getServiceForDeployment(rc *RemoteController, deployment *k8sAppsV1.Deployment, namespace string) *k8sV1.Service {
+ func getServiceForDeployment(rc *RemoteController, deployment *k8sAppsV1.Deployment) *k8sV1.Service {

- 	cachedService := rc.ServiceController.Cache.Get(namespace)
+ 	if deployment == nil {
+ 		return nil
+ 	}
+ 	cachedService := rc.ServiceController.Cache.Get(deployment.Namespace)

if cachedService == nil {
return nil
}
var matchedService *k8sV1.Service
- 	for _, service := range cachedService.Service[namespace] {
+ 	for _, service := range cachedService.Service[deployment.Namespace] {
var match = true
for lkey, lvalue := range service.Spec.Selector {
value, ok := deployment.Spec.Selector.MatchLabels[lkey]
@@ -282,14 +285,14 @@ func (r *RemoteRegistry) createCacheController(clientConfig *rest.Config, cluste
}

log.Infof("starting deployment controller clusterID: %v", clusterID)
- rc.DeploymentController, err = admiral.NewDeploymentController(stop, &DeploymentHandler{RemoteRegistry: r}, clientConfig, resyncPeriod)
+ rc.DeploymentController, err = admiral.NewDeploymentController(stop, &DeploymentHandler{RemoteRegistry: r}, clientConfig, resyncPeriod, r.config.LabelSet)

if err != nil {
return fmt.Errorf(" Error with DeploymentController controller init: %v", err)
}

log.Infof("starting pod controller clusterID: %v", clusterID)
- rc.PodController, err = admiral.NewPodController(stop, &PodHandler{RemoteRegistry: r}, clientConfig, resyncPeriod)
+ rc.PodController, err = admiral.NewPodController(stop, &PodHandler{RemoteRegistry: r}, clientConfig, resyncPeriod, r.config.LabelSet)

if err != nil {
return fmt.Errorf(" Error with PodController controller init: %v", err)
@@ -531,7 +534,7 @@ func createDestinationRuleForLocal(remoteController *RemoteController, localDrNa
break
}

- serviceInstance := getServiceForDeployment(remoteController, deploymentInstance, deploymentInstance.Namespace)
+ serviceInstance := getServiceForDeployment(remoteController, deploymentInstance)

cname := common.GetCname(deploymentInstance, identifier, nameSuffix)
if cname == destinationRule.Host {
14 changes: 5 additions & 9 deletions admiral/pkg/clusters/registry_test.go
@@ -6,6 +6,7 @@ import (
depModel "github.com/istio-ecosystem/admiral/admiral/pkg/apis/admiral/model"
"github.com/istio-ecosystem/admiral/admiral/pkg/apis/admiral/v1"
"github.com/istio-ecosystem/admiral/admiral/pkg/controller/admiral"
"github.com/istio-ecosystem/admiral/admiral/pkg/controller/common"
"github.com/istio-ecosystem/admiral/admiral/pkg/controller/istio"
"github.com/istio-ecosystem/admiral/admiral/pkg/test"
networking "istio.io/api/networking/v1alpha3"
@@ -120,7 +121,7 @@ func TestCreateDestinationRuleForLocalNoDeployLabel(t *testing.T) {
Host: "localhost",
}

- d, e := admiral.NewDeploymentController(make(chan struct{}), &test.MockDeploymentHandler{}, &config, time.Second*time.Duration(300))
+ d, e := admiral.NewDeploymentController(make(chan struct{}), &test.MockDeploymentHandler{}, &config, time.Second*time.Duration(300), &common.LabelSet{})

if e != nil {
t.Fail()
@@ -177,7 +178,7 @@ func createMockRemoteController(f func(interface{})) (*RemoteController, error)
Host: "localhost",
}
stop := make(chan struct{})
- d, e := admiral.NewDeploymentController(stop, &test.MockDeploymentHandler{}, &config, time.Second*time.Duration(300))
+ d, e := admiral.NewDeploymentController(stop, &test.MockDeploymentHandler{}, &config, time.Second*time.Duration(300), &common.LabelSet{})
s, e := admiral.NewServiceController(stop, &test.MockServiceHandler{}, &config, time.Second*time.Duration(300))
n, e := admiral.NewNodeController(stop, &test.MockNodeHandler{}, &config)

@@ -462,44 +463,39 @@ func TestGetServiceForDeployment(t *testing.T) {
deploymentWithSelector := k8sAppsV1.Deployment{}
deploymentWithSelector.Name = "dep2"
deploymentWithSelector.Namespace = "under-test"
- deploymentWithSelector.Spec.Selector = &metav1.LabelSelector{}
- deploymentWithSelector.Spec.Selector.MatchLabels = map[string]string{"under-test":"true"}
+ deploymentWithSelector.Spec.Selector = &metav1.LabelSelector{MatchLabels: map[string]string{"under-test":"true"}}

//Struct of test case info. Name is required.
testCases := []struct {
name string
controller *RemoteController
deployment *k8sAppsV1.Deployment
- namespace string
expectedService *k8sCoreV1.Service
}{
{
name: "Should return nil with nothing in the cache",
controller:baseRc,
deployment:nil,
- namespace:"foobar",
expectedService:nil,
},
{
name: "Should not match if selectors don't match",
controller:rcWithService,
deployment:&deploymentWithNoSelector,
- namespace:"under-test",
expectedService:nil,
},
{
name: "Should return proper service",
controller:rcWithService,
deployment:&deploymentWithSelector,
- namespace:"under-test",
expectedService:&service,
},
}

//Run the test for every provided case
for _, c := range testCases {
t.Run(c.name, func(t *testing.T) {
- resultingService := getServiceForDeployment(c.controller, c.deployment, c.namespace)
+ resultingService := getServiceForDeployment(c.controller, c.deployment)
if resultingService == nil && c.expectedService == nil {
//perfect
} else {
