Add E2E test for Locality Load Balancing #1277
base: main
Conversation
Signed-off-by: ravjot07 <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: (no approvers yet). The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Signed-off-by: ravjot07 <[email protected]>
Signed-off-by: ravjot07 <[email protected]>
Codecov Report: All modified and coverable lines are covered by tests ✅
See 2 files with indirect coverage changes. Continue to review the full report in Codecov by Sentry.
test/e2e/locality_lb_test.go
Outdated
ports:
- containerPort: 5000
nodeSelector:
  kubernetes.io/hostname: ambient-worker
We only have two nodes in the E2E cluster: kmesh-testing-control-plane and kmesh-testing-worker.
Can we add two worker nodes? Otherwise we may need to set pod tolerations to allow scheduling on the master node.
> We only have two nodes in the E2E cluster: kmesh-testing-control-plane and kmesh-testing-worker.
Yup, I have updated it accordingly. I missed it initially.
Signed-off-by: ravjot07 <[email protected]>
Signed-off-by: ravjot07 <[email protected]>
Signed-off-by: ravjot07 <[email protected]>
Signed-off-by: ravjot07 <[email protected]>
/retest
// applyManifest writes the provided manifest into a temporary file and applies it using kubectl.
func applyManifest(ns, manifest string) error {
	tmpFile, err := os.CreateTemp("", "manifest-*.yaml")
Can you change manifest-*.yaml?
		tmpFile.Close()
		return err
	}
	tmpFile.Close()
Move this before L38; actually we can make both Close and Remove called within a defer function.
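A minimal sketch of that suggestion, assuming the helper keeps shelling out to kubectl (via os/exec here, rather than the PR's shell.Execute, purely to keep the snippet self-contained); names are illustrative:

```go
import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest writes the manifest to a temp file and applies it with kubectl.
// Close and Remove both run from a single deferred function on every return path.
func applyManifest(ns, manifest string) error {
	tmpFile, err := os.CreateTemp("", "manifest-*.yaml")
	if err != nil {
		return err
	}
	defer func() {
		tmpFile.Close()           // returns a harmless error if already closed below
		os.Remove(tmpFile.Name()) // always remove the temp file
	}()

	if _, err := tmpFile.WriteString(manifest); err != nil {
		return err
	}
	if err := tmpFile.Close(); err != nil { // flush before kubectl reads the file
		return err
	}
	out, err := exec.Command("kubectl", "apply", "-n", ns, "-f", tmpFile.Name()).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
	}
	return nil
}
```

The explicit Close before the kubectl run is still needed so the contents are flushed to disk; the deferred Close then becomes a no-op.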
	}
	tmpFile.Close()
	cmd := "kubectl apply -n " + ns + " -f " + tmpFile.Name()
@YaoZengzeng please point out which package to use instead of going through kubectl.
	if err != nil || sleepPod == "" {
		t.Fatalf("Failed to get sleep pod: %v", err)
	}
	nslookup, _ := shell.Execute(true, "kubectl exec -n "+ns+" "+sleepPod+" -- nslookup "+fqdn)
You do not need to nslookup the IP address.
		return false
	}
	t.Logf("Curl output: %s", out)
	if strings.Contains(out, "region.zone1.subzone1") {
This test is not right: here you return after checking only a single region.zone1.subzone1 response.
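One way to tighten the check is to send several requests and require every one of them to be served locally, rather than returning after the first match. This is only a sketch; curlFromSleep is a hypothetical helper standing in for the PR's kubectl exec + curl call, and the attempt count is arbitrary:

```go
import (
	"strings"
	"testing"
)

// verifyLocalRouting fails if any of the sampled responses is not served by
// the local (region.zone1.subzone1) instance.
func verifyLocalRouting(t *testing.T, ns, sleepPod, fqdn string, attempts int) {
	t.Helper()
	for i := 0; i < attempts; i++ {
		// curlFromSleep is a hypothetical helper that runs curl inside the sleep pod.
		out, err := curlFromSleep(ns, sleepPod, fqdn)
		if err != nil {
			t.Fatalf("request %d failed: %v", i, err)
		}
		if !strings.Contains(out, "region.zone1.subzone1") {
			t.Fatalf("request %d was not served by the local instance, got: %s", i, out)
		}
	}
}
```

The same loop, with the expected substring flipped to region.zone1.subzone2, could be reused for the failover check.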
Overview
This PR adds an end-to-end integration test (TestLocalityLoadBalancing) for the Locality Load Balancing feature in Kmesh. The goal is to validate the PreferClose traffic distribution policy by simulating a real-world scenario with services deployed in different zones. The test ensures that:
- Traffic from a local client is served by the closest available service instance (based on node locality).
- When the closest instance becomes unavailable, traffic is gracefully routed to a remote instance.
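For reference, a minimal sketch of what a Service manifest with this policy can look like; this is not the PR's actual manifest, and it assumes a Kubernetes version where spec.trafficDistribution is available (port number and labels are illustrative):

```go
// helloworldService is a minimal Service manifest using PreferClose traffic
// distribution, in the same string-constant form that applyManifest consumes.
const helloworldService = `
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  trafficDistribution: PreferClose
  selector:
    app: helloworld
  ports:
  - name: http
    port: 5000
    targetPort: 5000
`
```

PreferClose asks the data plane to prefer endpoints in the client's own zone whenever any are available, falling back to other zones otherwise.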
Test Flow
1. Namespace Creation: A new namespace sample is created for the test resources.
2. Service Deployment: The helloworld service is deployed with trafficDistribution: PreferClose.
3. Backend Deployments: Two deployments of the helloworld app are launched:
   - Local Instance: on node kmesh-testing-worker, labeled as region.zone1.subzone1.
   - Remote Instance: on node kmesh-testing-control-plane, labeled as region.zone1.subzone2.
   Each instance responds with its own SERVICE_VERSION.
4. Sleep Client Deployment: A curl-based client (sleep) is deployed on kmesh-testing-worker to simulate the client-side traffic origin.
5. Verification:
   - Resolution Check: uses nslookup to resolve the FQDN of the service from within the client pod.
   - Initial Routing: makes curl requests to helloworld.sample.svc.cluster.local, expecting responses from the local instance (region.zone1.subzone1).
   - Failover Simulation: deletes the local deployment and retries curl until a response is returned from the remote instance (region.zone1.subzone2); a rough polling sketch follows this list.
6. Cleanup: The test framework handles namespace and deployment cleanup automatically after test execution.
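For the failover simulation step above, a rough polling sketch reusing the shell.Execute helper already used elsewhere in this test; the deployment name helloworld-local, the /hello path, and the timeouts are assumptions for illustration rather than the PR's actual values:

```go
import (
	"strings"
	"testing"
	"time"
	// shell below refers to the same Execute helper already imported by this test file.
)

// waitForFailover deletes the local backend and polls until a response from the
// remote locality (region.zone1.subzone2) is observed or the retry budget runs out.
func waitForFailover(t *testing.T, ns, sleepPod, fqdn string) {
	t.Helper()
	// "helloworld-local" is an assumed name for the local-instance deployment.
	if _, err := shell.Execute(true, "kubectl delete deployment helloworld-local -n "+ns); err != nil {
		t.Fatalf("failed to delete local deployment: %v", err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := shell.Execute(true, "kubectl exec -n "+ns+" "+sleepPod+" -- curl -s "+fqdn+":5000/hello")
		if strings.Contains(out, "region.zone1.subzone2") {
			return // traffic has failed over to the remote instance
		}
		time.Sleep(5 * time.Second)
	}
	t.Fatal("traffic never failed over to the remote instance")
}
```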
Contributes toward #1146