
Releases: CentaurusInfra/fornax

V0.3 Release

01 Feb 00:27
f00eddc

Edge-edge communication

The main focus of this release is implementing the key components for edge-edge communication, based on the POC from release v0.2. Edge-edge communication allows edge computing workloads (e.g. pods) in different edge clusters to communicate through virtual addressing. This capability is the foundation for future Fornax features such as edge computing storage and a serverless platform. The VPC and subnet implementations from the Mizar project have been extended to work across physical clusters. There are three major components in this release:

  • Control plane
    • Allow creating a VPC with a specified VNI
    • Configure the gateway host through a ConfigMap
    • Configure remote ("virtual") subnets and select the gateway host as their virtual bouncers
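As an illustration of the ConfigMap-driven gateway configuration above, here is a minimal Go sketch using client-go. The ConfigMap name, namespace, and key are assumptions for illustration, not the actual Fornax names.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// gatewayHost reads the configured edge gateway host from a ConfigMap.
// "mizar-gateway-config" and the "gateway.host" key are hypothetical;
// the release configures the gateway host through a ConfigMap, but the
// exact names are not part of these notes.
func gatewayHost(clientset *kubernetes.Clientset) (string, error) {
	cm, err := clientset.CoreV1().ConfigMaps("default").Get(
		context.TODO(), "mizar-gateway-config", metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	host, ok := cm.Data["gateway.host"]
	if !ok {
		return "", fmt.Errorf("gateway host not set in ConfigMap")
	}
	return host, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	host, err := gatewayHost(kubernetes.NewForConfigOrDie(cfg))
	if err != nil {
		panic(err)
	}
	fmt.Println("selected gateway host:", host)
}
```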

  • Data plane (transit XDP)
    • Propagate virtual cluster info to eBPF maps for the transit XDP
    • Modify the transit XDP on the edge gateway to match received packets against the target subnet and divert traffic to user space if it belongs to a remote ("virtual") subnet
    • Disable the direct path in the transit XDP if the packet comes from the edge gateway host
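To make the match-and-divert rule concrete, the decision can be mirrored in plain Go as below; the real data path is an eBPF/XDP program, and the sample subnet is hypothetical.

```go
package main

import (
	"fmt"
	"net"
)

// remoteSubnets stands in for the eBPF map that the control plane
// populates with remote ("virtual") subnet CIDRs.
var remoteSubnets = []*net.IPNet{
	mustCIDR("10.20.0.0/16"), // hypothetical subnet in another edge cluster
}

func mustCIDR(s string) *net.IPNet {
	_, n, err := net.ParseCIDR(s)
	if err != nil {
		panic(err)
	}
	return n
}

// divertToUserSpace reports whether a packet destined for dst should be
// punted to the user-space gateway program instead of taking the normal
// transit path.
func divertToUserSpace(dst net.IP) bool {
	for _, subnet := range remoteSubnets {
		if subnet.Contains(dst) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(divertToUserSpace(net.ParseIP("10.20.1.5"))) // true: remote subnet
	fmt.Println(divertToUserSpace(net.ParseIP("10.30.1.5"))) // false: handled locally
}
```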

  • Gateway user space program
    • Listen to traffic in user space on the gateway host
    • Convert received Geneve packets from kernel space into inter-gateway packets
    • Convert received inter-gateway packets into Geneve packets
    • Communicate inter-gateway packets with other edge gateways
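A minimal sketch of that user-space relay loop, assuming the standard Geneve UDP port (6081); the inter-gateway port, peer address, and packet layout are illustrative assumptions, not the actual Fornax protocol.

```go
package main

import (
	"log"
	"net"
)

const geneveHeaderLen = 8 // fixed Geneve base header (without options)

func main() {
	// Receive Geneve traffic diverted by the transit XDP program.
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 6081})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Hypothetical peer edge gateway and inter-gateway port.
	peer := &net.UDPAddr{IP: net.ParseIP("203.0.113.10"), Port: 7081}

	buf := make([]byte, 65535)
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Println("read:", err)
			continue
		}
		if n < geneveHeaderLen {
			continue // too short to be a Geneve packet
		}
		// Convert Geneve -> inter-gateway: this sketch just forwards the
		// inner payload; a real implementation would carry the VNI and
		// subnet metadata in its own inter-gateway header.
		inner := buf[geneveHeaderLen:n]
		if _, err := conn.WriteToUDP(inner, peer); err != nil {
			log.Println("forward:", err)
		}
	}
}
```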

In addition to the above three components, a design draft has been created for next-step features such as syncing edge gateway metadata. This will play a major role in allowing distributed edge gateways to autonomously and efficiently adapt to network changes and handle inter-cluster network traffic.

Documentation and Automation

The following documentation has been added for team knowledge (onboarding and reference):

  • K8s build doc with Bash scripts
  • Remote SSH Debugging Setup in Visual Studio Code

V0.2 Release

04 Oct 23:38
7e5c65e

The main focus of this release is an enhancement to the Mission CRD to improve edge application deployment, and POC (proof-of-concept) prototyping of the key components of edge networking. Specifically:

Enhancement to the Mission CRD

The Mission CRD has been used to propagate workloads such as pods and deployments to selected edge clusters. In this release, the Mission CRD was extended to support commands. This feature is especially needed to configure applications on edge clusters remotely, e.g. from a higher-level cluster in the edge cluster hierarchy.
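As a rough illustration (not the actual Fornax API), a Mission carrying commands could look like the following Go sketch; all field names here are assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// MissionSpec is a hypothetical shape for the extended CRD.
type MissionSpec struct {
	Workloads []string `json:"workloads,omitempty"` // serialized pod/deployment manifests to propagate
	Commands  []string `json:"commands,omitempty"`  // new in this release: commands to run on matched clusters
	Placement []string `json:"placement,omitempty"` // target edge cluster selectors (assumed shape)
}

// Mission pairs a name with its spec, mirroring a typical CRD layout.
type Mission struct {
	Name string      `json:"name"`
	Spec MissionSpec `json:"spec"`
}

func main() {
	m := Mission{
		Name: "configure-face-recognition",
		Spec: MissionSpec{
			Commands:  []string{"kubectl apply -f /etc/face-recognition/app.yaml"},
			Placement: []string{"edge-cluster-1"},
		},
	}
	out, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(out))
}
```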

To understand this feature and see it in action, we've showcased it in this demo video with a face recognition app on the edge. When manually deploying this app, different (and quite complicated) procedures need to be followed in Arktos/vanilla clusters, as documented in:

With the new Mission CRD features in this release, the deployment of such apps can be fully scripted. This allows applications to be provisioned automatically on edge clusters that DevOps staff might not have access to. Newly connected edge clusters can also automatically receive and run such workloads.

Edge networking

This release also focused on prototyping the key components for edge-edge communication, where endpoints (e.g. pods) in different edge clusters that belong to the same VPC can communicate edge to edge via VPC addresses. This work is based on and extends the Mizar project. The following components were designed and tested in this release:

  • Control plane (Python)
    • Edge gateway host selection (sketched after this list), where the gateway
      • Resides on a node of the cluster running the Mizar operator (a "droplet" in Mizar's view), and
      • Runs as a "bouncer" for subnets that live in other edge clusters.
    • Maintain the same VPC VNI when a VPC spans edge clusters
  • Data plane (XDP)
    • Disable the short path between divider and gateway
    • Inter-gateway communication
      • Option 1: Pass packets to user space on the gateway host
      • Option 2: Via XDP direct connect
  • Gateway user space program
    • Capture and decapsulate packets from XDP on the gateway host
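A minimal sketch of the gateway host selection above, assuming a hypothetical droplet label and a trivial first-match policy; the actual selection logic in the POC may differ.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// "mizar.com/droplet=true" is a hypothetical selector; Mizar views
	// nodes as droplets, but the exact label is an assumption here.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(),
		metav1.ListOptions{LabelSelector: "mizar.com/droplet=true"})
	if err != nil {
		panic(err)
	}
	if len(nodes.Items) == 0 {
		panic("no droplet node available to host the edge gateway")
	}
	// Trivial policy for the sketch: take the first candidate node.
	fmt.Println("edge gateway host:", nodes.Items[0].Name)
}
```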

Release v0.1

01 Sep 00:13
1f5bf77

Welcome to Centaurus Edge!

This is the first release of the project.

The main focus of this release is an initial design of the Centaurus edge landscape and a solid implementation that supports the fundamental edge cluster scenario, namely hierarchical edge clusters. The detailed design can be found in the design doc.

The highlight features of this release include:

  • Scoping and feature designs for Centaurus edge, specifically:
    • Edge clusters with flexible flavors such as Arktos, K8s, and K3s,
    • Workload delivery and status reporting and aggregation,
    • Hierarchical edge cluster architecture, and
    • Direct communication between edge clusters.
  • CRDs and operators for managing workloads such as pods and deployments in edge clusters.
  • A new management module that integrates with the KubeEdge components and supports:
    • Running edge clusters (such as K8s, Arktos, etc.) in addition to standalone edge nodes.
    • Managing edge clusters and workloads from the upper level clusters using the provided Mission CRD encapsulation.
    • Allowing edge clusters and workloads to continue running during network disconnection with other clusters in the hierarchy.
    • Connecting edge clusters in a hierarchically distributed fashion.
  • Workload distribution (e.g. sending a deployment to a set of selected edge clusters) with retries for reliable delivery (see the sketch after this list).
  • Workload and edge cluster status management and reporting.
    • Reporting edge cluster and workload health and status to upper-level clusters.
    • Collecting and aggregating status from lower level clusters.
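As referenced in the workload distribution item above, the retry behavior can be sketched as follows; sendMission and the backoff policy are illustrative stand-ins, not the actual Fornax code.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// sendMission is a placeholder; the real module would push the Mission
// object down to the edge cluster and wait for an acknowledgement.
func sendMission(cluster, mission string) error {
	return errors.New("edge cluster unreachable") // simulate a delivery failure
}

// deliverWithRetries resends until success or until attempts run out,
// backing off exponentially between tries.
func deliverWithRetries(cluster, mission string, attempts int) error {
	backoff := time.Second
	for i := 0; i < attempts; i++ {
		if err := sendMission(cluster, mission); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("mission %q not delivered to %s after %d attempts",
		mission, cluster, attempts)
}

func main() {
	if err := deliverWithRetries("edge-cluster-1", "face-recognition", 3); err != nil {
		fmt.Println(err)
	}
}
```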

In addition, considering the high complexity of setting up distributed and interconnected edge clusters, detailed setup and test plans are provided as validation, examples, and user guidance.