diff --git a/sandbox/Sandbox13/configurations.rst b/sandbox/Sandbox13/configurations.rst index f8aa5b37cc..7a4f5bd15c 100644 --- a/sandbox/Sandbox13/configurations.rst +++ b/sandbox/Sandbox13/configurations.rst @@ -3,48 +3,56 @@ ******************************** Provided Example Configurations ******************************** + +.. contents:: + :local: + Once you log into the Netris Controller, you will find that certain services have already been pre-configured for you to explore and interact with. You can also learn how to create some of these services yourself by following the step-by-step instructions in the :ref:`"Learn by Creating Services"` document. V-Net (Ethernet/Vlan/VXlan) Example =================================== -After logging into the Netris Controller by visiting `https://sandbox13.netris.io `_ and navigating to **Services → V-Net**, you will find a V-Net service named "**vnet-example**" already configured for you as an example. +To access the V-Net service example, first log into the Netris Controller by visiting `https://Sandbox13.netris.io `_ and navigate to **Services → V-Net**, where you will find a pre-configured V-Net service named "**vnet-example**". -If you examine the particular service settings ( select **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**vnet-example**" service), you will find that the services is configured on the second port of **switch 21** named "**swp2(swp2)@sw21-nyc (Admin)**". +To examine the service settings, select **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**vnet-example**" service. There you'll see that the V-Net service is configured with VLAN ID **45** in order to enable **EVPN Multihoming** on the underlying switches. -The V-Net servicers is also configured with both an IPv4 and IPv6 gateway, **192.168.45.1** (from the "**192.168.45.0/24 (EXAMPLE)**" subnet) and **2607:f358:11:ffcd::1** (from the "**2607:f358:11:ffcd::/64 (EXAMPLE IPv6)**" subnet) respectively. +You'll also see that the V-Net service is configured with both an IPv4 gateway (**192.168.45.1**) from the "**192.168.45.0/24 (EXAMPLE)**" subnet and an IPv6 gateway (**2607:f358:11:ffcd::1**) from the "**2607:f358:11:ffcd::/64 (EXAMPLE IPv6)**" subnet. + +Additionally, the V-Net service is configured to utilize network interfaces on both switches 21 and 22. Specifically, it is connected to **swp4(swp4)@sw21-nyc (Admin)** on switch 21 and **swp4(swp4)@sw22-nyc (Admin)** on switch 22. You may also verify that the service is working properly from within the GUI: (*\*Fields not specified should remain unchanged and retain default values*) -1. Navigate to **Net → Looking Glass**. -2. Select switch "**sw21-nyc(10.254.46.21)**" (the switch the "**vnet-example**" service is configured on) from the **Select device** drop-down menu. -3. Select "**Ping**" from the **Command** drop-down menu. -4. Type ``192.168.45.64`` (the IP address of **srv04-nyc** connected to **swp2@sw21-nyc**) in the field labeled **IPv4 address**. -5. Click **Submit**. +1. Navigate to **Network → Looking Glass**. +2. Make sure "**vpc-1:Default**" is selected from the **VPC** drop-down menu. +3. Select "**SoftGate1(45.38.161.144)**" from the **Hardware** drop-down menu. +4. Leave "**Family: IPV4**" as the selected choice in the **Address Family** drop-down menu. +5. Select "**Ping**" from the **Command** drop-down menu. +6. 
Leave the "**Selecet IP address**" as the selected choice from the **Source** drop-down menu. +7. Type ``192.168.45.64`` (the IP address configured on **bond0.45** on **srv04-nyc**) in the field labeled **IPv4 address**. +8. Click **Submit**. -The result should look similar to the output below, indicating that the communication between switch **sw21-nyc** and server **srv04-nyc** is working properly thanks to the configured V-Net service. +The result should look similar to the output below, indicating that the communication between SoftGate **SoftGate1** and server **srv04-nyc** is working properly thanks to the configured V-Net service. .. code-block:: shell-session - sw21-nyc# ip vrf exec Vrf_netris ping -c 5 192.168.45.64 + SoftGate1# ping -c 5 192.168.45.64 PING 192.168.45.64 (192.168.45.64) 56(84) bytes of data. - 64 bytes from 192.168.45.64: icmp_seq=1 ttl=64 time=0.562 ms - 64 bytes from 192.168.45.64: icmp_seq=2 ttl=64 time=0.745 ms - 64 bytes from 192.168.45.64: icmp_seq=3 ttl=64 time=0.690 ms - 64 bytes from 192.168.45.64: icmp_seq=4 ttl=64 time=0.737 ms - 64 bytes from 192.168.45.64: icmp_seq=5 ttl=64 time=0.666 ms - + 64 bytes from 192.168.45.64: icmp_seq=1 ttl=61 time=6.29 ms + 64 bytes from 192.168.45.64: icmp_seq=2 ttl=61 time=5.10 ms + 64 bytes from 192.168.45.64: icmp_seq=3 ttl=61 time=4.82 ms + 64 bytes from 192.168.45.64: icmp_seq=4 ttl=61 time=4.82 ms + 64 bytes from 192.168.45.64: icmp_seq=5 ttl=61 time=4.79 ms --- 192.168.45.64 ping statistics --- - 5 packets transmitted, 5 received, 0% packet loss, time 4092ms - rtt min/avg/max/mdev = 0.562/0.680/0.745/0.065 ms + 5 packets transmitted, 5 received, 0% packet loss, time 4002ms + rtt min/avg/max/mdev = 4.787/5.161/6.285/0.572 ms If you are interested in learning how to create a V-Net service yourself, please refer to the step-by-step instructions found in the :ref:`"V-Net (Ethernet/Vlan/VXlan)"` section of the :ref:`"Learn by Creating Services"` document. -More details about V-Net (Ethernet/Vlan/VXlan) can be found on the the :ref:`"V-NET"` page. +More details about V-Net (Ethernet/Vlan/VXlan) can be found on the the :ref:`V-Net"` page. E-BGP (Exterior Border Gateway Protocol) Example ================================================ -Navigate to **Net → E-BGP**. Here, aside from the necessary system generated IPv4/IPv6 E-BGP peer connections between the two border routers ( **SoftGate1** & **SoftGate2** ) and the rest of the switching fabric (which can be toggled on/off using the **Show System Generated** toggle at the top of the page), you will also find two E-BGP sessions named "**iris-isp1-ipv4-example**" and "**iris-isp1-ipv6-example**" configured as example with **IRIS ISP1**. This ensures communication between the internal network with the Internet. +Navigate to **Network → E-BGP**. Here, aside from the required system generated IPv4/IPv6 E-BGP peer connections between the two border routers ( **SoftGate1** & **SoftGate2** ) and the rest of the switching fabric (which can be toggled on/off using the **Show System Generated** toggle at the top of the page), you will also find two E-BGP sessions named "**iris-isp1-ipv4-example**" and "**iris-isp1-ipv6-example**" configured as examples with **IRIS ISP1**. This ensures communication between the internal network and the Internet. 
You may examine the particular session configurations of the E-BGP connections by selecting **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of either the "**iris-isp1-ipv4-example**" or "**iris-isp1-ipv6-example**" connection. You may also expand the **Advanced** section located toward the bottom of the **Edit** window to access the more advanced settings available when configuring an E-BGP session. @@ -54,7 +62,7 @@ More details about E-BGP (Exterior Border Gateway Protocol) can be found on the NAT (Network Address Translation) Example ========================================= -Navigate to **Net → NAT** and you will find a NAT rule named "**NAT Example**" configured as an example for you. The configured "**SNAT**" rule ensures that there can be communication between the the private "**192.168.45.0/24 (EXAMPLE)**" subnet and the Internet. +Navigate to **Network → NAT** and you will find a NAT rule named "**NAT Example**" configured as an example for you. The configured "**SNAT**" rule enables communication between the private "**192.168.45.0/24 (EXAMPLE)**" subnet and the Internet. You can examine the particular settings of the NAT rule by clicking **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**NAT Example**" service. @@ -74,10 +82,10 @@ More details about NAT (Network Address Translation) can be found on the :ref:`" ACL (Access Control List) Example ================================= -Navigate to **Services → ACL** and you will find an ACL services named "**V-Net Example to WAN**" set up as an example for you. This particular ACL ensures that the connectivity between the the private "**192.168.45.0/24 (EXAMPLE)**" subnet and the Internet is permitted through all protocols and ports, even in a scenario where the the "**ACL Default Policy**" for the "**US/NYC**" site configured under **Net → Sites** in our Sandbox is changed from **Permit** to **Deny**. +Navigate to **Services → ACL** and you will find an ACL service named "**V-Net Example to WAN**" set up as an example for you. This particular ACL ensures that connectivity between the private "**192.168.45.0/24 (EXAMPLE)**" subnet and the Internet is permitted through all protocols and ports, even in a scenario where the "**ACL Default Policy**" for the "**US/NYC**" site configured under **Network → Sites** in our Sandbox is changed from **Permit** to **Deny**. You can examine the particular settings of this ACL policy by selecting **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**V-Net Example to WAN**" ACL policy. By utilizing ACLs, you can impose granular controls and implement policies that would permit or deny particular connections of any complexity. If you are interested in learning how to create ACL policies yourself, please refer to the step-by-step instructions found in the :ref:`"ACL (Access Control List)"` section of the :ref:`"Learn by Creating Services"` document. -More details about ACL (Access Control List) can be found on the :ref:`"ACL"` page. \ No newline at end of file +More details about ACL (Access Control List) can be found on the :ref:`"ACL"` page. 
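As a quick end-to-end check of the NAT and ACL examples above, you can test outbound connectivity from a server inside the example subnet. A minimal sketch from **srv04-nyc** (the prompt, timings, and the reported address are illustrative; the address you see should be the public IP that the "**NAT Example**" SNAT rule translates to):

.. code-block:: shell-session

    srv04-nyc:~$ ping -c 2 1.1.1.1
    PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
    64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=5.12 ms
    64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=5.03 ms

    srv04-nyc:~$ curl -s https://ifconfig.me
    45.38.161.x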
diff --git a/sandbox/Sandbox13/creating-services.rst b/sandbox/Sandbox13/creating-services.rst index 001fd9aca0..93ac9a1515 100644 --- a/sandbox/Sandbox13/creating-services.rst +++ b/sandbox/Sandbox13/creating-services.rst @@ -4,6 +4,9 @@ Learn by Creating Services ************************** +.. contents:: + :local: + By following these short exercises, we will demonstrate how the :ref:`Netris Controller`, in conjunction with the :ref:`Netris Agents` deployed on the switches and SoftGates, is able to intelligently and automagically deploy the necessary configurations across the network fabric to provision the desired services within a matter of seconds. .. _s13-v-net: @@ -23,23 +26,24 @@ Let's create a V-Net service to give server **srv05-nyc** the ability to reach i * In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) - 1. Log into the Netris Controller by visiting `https://sandbox13.netris.io `_ and navigate to **Services → V-Net**. + 1. Log into the Netris Controller by visiting `https://Sandbox13.netris.io `_ and navigate to **Services → V-Net**. 2. Click the **+ Add** button in the top right corner of the page to get started with creating a new V-Net service. 3. Define a name in the **Name** field (e.g. ``vnet-customer``). 4. From the **Sites** drop-down menu, select "**US/NYC**". - 5. From the **Owner** drop-down menu, select "**Demo**". - 6. From the **IPv4 Gateway** drop-down menu, select the "**192.168.46.0/24(CUSTOMER)**" subnet. - 7. The first available IP address "**192.168.46.1**" is automatically selected in the second drop-down menu of the list of IP addresses. This matches the results of the ``ip route ls`` command output on **srv05-nyc** we observed earlier. - 8. From the **Add Network Interface** drop-down menu put a check mark next to switch port "**swp2(swp2 | srv05-nyc)@sw22-nyc (Demo)**", which we can see is the the port where **srv05-nyc** is wired into when we reference the :ref:`"Sandbox Topology diagram"`. + 5. From the **VLAN ID** drop-down menu, select "**Enter manually**" and type "**46**" into the field to the right. + 6. From the **Owner** drop-down menu, select "**Demo**". + 7. From the **IPv4 Gateway** drop-down menu, select the "**192.168.46.0/24(CUSTOMER)**" subnet. + 8. The first available IP address "**192.168.46.1**" is automatically selected in the second drop-down menu of the list of IP addresses. This matches the results of the ``ip route ls`` command output on **srv05-nyc** we observed earlier. + 9. From the **Add Network Interface** drop-down menu, put a check mark next to both network interfaces "**swp5(swp5 | srv05-nyc)@sw12-nyc (Demo)**" and "**swp5(swp5 | srv05-nyc)@sw21-nyc (Demo)**", which, as we can see by referencing the :ref:`"Sandbox Topology diagram"`, are the interfaces that **srv05-nyc** is wired into. - * The drop-down menu only contains this single switch port as it is the only port that has been assigned to the **Demo** tenant. + * The drop-down menu only contains these two network interfaces as they are the only interfaces that have been assigned to the **Demo** tenant. - 9. Check the **Untag** check-box and click the **Add** button. - 10. Click the **Add** button at the bottom right of the "**Add new V-Net**" window and the service will start provisioning. + 10. Click the **Add** button. + 11. Click the **Add** button at the bottom right of the "**Add new V-Net**" window and the service will start provisioning (a quick server-side sanity check is sketched right after this list). 
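Besides watching the ping you started earlier, you can sanity-check the new V-Net from the server side. A minimal sketch, assuming **srv05-nyc** carries VLAN **46** on a ``bond0.46`` sub-interface (the interface name and the host address shown are illustrative, not values captured from this Sandbox):

.. code-block:: shell-session

    srv05-nyc:~$ ip -br addr show dev bond0.46
    bond0.46@bond0   UP   192.168.46.64/24

    srv05-nyc:~$ ping -c 2 192.168.46.1
    PING 192.168.46.1 (192.168.46.1) 56(84) bytes of data.
    64 bytes from 192.168.46.1: icmp_seq=1 ttl=64 time=1.58 ms
    64 bytes from 192.168.46.1: icmp_seq=2 ttl=64 time=1.61 ms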
-After just a few seconds, once fully provisioned, you will start seeing successful ping replies, similar in form to "**64 bytes from 192.168.46.1: icmp_seq=55 ttl=64 time=1.66 ms**", to the ping that was previously started in the terminal window, indicating that now the gateway address is reachable from host **srv05-nyc**. +After just a few seconds, once fully provisioned, you will start seeing successful ping replies, similar in form to "**64 bytes from 192.168.46.1: icmp_seq=55 ttl=64 time=1.66 ms**", to the ping that was previously started in the terminal window, indicating that now the gateway address is accessible from host **srv05-nyc**. -More details about V-Net (Ethernet/Vlan/VXlan) can be found on the the :ref:`"V-NET"` page. +More details about V-Net (Ethernet/Vlan/VXlan) can be found on the :ref:`"V-Net"` page. .. _s13-e-bgp: @@ -51,7 +55,7 @@ Optionally you can configure an E-BGP session to IRIS ISP2 for fault tolerance. * In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) - 1. Log into the Netris Controller by visiting `https://sandbox13.netris.io `_ and navigate to **Net → E-BGP**. + 1. Log into the Netris Controller by visiting `https://Sandbox13.netris.io `_ and navigate to **Network → E-BGP**. 2. Click the **+ Add** button in the top right corner of the page to configure a new E-BGP session. 3. Define a name in the **Name** field (e.g. ``iris-isp2-ipv4-customer``). 4. From the **Site** drop-down menu, select "**US/NYC**". @@ -60,19 +64,19 @@ Optionally you can configure an E-BGP session to IRIS ISP2 for fault tolerance. * For the purposes of this exercise, the required switch port can easily be found by typing ``ISP2`` in the Search field. - 7. For the **VLAN ID** field, uncheck the **Untag** check-box and type in ``1132``. + 7. For the **VLAN ID** field, type in ``1132`` while leaving the **Untagged** check-box unchecked. 8. In the **Neighbor AS** field, type in ``65007``. 9. In the **Local IP** field, type in ``45.38.161.166``. 10. In the **Remote IP** field, type in ``45.38.161.165``. - 11. Expand the **Advanced** section - 12. In the **Prefix List Inbound** field, type in ``permit 0.0.0.0/0`` - 13. In the **Prefix List Outbound** field, type in ``permit 45.38.161.144/28 le 32`` - 14. And finally click **Add** + 11. Expand the **Advanced** section. + 12. In the **Prefix List Outbound** field, type in ``permit 45.38.161.144/28 le 32``. + 13. And finally, click **Add**. -Allow up to 1 minute for both sides of the BGP sessions to come up and then the BGP state on **Net → E-BGP** page as well as on **Telescope → Dashboard** pages will turn green, indication a successfully established BGP session. We can glean further insight into the BGP session details by navigating to **Net → Looking Glass**. +Allow up to 1 minute for both sides of the BGP session to come up, after which the BGP state on the **Network → E-BGP** and **Telescope → Dashboard** pages will turn green, indicating a successfully established BGP session. We can glean further insight into the BGP session details by navigating to **Network → Looking Glass**. - 1. Select "**SoftGate2(45.38.161.145)**" (the border router where our newly created BGP session is terminated on) from the **Select device** drop-down menu. - 2. Leaving the **Family** drop-down menu on IPv4 and the **Command** drop-down menu on "**BGP Summary**", click on the **Submit** button. + 1. Make sure "**vpc-1:Default**" is selected from the **VPC** drop-down menu. + 2. 
Select "**SoftGate2(45.38.161.145)**" (the border router where our newly created BGP session is terminated on) from the **Hardware** drop-down menu. + 3. Leaving the **Address Family** drop-down menu on "**Family: IPV4**" and the **Command** drop-down menu on "**Command: BGP Summary**", click on the **Submit** button. We are presented with the summary of the BGP sessions terminated on **SoftGate2**. You can also click on each BGP neighbor name to further see the "**Advertised routes**" and "**Routes**" received to/from that BGP neighbor. @@ -91,23 +95,23 @@ Now that we have both internal and external facing services, we can aim for our 3. Start a ping session towards any public IP address (e.g. ``ping 1.1.1.1``). 4. Keep the ping running as an indicator for when the service starts to work. -Let's configure a source NAT so our Customer subnet **192.168.46.0/24**, which is used in the V-Net services called **vnet-customer**, can communicate with the Internet. +Let's configure a Source NAT so our Customer subnet **192.168.46.0/24**, which is used in the V-Net services called **vnet-customer**, can communicate with the Internet. * In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) - 1. Log into the Netris Controller by visiting `https://sandbox13.netris.io `_ and navigate to **Net → NAT**. + 1. Log into the Netris Controller by visiting `https://Sandbox13.netris.io `_ and navigate to **Network → NAT**. 2. Click the **+ Add** button in the top right corner of the page to define a new NAT rule. 3. Define a name in the **Name** field (e.g. ``NAT Customer``). 4. From the **Site** drop-down menu, select "**US/NYC**". 5. From the **Action** drop-down menu, select "**SNAT**". - 6. From the **Protocol** drop-down menu, select "**ALL**". + 6. Leave **ALL** selected in the **Protocol** drop-down menu. 7. In the **Source Address** field, type in ``192.168.46.0/24``. - 8. In the **Destination Address** field, type in ``0.0.0.0/0``. + 8. In the **Destination Address** field, leave the default value of ``0.0.0.0/0``. 9. Toggle the switch from **SNAT to Pool** to **SNAT to IP**. 10. From the **Select subnet** drop-down menu, select the "**45.38.161.148/30 (NAT)**" subnet. 11. From the **Select IP** drop-down menu, select the "**45.38.161.148/32**" IP address. - * This public IP is part of **45.38.161.148/30 (NAT)** subnet which is configured in the **NET → IPAM** section with the purpose of **NAT** and indicated in the SoftGate configurations to be used as a global IP for NAT by the :ref:`"Netris SoftGate Agent"`.. + * This public IP address is part of **45.38.161.148/30 (NAT)** subnet which is configured in the **Network → IPAM** section with the purpose of **NAT** and indicated in the **SoftGate** configurations to be used as a global IP for NAT by the :ref:`"Netris SoftGate Agent"`. 12. Click **Add** @@ -117,13 +121,13 @@ More details about NAT (Network Address Translation) can be found on the :ref:`" .. _s13-l3lb: -L3LB (Anycast L3 load balancer) +L3LB (Anycast L3 Load Balancer) =============================== -In this exercise we will quickly configure an Anycast IP address in the Netris Controller for two of our :ref:`"ROH (Routing on the Host)"` servers (**srv01-nyc** & **srv02-nyc**) which both have a running Web Server configured to display a simple HTML webpage and observe **ECMP** load balancing it in action. 
+In this exercise we will quickly configure an Anycast IP address in the Netris Controller for two of our :ref:`"ROH (Routing on the Host)"` servers (**srv01-nyc** & **srv02-nyc**), which both have a running **Web Server** configured to display a simple HTML webpage, and observe **ECMP** load balancing in action. * In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) - 1. Log into the Netris Controller by visiting `https://sandbox13.netris.io `_ and navigate to **Services → ROH**. + 1. Log into the Netris Controller by visiting `https://Sandbox13.netris.io `_ and navigate to **Services → ROH**. 2. Click **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**srv01-nyc**" server. 3. From the **IPv4** drop-down menu, select the "**45.38.161.152/30 (L3 LOAD BALANCER)**" subnet. 4. From the second drop-down menu that appears to the right, select the first available IP "**45.38.161.152**". @@ -141,12 +145,14 @@ In this exercise we will quickly configure an Anycast IP address in the Netris C .. image:: /images/l3lb_srv01.png :align: center + :alt: SRV01 L3LB + :target: ../../_images/l3lb_srv01.png -In order to trigger the L3 load balancer to switch directing the traffic towards the other backend server (in this case from **srv01-nyc** to **srv02-nyc**, which based on the unique hash in your situation could be the other way around), we can simulate the unavailability of backend server we ended up on by putting it in **Maintenance** mode. +In order to trigger the L3 load balancer to redirect traffic towards the other backend server (in this case from **srv01-nyc** to **srv02-nyc**; based on the unique hash, in your case it could be the other way around), we can simulate the unavailability of the backend server we ended up on by putting it in **Maintenance** mode. * Back in the Netris Controller, navigate to **Services → L3 Load Balancer**. - 1. Expand the **LB Vip** that was created when we defined the **Anycast** IP address earlier by clicking on the **>** to the left of "**45.38.161.152 (name_45.38.161.152)**". + 1. Expand the **LB Vip** that was created when we defined the **Anycast** IP address earlier by clicking on the **>** button to the left of "**45.38.161.152 (name_45.38.161.152)**". 2. Click **Action v** to the right of the server you originally ended up on (in this case **srv01-nyc**). 3. Click **Maintenance on**. 4. Click **Maintenance** one more time in the pop-up window. @@ -157,6 +163,8 @@ In order to trigger the L3 load balancer to switch directing the traffic towards .. image:: /images/l3lb_srv02.png :align: center + :alt: SRV02 L3LB + :target: ../../_images/l3lb_srv02.png More details about L3LB (Anycast L3 Load Balancer) can be found on the :ref:`"L3 Load Balancer (Anycast LB)"` page. @@ -176,12 +184,12 @@ Now that **srv05-nyc** can communicate with both internal and external hosts, le * In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) - 1. Log into the Netris Controller by visiting `https://sandbox13.netris.io `_ and navigate to **Net → Sites**. + 1. Log into the Netris Controller by visiting `https://Sandbox13.netris.io `_ and navigate to **Network → Sites**. 2. Click **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the **US/NYC** site. 3. From the **ACL Default Policy** drop-down menu, change the value from "**Permit**" to "**Deny**". 4. Click **Save**. 
-Soon you will notice that there are no new replies to our previously started ``ping 1.1.1.1`` command in the terminal window, indicating that the **1.1.1.1** IP address is no longer reachable.Now that the **Default ACL Policy** is set to **Deny**, we need to configure an **ACL** entry that will allow the **srv05-nyc** server to communicate with the Internet. +Soon you will notice that there are no new replies to our previously started ``ping 1.1.1.1`` command in the terminal window, indicating that the **1.1.1.1** IP address is no longer reachable. Now that the **Default ACL Policy** is set to **Deny**, we need to configure an **ACL** entry that will allow the **srv05-nyc** server to communicate with the Internet. * Back in the web browser: (*\*Fields not specified should remain unchanged and retain default values*) @@ -192,9 +200,7 @@ Soon you will notice that there are no new replies to our previously started ``p 5. In the Source field, type in ``192.168.46.0/24``. 6. In the Destination field, type in ``0.0.0.0/0``. 7. Click **Add**. - 8. Select **Approve** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the newly created "**V-Net Customer to WAN**" ACL. - 9. Click **Approve** one more time in the pop-up window. -Once the Netris Controller has finished syncing the new ACL policy with all member devices, we can see in the terminal window that replies to our ``ping 1.1.1.1`` command have resumed, indicating that the **srv05-nyc** server can communicate with the Internet once again.. +You can observe the status of the syncing process by clicking on the yellow **Syncing** label at the top right of the **ACL** window. Once the Netris Controller has finished syncing the new ACL policy with all relevant member devices, the label will turn green and read **Synced**. Back in the terminal window, we can observe that the replies to our ``ping 1.1.1.1`` command have resumed, indicating that the **srv05-nyc** server can communicate with the Internet once again. More details about ACL (Access Control List) can be found on the :ref:`"ACL"` page. diff --git a/sandbox/Sandbox13/onprem-k8s.rst b/sandbox/Sandbox13/onprem-k8s.rst index 0982f397dd..84d51c36c3 100644 --- a/sandbox/Sandbox13/onprem-k8s.rst +++ b/sandbox/Sandbox13/onprem-k8s.rst @@ -1,7 +1,7 @@ .. _s13-k8s: *************************************** -Learn Netris operations with Kubernetes +Learn Netris Operations with Kubernetes *************************************** .. contents:: :local: @@ -9,29 +9,33 @@ Learn Netris operations with Kubernetes Intro ===== -This Sandbox environment provides an existing Kubernetes cluster that has been deployed via `Kubespray `_. For this scenario, we will be using the `external LB `_ option in Kubespray. A dedicated Netris L4LB service has been created in the Sandbox Controller to access the k8s apiservers from users and non-master nodes sides. +The Sandbox environment offers a pre-existing 3-node Kubernetes cluster deployed through K3S in `HA Mode `_. To enable user access to the Kubernetes API, a dedicated Netris L4LB service has been created in the Sandbox Controller. Furthermore, this L4LB address serves as the ``K3S_URL`` environment variable for all nodes within the cluster. .. 
image:: /images/sandbox-l4lb-kubeapi.png :align: center + :alt: Sandbox L4LB KubeAPI + :target: ../../_images/sandbox-l4lb-kubeapi.png -To access the built-in Kubernetes cluster, put "Kubeconfig" file which you received by the introductory email into your ``~/.kube/config`` or set "KUBECONFIG" environment variable ``export KUBECONFIG=~/Downloads/config`` on your local machine. After that try to connect to the k8s cluster: +To access the built-in Kubernetes cluster, put the "Kubeconfig" file which you received via the introductory email into your ``~/.kube/config``, or set the "KUBECONFIG" environment variable using ``export KUBECONFIG=~/Downloads/config`` on your local machine. Afterwards, try to connect to the k8s cluster: .. code-block:: shell-session kubectl cluster-info -The output below means you've successfully connected to the sandbox cluster: +If your output matches the one below, that means you've successfully connected to the Sandbox cluster: .. code-block:: shell-session - Kubernetes master is running at https://api.k8s-sandbox13.netris.io:6443 + Kubernetes control plane is running at https://api.k8s-Sandbox13.netris.io:6443 + CoreDNS is running at https://api.k8s-Sandbox13.netris.io:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy + Metrics-server is running at https://api.k8s-Sandbox13.netris.io:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy - To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. + To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. Install Netris Operator ======================= -The first step to integrate the Netris Controller with the Kubernetes API is to install the Netris Operator. Installation can be accomplished by installing regular manifests or a `helm chart `_. For this example we will use the Kubernetes regular manifests: +The first step to integrate the Netris Controller with the Kubernetes API is to install the Netris Operator. Installation can be accomplished by installing regular manifests or a `helm chart `_. For this example, we will use the Kubernetes regular manifests: 1. Install the latest Netris Operator: @@ -44,7 +48,7 @@ The first step to integrate the Netris Controller with the Kubernetes API is to .. code-block:: shell-session kubectl -nnetris-operator create secret generic netris-creds \ - --from-literal=host='https://sandbox13.netris.io' \ + --from-literal=host='https://Sandbox13.netris.io' \ --from-literal=login='demo' --from-literal=password='Your Demo user pass' 3. Inspect the pod logs and make sure the operator is connected to the Netris Controller: @@ -85,11 +89,12 @@ As you can see, the service type is "ClusterIP": .. code-block:: shell-session NAME READY STATUS RESTARTS AGE - pod/podinfo-576d5bf6bd-7z9jl 1/1 Running 0 49s - pod/podinfo-576d5bf6bd-nhlmh 1/1 Running 0 33s - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - service/podinfo ClusterIP 172.21.65.106 <none> 9898/TCP,9999/TCP 50s + pod/podinfo-7cf557d9d7-6gfwx 1/1 Running 0 34s + pod/podinfo-7cf557d9d7-nb2t7 1/1 Running 0 18s + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 33m + service/podinfo ClusterIP 10.43.68.103 <none> 9898/TCP,9999/TCP 35s In order to request access from outside, change the type to "LoadBalancer": @@ -107,13 +112,16 @@ Now we can see that the service type has changed to LoadBalancer, and "EXTERNAL- .. 
code-block:: shell-session - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - podinfo LoadBalancer 172.21.65.106 <pending> 9898:32584/TCP,9999:30365/TCP 8m57s + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 37m + podinfo LoadBalancer 10.43.68.103 <pending> 9898:32486/TCP,9999:30455/TCP 3m45s Going into the Netris Controller web interface, navigate to **Services → L4 Load Balancer**, and you may see L4LBs provisioning in real-time. If you do not see the provisioning process, it is likely because it has already completed. Look for the service with the name **"podinfo-xxxxxxxx"**. .. image:: /images/sandbox-podinfo-prov.png :align: center + :alt: Sandbox PodInfo Provisioning + :target: ../../_images/sandbox-podinfo-prov.png After provisioning has finished, let's look at the service in k8s one more time: @@ -125,8 +133,9 @@ You can see that "EXTERNAL-IP" has been injected into Kubernetes: .. code-block:: shell-session - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - podinfo LoadBalancer 172.21.65.106 45.38.161.157 9898:32584/TCP,9999:30365/TCP 9m17s + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 29m + podinfo LoadBalancer 10.43.42.190 45.38.161.157 9898:30771/TCP,9999:30510/TCP 5m14s Let's try to curl it (remember to replace the IP below with the IP that has been assigned in the previous command): @@ -139,17 +148,17 @@ The application is now accessible directly on the internet: .. code-block:: json { - "hostname": "podinfo-576d5bf6bd-nhlmh", - "version": "6.0.0", - "revision": "", - "color": "#34577c", - "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", - "message": "greetings from podinfo v6.0.0", - "goos": "linux", - "goarch": "amd64", - "runtime": "go1.16.5", - "num_goroutine": "8", - "num_cpu": "4" + "hostname": "podinfo-7cf557d9d7-6gfwx", + "version": "6.6.0", + "revision": "357009a86331a987811fefc11be1350058da33fc", + "color": "#34577c", + "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", + "message": "greetings from podinfo v6.6.0", + "goos": "linux", + "goarch": "amd64", + "runtime": "go1.21.7", + "num_goroutine": "8", + "num_cpu": "2" } As seen, the "PodInfo" developers decided to expose port 9898 for HTTP; let's switch it to 80: @@ -162,6 +171,8 @@ Wait a few seconds, you can see the provisioning process on the controller: .. image:: /images/sandbox-podinfo-ready.png :align: center + :alt: Sandbox PodInfo Ready + :target: ../../_images/sandbox-podinfo-ready.png Curl again, without specifying a port: @@ -174,17 +185,17 @@ The output is similar to this: .. code-block:: json { - "hostname": "podinfo-576d5bf6bd-nhlmh", - "version": "6.0.0", - "revision": "", - "color": "#34577c", - "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", - "message": "greetings from podinfo v6.0.0", - "goos": "linux", - "goarch": "amd64", - "runtime": "go1.16.5", - "num_goroutine": "8", - "num_cpu": "4" + "hostname": "podinfo-7cf557d9d7-6gfwx", + "version": "6.6.0", + "revision": "357009a86331a987811fefc11be1350058da33fc", + "color": "#34577c", + "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", + "message": "greetings from podinfo v6.6.0", + "goos": "linux", + "goarch": "amd64", + "runtime": "go1.21.7", + "num_goroutine": "8", + "num_cpu": "2" } You can also verify the application is reachable by putting this IP address directly into your browser. 
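For reference, the combined effect of the two edits above (service type switched to ``LoadBalancer`` and the HTTP port moved from 9898 to 80) corresponds to a Service spec along the following lines. This is a reconstructed sketch based on the outputs above, not the manifest shipped with podinfo; the port names and the selector label are assumptions:

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: podinfo
    spec:
      type: LoadBalancer    # originally ClusterIP; this is what triggers the Netris L4LB
      selector:
        app: podinfo        # assumed label
      ports:
        - name: http
          port: 80          # externally exposed port (was 9898)
          targetPort: 9898  # podinfo's HTTP port inside the pod
        - name: grpc
          port: 9999
          targetPort: 9999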
@@ -216,8 +227,8 @@ As you can see, there are two L4LB resources, one for each podinfo's service por .. code-block:: shell-session NAME STATE FRONTEND PORT SITE TENANT STATUS AGE - podinfo-default-66d44feb-0278-412a-a32d-73afe011f2c6-tcp-80 active 45.38.161.157 80/TCP US/NYC Admin OK 33m - podinfo-default-66d44feb-0278-412a-a32d-73afe011f2c6-tcp-9999 active 45.38.161.157 9999/TCP US/NYC Admin OK 32m + podinfo-default-5bdf0a53-027d-449f-8896-547e06028c6b-tcp-80 active 45.38.161.157 80/TCP US/NYC Admin OK 7m21s + podinfo-default-5bdf0a53-027d-449f-8896-547e06028c6b-tcp-9999 active 45.38.161.157 9999/TCP US/NYC Admin OK 15m You can't edit/delete them, because Netris Operator will recreate them based on what was originally deployed in the service specifications. @@ -264,9 +275,9 @@ As you can see, provisioning started: .. code-block:: shell-session NAME STATE FRONTEND PORT SITE TENANT STATUS AGE - podinfo-default-d07acd0f-51ea-429a-89dd-8e4c1d6d0a86-tcp-80 active 45.38.161.157 80/TCP US/NYC Admin OK 2m17s - podinfo-default-d07acd0f-51ea-429a-89dd-8e4c1d6d0a86-tcp-9999 active 45.38.161.157 9999/TCP US/NYC Admin OK 3m47s - srv04-5-nyc-http active 45.38.161.158 80/TCP US/NYC Admin Provisioning 6s + podinfo-default-5bdf0a53-027d-449f-8896-547e06028c6b-tcp-80 active 45.38.161.157 80/TCP US/NYC Admin OK 9m56s + podinfo-default-5bdf0a53-027d-449f-8896-547e06028c6b-tcp-9999 active 45.38.161.157 9999/TCP US/NYC Admin OK 17m + srv04-5-nyc-http active 45.38.161.158 80/TCP US/NYC Admin Provisioning 5s When provisioning is finished, you should be able to connect to the L4LB. Try to curl it, using the L4LB frontend address displayed in the above command output: @@ -284,6 +295,8 @@ You can also inspect the L4LB in the Netris Controller web interface: .. image:: /images/sandbox-l4lbs.png :align: center + :alt: Sandbox L4LBs + :target: ../../_images/sandbox-l4lbs.png V-Net Custom Resource --------------------- @@ -298,16 +311,20 @@ Let's create our V-Net manifest: apiVersion: k8s.netris.ai/v1alpha1 kind: VNet metadata: - name: vnet-customer + name: vnet-customer spec: - ownerTenant: Demo - guestTenants: [] - sites: - - name: US/NYC - gateways: - - 192.168.46.1/24 - switchPorts: - - name: swp2@sw22-nyc + ownerTenant: Demo + guestTenants: [] + vlanId: "46" + sites: + - name: US/NYC + gateways: + - prefix: 192.168.46.1/24 + switchPorts: + - name: swp5@sw12-nyc + untagged: "no" + - name: swp5@sw21-nyc + untagged: "no" EOF And apply it: @@ -347,7 +364,7 @@ As we can see, the curl request shows the behavior of "round robin" between the SRV05-NYC curl 45.38.161.158 - SRV05-NYC + SRV04-NYC curl 45.38.161.158 SRV04-NYC @@ -364,18 +381,22 @@ BTW, if you already created "vnet-customer" V-Net as described in the :ref:`"Lea apiVersion: k8s.netris.ai/v1alpha1 kind: VNet metadata: - name: vnet-customer - annotations: - resource.k8s.netris.ai/import: "true" + name: vnet-customer + annotations: + resource.k8s.netris.ai/import: "true" spec: - ownerTenant: Demo - guestTenants: [] - sites: - - name: US/NYC - gateways: - - 192.168.46.1/24 - switchPorts: - - name: swp2@sw22-nyc + ownerTenant: Demo + guestTenants: [] + vlanId: "46" + sites: + - name: US/NYC + gateways: + - prefix: 192.168.46.1/24 + switchPorts: + - name: swp5@sw12-nyc + untagged: "no" + - name: swp5@sw21-nyc + untagged: "no" EOF Apply it: @@ -391,8 +412,8 @@ After applying the manifest containing "import" annotation, the V-Net, created f kubectl get vnet NAME STATE GATEWAYS SITES OWNER STATUS AGE - vnet-customer active 192.168.46.1/24 US/NYC Demo Active 7s - + 
vnet-customer active 192.168.46.1/24 US/NYC Demo Active 2m + BGP Custom Resource ------------------- @@ -417,8 +438,6 @@ Create a yaml file: localIP: 45.38.161.166/30 remoteIP: 45.38.161.165/30 description: Example BGP to ISP2 - prefixListInbound: - - permit 0.0.0.0/0 prefixListOutbound: - permit 45.38.161.144/28 le 32 EOF @@ -452,17 +471,17 @@ The output is similar to this: .. code-block:: shell-session - NAME STATE BGP STATE PORT STATE NEIGHBOR AS LOCAL ADDRESS REMOTE ADDRESS AGE - iris-isp2-ipv4-customer enabled bgp: Established; prefix: 160; time: 00:01:27 Link Up 65007 45.38.161.166/30 45.38.161.165/30 2m3s + NAME STATE BGP STATE PORT STATE NEIGHBOR AS LOCAL ADDRESS REMOTE ADDRESS AGE + iris-isp2-ipv4-customer enabled bgp: Established; prefix: 957240; time: 00:04:02 65007 45.38.161.166/30 45.38.161.165/30 2m3s Feel free to use the import annotation for this BGP if you created it from the Netris Controller web interface previously. -Return to the Netris Controller and navigate to **Net → Topology** to see the new BGP neighbor you created. +Return to the Netris Controller and navigate to **Network → Topology** to see the new BGP neighbor you created. Importing Existing Resources from Netris Controller to Kubernetes ----------------------------------------------------------------- -You can import any custom resources already created from the Netris Controller to k8s by adding the following annotation: +You can import any custom resources already created from the Netris Controller to k8s by adding the following annotation: .. code-block:: yaml @@ -491,7 +510,7 @@ Just add this annotation in any custom resource while creating it. Or if the cus Netris Calico CNI Integration ============================= -Netris Operator can integrate with Calico CNI, in your Sandbox k8s cluster, Calico has already been configured as the CNI, so you can try this integration. It will automatically create BGP peering between cluster nodes and the leaf/TOR switch for each node, then to clean up it will disable Calico Node-to-Node mesh. To understand why you need to configure peering between Kubernetes nodes and the leaf/TOR switch, and why you should disable Node-to-Node mesh, review the `calico docs `_. +Netris Operator can integrate with Calico CNI. In your Sandbox k8s cluster, Calico has already been configured as the CNI, so you can try this integration. It will automatically create BGP peering between cluster nodes and the leaf/TOR switch for each node, and then, to clean up, it will disable the Calico Node-to-Node mesh. To understand why you need to configure peering between Kubernetes nodes and the leaf/TOR switch, and why you should disable Node-to-Node mesh, review the `Calico docs `_. Integration is very simple: you just need to add the annotation to Calico's ``bgpconfigurations`` custom resource. Before doing that, let's see the current state of ``bgpconfigurations``: .. code-block:: shell-session kubectl get bgpconfigurations default -o yaml -As we can see, ``nodeToNodeMeshEnabled`` is enabled, and ``asNumber`` is 64512 (it's Calico default AS number): +As we can see, ``nodeToNodeMeshEnabled`` is enabled: .. code-block:: yaml - apiVersion: crd.projectcalico.org/v1 + apiVersion: projectcalico.org/v3 kind: BGPConfiguration metadata: annotations: ... name: default ... 
spec: - asNumber: 64512 - logSeverityScreen: Info nodeToNodeMeshEnabled: true Let's enable the "netris-calico" integration: @@ -531,11 +548,11 @@ Here are our freshly created BGPs, one for each k8s node: .. code-block:: shell-session - NAME STATE BGP STATE PORT STATE NEIGHBOR AS LOCAL ADDRESS REMOTE ADDRESS AGE - iris-isp2-ipv4-customer enabled bgp: Established; prefix: 160; time: 00:06:18 Link Up 65007 45.38.161.166/30 45.38.161.165/30 7m59s - sandbox13-srv06-nyc-192.168.110.66 enabled 4230000000 192.168.110.1/24 192.168.110.66/24 26s - sandbox13-srv07-nyc-192.168.110.67 enabled 4230000001 192.168.110.1/24 192.168.110.67/24 26s - sandbox13-srv08-nyc-192.168.110.68 enabled 4230000002 192.168.110.1/24 192.168.110.68/24 26s + NAME STATE BGP STATE PORT STATE NEIGHBOR AS LOCAL ADDRESS REMOTE ADDRESS AGE + iris-isp2-ipv4-customer enabled bgp: Established; prefix: 957241; time: 00:15:03 65007 45.38.161.166/30 45.38.161.165/30 16m + sandbox-srv06-192.168.110.66 enabled 4230000000 192.168.110.1/24 192.168.110.66/24 37s + sandbox-srv07-192.168.110.67 enabled 4230000001 192.168.110.1/24 192.168.110.67/24 37s + sandbox-srv08-192.168.110.68 enabled 4230000002 192.168.110.1/24 192.168.110.68/24 37s You might notice that the peering neighbor AS is different from Calico's default 64512. This is because the Netris Operator is setting a particular AS number for each node. @@ -549,11 +566,11 @@ As we can see, our BGP peers have become established: .. code-block:: shell-session - NAME STATE BGP STATE PORT STATE NEIGHBOR AS LOCAL ADDRESS REMOTE ADDRESS AGE - iris-isp2-ipv4-customer enabled bgp: Established; prefix: 160; time: 00:07:48 Link Up 65007 45.38.161.166/30 45.38.161.165/30 8m41s - sandbox13-srv06-nyc-192.168.110.66 enabled bgp: Established; prefix: 5; time: 00:00:44 N/A 4230000000 192.168.110.1/24 192.168.110.66/24 68s - sandbox13-srv07-nyc-192.168.110.67 enabled bgp: Established; prefix: 5; time: 00:00:19 N/A 4230000001 192.168.110.1/24 192.168.110.67/24 68s - sandbox13-srv08-nyc-192.168.110.68 enabled bgp: Established; prefix: 5; time: 00:00:44 N/A 4230000002 192.168.110.1/24 192.168.110.68/24 68s + NAME STATE BGP STATE PORT STATE NEIGHBOR AS LOCAL ADDRESS REMOTE ADDRESS AGE + iris-isp2-ipv4-customer enabled bgp: Established; prefix: 957194; time: 00:18:24 65007 45.38.161.166/30 45.38.161.165/30 19m + sandbox-srv06-192.168.110.66 enabled bgp: Established; prefix: 1; time: 00:01:26 N/A 4230000000 192.168.110.1/24 192.168.110.66/24 2m7s + sandbox-srv07-192.168.110.67 enabled bgp: Established; prefix: 1; time: 00:01:26 N/A 4230000001 192.168.110.1/24 192.168.110.67/24 2m7s + sandbox-srv08-192.168.110.68 enabled bgp: Established; prefix: 1; time: 00:01:26 N/A 4230000002 192.168.110.1/24 192.168.110.68/24 2m7s Now let's check if ``nodeToNodeMeshEnabled`` is still enabled: @@ -565,16 +582,16 @@ It is disabled, which means the "netris-calico" integration process is finished: .. code-block:: yaml - apiVersion: crd.projectcalico.org/v1 + apiVersion: projectcalico.org/v3 kind: BGPConfiguration metadata: annotations: + ... manage.k8s.netris.ai/calico: "true" ... name: default ... spec: - asNumber: 64512 nodeToNodeMeshEnabled: false .. note:: @@ -592,17 +609,17 @@ Yes, it works: .. 
code-block:: json { - "hostname": "podinfo-576d5bf6bd-mfpdt", - "version": "6.0.0", - "revision": "", - "color": "#34577c", - "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", - "message": "greetings from podinfo v6.0.0", - "goos": "linux", - "goarch": "amd64", - "runtime": "go1.16.5", - "num_goroutine": "8", - "num_cpu": "4" + "hostname": "podinfo-7cf557d9d7-nb2t7", + "version": "6.6.0", + "revision": "357009a86331a987811fefc11be1350058da33fc", + "color": "#34577c", + "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", + "message": "greetings from podinfo v6.6.0", + "goos": "linux", + "goarch": "amd64", + "runtime": "go1.21.7", + "num_goroutine": "8", + "num_cpu": "2" } Disabling Netris-Calico Integration =================================== @@ -618,4 +635,4 @@ or change its value to ``"false"``. .. topic:: Milestone 2 - Congratulations! You completed Milestone 2. Time to get yourself another iced coffee or even a beer depending on what time it is! + Congratulations on completing Milestone 2! diff --git a/sandbox/Sandbox13/sandbox-info.rst b/sandbox/Sandbox13/sandbox-info.rst index 36d8ead019..81a8511eb9 100644 --- a/sandbox/Sandbox13/sandbox-info.rst +++ b/sandbox/Sandbox13/sandbox-info.rst @@ -2,10 +2,13 @@ Welcome to Netris Sandbox ************************* +.. contents:: + :local: + Netris Sandbox is a ready-to-use environment for testing Netris automatic NetOps. We have pre-created some example services for you, details of which can be found in the :ref:`"Provided Example Configurations"` document. Feel free to view, edit, delete, and create new services. In case of any questions, reach out to us on `Slack `__. -The credentials for the sandbox have been provided to you by email in response to your Sandbox request. +The credentials for the sandbox have been provided to you via email in response to your Sandbox request. The Sandbox environment includes: @@ -22,24 +25,24 @@ The Sandbox environment includes: Topology diagram ================ -.. image:: /images/sandbox_topology.png +.. image:: /images/sandbox_topology_new.png :align: center :alt: Sandbox Topology - :target: ../../_images/sandbox_topology.png - + :target: ../../_images/sandbox_topology_new.png Netris Controller ================= https://Sandbox13.netris.io Linux servers ============= Example pre-configured Netris services: - * **srv01-nyc**, **srv02-nyc**, **srv03-nyc** & **Netris Controller** - are consuming :ref:`"ROH (Routing on the Host)"` Netris example service, see **Services → ROH.** - * **srv01-nyc**, **srv02-nyc** - are behind :ref:`"Anycast L3 load balancer"`, see **Services → Load Balancer**. - * **srv04-nyc**, **srv05-nyc** - are consuming :ref:`"V-NET (routed VXLAN)"` Netris service, see **Services → V-NET**. + * **srv01-nyc**, **srv02-nyc**, **srv03-nyc** & **Netris Controller** - are consuming :ref:`"ROH (Routing on the Host)"` Netris example service, see **Services → ROH**. + * **srv01-nyc**, **srv02-nyc** - can be configured with :ref:`"L3 Load Balancer (Anycast LB)"`, see **Services → L3 Load Balancer**. + * **srv04-nyc**, **srv05-nyc**, **srv06-nyc**, **srv07-nyc** & **srv08-nyc** - are consuming :ref:`"V-Net (routed VXLAN)"` Netris service, see **Services → V-Net**. + * **srv06-nyc**, **srv07-nyc**, **srv08-nyc** - are members of a 3-node Kubernetes cluster, and the K8s API servers are behind :ref:`"L4 Load Balancer (L4LB)"`, see **Services → L4 Load Balancer** (a quick way to list these nodes is sketched below). 
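Since these three servers form the K3s cluster, you can list them with ``kubectl`` once your kubeconfig is in place (see the Kubernetes cluster section below). The output here is an illustrative sketch assuming K3s's default role labels for HA servers; the ages and version string will differ in your Sandbox:

.. code-block:: shell-session

    $ kubectl get nodes
    NAME        STATUS   ROLES                       AGE   VERSION
    srv06-nyc   Ready    control-plane,etcd,master   10d   v1.28.8+k3s1
    srv07-nyc   Ready    control-plane,etcd,master   10d   v1.28.8+k3s1
    srv08-nyc   Ready    control-plane,etcd,master   10d   v1.28.8+k3s1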
**Accessing the Linux servers:** @@ -61,7 +64,7 @@ This Sandbox provides an up and running 3 node Kubernetes cluster. You can integ Upstream ISP ============ This Sandbox also provides an upstream ISP service with real-world Internet routing configured through :ref:`"BGP"`. -There are two pre-configured examples under **NET → E-BGP** , one using IPv4 and the other using IPv6, which are advertising the public IP subnets belonging to the sandbox to the upstream ISP IRIS. +There are two pre-configured examples under **Network → E-BGP**, one using IPv4 and the other using IPv6, which are advertising the public IP subnets belonging to the Sandbox to the upstream ISP IRIS. ISP settings: @@ -101,7 +104,7 @@ ISP settings: Networks Used ============= -Allocations and subnets defined under :ref:`"IPAM"`, see **Net → IPAM**. +Allocations and subnets defined under :ref:`"IPAM"`, see **Network → IPAM**. .. code-block:: shell-session | EXAMPLE IPv6 Allocation: 2607:f358:11:ffcd::/64 |___ EXAMPLE IPv6 Subnet: 2607:f358:11:ffcd::/64 - + \ No newline at end of file