diff --git a/sandbox/Sandbox9/configurations.rst b/sandbox/Sandbox9/configurations.rst index b5f67e54dd..2c3bcf650d 100644 --- a/sandbox/Sandbox9/configurations.rst +++ b/sandbox/Sandbox9/configurations.rst @@ -1,83 +1,91 @@ -.. _s9-pre-configured: - -******************************** -Provided Example Configurations -******************************** -Once you log into the Netris Controller, you will find that certain services have already been pre-configured for you to explore and interact with. You can also learn how to create some of these services yourself by following the step-by-step instructions in the :ref:`"Learn by Creating Services"` document. - -V-Net (Ethernet/Vlan/VXlan) Example -=================================== -After logging into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigating to **Services → V-Net**, you will find a V-Net service named "**vnet-example**" already configured for you as an example. - -If you examine the particular service settings ( select **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**vnet-example**" service), you will find that the services is configured on the second port of **switch 21** named "**swp2(swp2)@sw21-nyc (Admin)**". - -The V-Net servicers is also configured with both an IPv4 and IPv6 gateway, **192.168.45.1** (from the "**192.168.45.0/24 (EXAMPLE)**" subnet) and **2607:f358:11:ffc9::1** (from the "**2607:f358:11:ffc9::/64 (EXAMPLE IPv6)**" subnet) respectively. - -You may also verify that the service is working properly from within the GUI: (*\*Fields not specified should remain unchanged and retain default values*) - -1. Navigate to **Net → Looking Glass**. -2. Select switch "**sw21-nyc(10.254.46.21)**" (the switch the "**vnet-example**" service is configured on) from the **Select device** drop-down menu. -3. Select "**Ping**" from the **Command** drop-down menu. -4. Type ``192.168.45.64`` (the IP address of **srv04-nyc** connected to **swp2@sw21-nyc**) in the field labeled **IPv4 address**. -5. Click **Submit**. - -The result should look similar to the output below, indicating that the communication between switch **sw21-nyc** and server **srv04-nyc** is working properly thanks to the configured V-Net service. - -.. code-block:: shell-session - - sw21-nyc# ip vrf exec Vrf_netris ping -c 5 192.168.45.64 - PING 192.168.45.64 (192.168.45.64) 56(84) bytes of data. - 64 bytes from 192.168.45.64: icmp_seq=1 ttl=64 time=0.562 ms - 64 bytes from 192.168.45.64: icmp_seq=2 ttl=64 time=0.745 ms - 64 bytes from 192.168.45.64: icmp_seq=3 ttl=64 time=0.690 ms - 64 bytes from 192.168.45.64: icmp_seq=4 ttl=64 time=0.737 ms - 64 bytes from 192.168.45.64: icmp_seq=5 ttl=64 time=0.666 ms - - --- 192.168.45.64 ping statistics --- - 5 packets transmitted, 5 received, 0% packet loss, time 4092ms - rtt min/avg/max/mdev = 0.562/0.680/0.745/0.065 ms - -If you are interested in learning how to create a V-Net service yourself, please refer to the step-by-step instructions found in the :ref:`"V-Net (Ethernet/Vlan/VXlan)"` section of the :ref:`"Learn by Creating Services"` document. - -More details about V-Net (Ethernet/Vlan/VXlan) can be found on the the :ref:`"V-NET"` page. - -E-BGP (Exterior Border Gateway Protocol) Example -================================================ - -Navigate to **Net → E-BGP**. 
Here, aside from the necessary system generated IPv4/IPv6 E-BGP peer connections between the two border routers ( **SoftGate1** & **SoftGate2** ) and the rest of the switching fabric (which can be toggled on/off using the **Show System Generated** toggle at the top of the page), you will also find two E-BGP sessions named "**iris-isp1-ipv4-example**" and "**iris-isp1-ipv6-example**" configured as example with **IRIS ISP1**. This ensures communication between the internal network with the Internet. - -You may examine the particular session configurations of the E-BGP connections by selecting **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of either the "**iris-isp1-ipv4-example**" and "**iris-isp1-ipv6-example**" connections. You may also expand the **Advanced** section located toward the bottom of the **Edit** window to be able to access the more advanced settings available while configuring an E-BGP session. - -If you are interested in learning how to create an additional E-BGP session with **IRIS ISP2** in order to make the Sandbox upstream connections fault tolerant yourself, please refer to the step-by-step instructions found in the :ref:`"E-BGP (Exterior Border Gateway Protocol)"` section of the :ref:`"Learn by Creating Services"` document. - -More details about E-BGP (Exterior Border Gateway Protocol) can be found on the the :ref:`"BGP"` page. - -NAT (Network Address Translation) Example -========================================= -Navigate to **Net → NAT** and you will find a NAT rule named "**NAT Example**" configured as an example for you. The configured "**SNAT**" rule ensures that there can be communication between the the private "**192.168.45.0/24 (EXAMPLE)**" subnet and the Internet. - -You can examine the particular settings of the NAT rule by clicking **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**NAT Example**" service. - -You may also observe the functioning NAT rule in action by pinging any public IP address (e.g. **1.1.1.1**) from the **srv04-nyc** server. - -* In a terminal window: - - 1. SSH to server **srv04-nyc**: ``ssh demo@166.88.17.22 -p 30064``. - 2. Enter the password provided in the introductory e-mail. - 3. Start a ping session: ``ping4 1.1.1.1`` - -You will see replies in the form of "**64 bytes from 1.1.1.1: icmp_seq=1 ttl=62 time=1.10 ms**" indicating proper communication with the **1.1.1.1** public IP address. - -If you are interested in learning how to create a NAT rule yourself, please refer to the step-by-step instructions found in the :ref:`"NAT (Network Address Translation)"` section of the :ref:`"Learn by Creating Services"` document. - -More details about NAT (Network Address Translation) can be found on the :ref:`"NAT"` page. - -ACL (Access Control List) Example -================================= -Navigate to **Services → ACL** and you will find an ACL services named "**V-Net Example to WAN**" set up as an example for you. This particular ACL ensures that the connectivity between the the private "**192.168.45.0/24 (EXAMPLE)**" subnet and the Internet is permitted through all protocols and ports, even in a scenario where the the "**ACL Default Policy**" for the "**US/NYC**" site configured under **Net → Sites** in our Sandbox is changed from **Permit** to **Deny**. 
- -You can examine the particular settings of this ACL policy by selecting **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**V-Net Example to WAN**" ACL policy. - -By utilizing ACLs, you can impose granular controls and implement policies that would permit or deny particular connections of any complexity. If you are interested in learning how to create ACL policies yourself, please refer to the step-by-step instructions found in the :ref:`"ACL (Access Control List)"` section of the :ref:`"Learn by Creating Services"` document. - -More details about ACL (Access Control List) can be found on the :ref:`"ACL"` page. \ No newline at end of file +.. _s9-pre-configured: + +******************************** +Provided Example Configurations +******************************** + +.. contents:: + :local: + +Once you log into the Netris Controller, you will find that certain services have already been pre-configured for you to explore and interact with. You can also learn how to create some of these services yourself by following the step-by-step instructions in the :ref:`"Learn by Creating Services"` document. + +V-Net (Ethernet/Vlan/VXlan) Example +=================================== +To access the V-Net service example, first log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigating to **Services → V-Net**, where you will find a pre-configured V-Net service named "**vnet-example**" available as an example. + +To examine the service settings, select **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**vnet-example**" service, where you'll see that the V-Net service is configured with VLAN ID **45** in order to enable **EVPN Multihoming** on the underlying switches. + +You'll also see that the V-Net service is configured with both an IPv4 gateway (**192.168.45.1**) from the "**192.168.45.0/24 (EXAMPLE)**" subnet and an IPv6 gateway (**2607:f358:11:ffc9::1**) from the "**2607:f358:11:ffc9::/64 (EXAMPLE IPv6)**" subnet. + +Additionally, the V-Net service is configured to utilize network interfaces on both switches 21 and 22. Specifically, it is connected to **swp4(swp4)@sw21-nyc (Admin)** on switch 21 and **swp4(swp4)@sw22-nyc (Admin)** on switch 22. + +You may also verify that the service is working properly from within the GUI: (*\*Fields not specified should remain unchanged and retain default values*) + +1. Navigate to **Network → Looking Glass**. +2. Make sure "**vpc-1:Default**" is selected from the **VPC** drop-down menu. +3. Select "**SoftGate1(50.117.59.192)**" from the **Hardware** drop-down menu. +4. Leave the "**Family: IPV4**" as the selected choice from the **Address Family** drop-down menu. +5. Select "**Ping**" from the **Command** drop-down menu. +6. Leave the "**Select IP address**" as the selected choice from the **Source** drop-down menu. +7. Type ``192.168.45.64`` (the IP address configured on **bond0.45** on **srv04-nyc**) in the field labeled **IPv4 address**. +8. Click **Submit**. + +The result should look similar to the output below, indicating that the communication between SoftGate **SoftGate1** and server **srv04-nyc** is working properly thanks to the configured V-Net service. + +.. code-block:: shell-session + + SoftGate1# ping -c 5 192.168.45.64 + PING 192.168.45.64 (192.168.45.64) 56(84) bytes of data. 
+ 64 bytes from 192.168.45.64: icmp_seq=1 ttl=61 time=6.29 ms + 64 bytes from 192.168.45.64: icmp_seq=2 ttl=61 time=5.10 ms + 64 bytes from 192.168.45.64: icmp_seq=3 ttl=61 time=4.82 ms + 64 bytes from 192.168.45.64: icmp_seq=4 ttl=61 time=4.82 ms + 64 bytes from 192.168.45.64: icmp_seq=5 ttl=61 time=4.79 ms + --- 192.168.45.64 ping statistics --- + 5 packets transmitted, 5 received, 0% packet loss, time 4002ms + rtt min/avg/max/mdev = 4.787/5.161/6.285/0.572 ms + +If you are interested in learning how to create a V-Net service yourself, please refer to the step-by-step instructions found in the :ref:`"V-Net (Ethernet/Vlan/VXlan)"` section of the :ref:`"Learn by Creating Services"` document. + +More details about V-Net (Ethernet/Vlan/VXlan) can be found on the :ref:`"V-Net"` page. + +E-BGP (Exterior Border Gateway Protocol) Example +================================================ + +Navigate to **Network → E-BGP**. Here, aside from the required system generated IPv4/IPv6 E-BGP peer connections between the two border routers ( **SoftGate1** & **SoftGate2** ) and the rest of the switching fabric (which can be toggled on/off using the **Show System Generated** toggle at the top of the page), you will also find two E-BGP sessions named "**iris-isp1-ipv4-example**" and "**iris-isp1-ipv6-example**" configured as examples with **IRIS ISP1**. This ensures communication between the internal network and the Internet. + +You may examine the particular session configurations of the E-BGP connections by selecting **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of either the "**iris-isp1-ipv4-example**" or "**iris-isp1-ipv6-example**" connections. You may also expand the **Advanced** section located toward the bottom of the **Edit** window to be able to access the more advanced settings available while configuring an E-BGP session. + +If you are interested in learning how to create an additional E-BGP session with **IRIS ISP2** in order to make the Sandbox upstream connections fault tolerant yourself, please refer to the step-by-step instructions found in the :ref:`"E-BGP (Exterior Border Gateway Protocol)"` section of the :ref:`"Learn by Creating Services"` document. + +More details about E-BGP (Exterior Border Gateway Protocol) can be found on the :ref:`"BGP"` page. + +NAT (Network Address Translation) Example +========================================= +Navigate to **Network → NAT** and you will find a NAT rule named "**NAT Example**" configured as an example for you. The configured "**SNAT**" rule ensures that there can be communication between the private "**192.168.45.0/24 (EXAMPLE)**" subnet and the Internet. + +You can examine the particular settings of the NAT rule by clicking **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**NAT Example**" service. + +You may also observe the functioning NAT rule in action by pinging any public IP address (e.g. **1.1.1.1**) from the **srv04-nyc** server. + +* In a terminal window: + + 1. SSH to server **srv04-nyc**: ``ssh demo@166.88.17.22 -p 30064``. + 2. Enter the password provided in the introductory e-mail. + 3. Start a ping session: ``ping4 1.1.1.1`` + +You will see replies in the form of "**64 bytes from 1.1.1.1: icmp_seq=1 ttl=62 time=1.10 ms**" indicating proper communication with the **1.1.1.1** public IP address. 
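+
+For reference, the complete terminal exchange might look roughly like the sketch below (an illustrative transcript only; the shell prompts, sequence numbers and round-trip times are assumptions and will differ in your session):
+
+.. code-block:: shell-session
+
+    local:~$ ssh demo@166.88.17.22 -p 30064
+    demo@166.88.17.22's password:
+    demo@srv04-nyc:~$ ping4 1.1.1.1
+    PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
+    64 bytes from 1.1.1.1: icmp_seq=1 ttl=62 time=1.10 ms
+    64 bytes from 1.1.1.1: icmp_seq=2 ttl=62 time=1.08 ms
+    ^C
+    --- 1.1.1.1 ping statistics ---
+    2 packets transmitted, 2 received, 0% packet loss, time 1001ms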
+ +If you are interested in learning how to create a NAT rule yourself, please refer to the step-by-step instructions found in the :ref:`"NAT (Network Address Translation)"` section of the :ref:`"Learn by Creating Services"` document. + +More details about NAT (Network Address Translation) can be found on the :ref:`"NAT"` page. + +ACL (Access Control List) Example +================================= +Navigate to **Services → ACL** and you will find an ACL service named "**V-Net Example to WAN**" set up as an example for you. This particular ACL ensures that the connectivity between the private "**192.168.45.0/24 (EXAMPLE)**" subnet and the Internet is permitted through all protocols and ports, even in a scenario where the "**ACL Default Policy**" for the "**US/NYC**" site configured under **Network → Sites** in our Sandbox is changed from **Permit** to **Deny**. + +You can examine the particular settings of this ACL policy by selecting **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**V-Net Example to WAN**" ACL policy. + +By utilizing ACLs, you can impose granular controls and implement policies that would permit or deny particular connections of any complexity. If you are interested in learning how to create ACL policies yourself, please refer to the step-by-step instructions found in the :ref:`"ACL (Access Control List)"` section of the :ref:`"Learn by Creating Services"` document. + +More details about ACL (Access Control List) can be found on the :ref:`"ACL"` page. diff --git a/sandbox/Sandbox9/creating-services.rst b/sandbox/Sandbox9/creating-services.rst index 78df2f26d5..a7cd16b035 100644 --- a/sandbox/Sandbox9/creating-services.rst +++ b/sandbox/Sandbox9/creating-services.rst @@ -1,200 +1,206 @@ -.. _s9-learn-by-doing: - -************************** -Learn by Creating Services -************************** - -Following these short exercises we will be able to demonstrate how the :ref:`Netris Controller`, in conjunction with the :ref:`Netris Agents` deployed on the switches and SoftGates, is able to intelligently and automagically deploy the necessary configurations across the network fabric to provision the desired services within a matter of seconds. - -.. _s9-v-net: - -V-Net (Ethernet/Vlan/VXlan) -=========================== -Let's create a V-Net service to give server **srv05-nyc** the ability to reach its gateway address. - -* In a terminal window: - - 1. SSH to server **srv05-nyc**: ``ssh demo@166.88.17.22 -p 30065``. - 2. Enter the password provided in the introductory e-mail. - 3. Type ``ip route ls`` and we can see **192.168.46.1** is configured as the default gateway, indicated by the "**default via 192.168.46.1 dev eth1 proto kernel onlink**" line in the output. - 4. Start a ping session towards the default gateway: ``ping 192.168.46.1`` - 5. Keep the ping running as an indicator for when the service becomes fully provisioned. - 6. Until the service is provisioned, we can see that the destination is not reachable judging by the outputs in the form of "**From 192.168.46.64 icmp_seq=1 Destination Host Unreachable**". - -* In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) - - 1. Log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigate to **Services → V-Net**. - 2. Click the **+ Add** button in the top right corner of the page to get started with creating a new V-Net service. - 3. Define a name in the **Name** field (e.g. ``vnet-customer``). 
- 4. From the **Sites** drop-down menu, select "**US/NYC**". - 5. From the **Owner** drop-down menu, select "**Demo**". - 6. From the **IPv4 Gateway** drop-down menu, select the "**192.168.46.0/24(CUSTOMER)**" subnet. - 7. The first available IP address "**192.168.46.1**" is automatically selected in the second drop-down menu of the list of IP addresses. This matches the results of the ``ip route ls`` command output on **srv05-nyc** we observed earlier. - 8. From the **Add Network Interface** drop-down menu put a check mark next to switch port "**swp2(swp2 | srv05-nyc)@sw22-nyc (Demo)**", which we can see is the the port where **srv05-nyc** is wired into when we reference the :ref:`"Sandbox Topology diagram"`. - - * The drop-down menu only contains this single switch port as it is the only port that has been assigned to the **Demo** tenant. - - 9. Check the **Untag** check-box and click the **Add** button. - 10. Click the **Add** button at the bottom right of the "**Add new V-Net**" window and the service will start provisioning. - -After just a few seconds, once fully provisioned, you will start seeing successful ping replies, similar in form to "**64 bytes from 192.168.46.1: icmp_seq=55 ttl=64 time=1.66 ms**", to the ping that was previously started in the terminal window, indicating that now the gateway address is reachable from host **srv05-nyc**. - -More details about V-Net (Ethernet/Vlan/VXlan) can be found on the the :ref:`"V-NET"` page. - -.. _s9-e-bgp: - -E-BGP (Exterior Border Gateway Protocol) -======================================== -Our internal network is already connected with the outside world so that our servers can communicate with the Internet through the E-BGP session with IRIS ISP1 named "**iris-isp1-example**". - -Optionally you can configure an E-BGP session to IRIS ISP2 for fault tolerance. - -* In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) - - 1. Log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigate to **Net → E-BGP**. - 2. Click the **+ Add** button in the top right corner of the page to configure a new E-BGP session. - 3. Define a name in the **Name** field (e.g. ``iris-isp2-ipv4-customer``). - 4. From the **Site** drop-down menu, select "**US/NYC**". - 5. From the **BGP Router** drop-down menu, select "**SoftGate2**". - 6. From the **Switch Port** drop-down menu, select port "**swp16(swp16 | ISP2)@sw02-nyc (Admin)**" on the switch that is connected to the ISP2. - - * For the purposes of this exercise, the required switch port can easily be found by typing ``ISP2`` in the Search field. - - 7. For the **VLAN ID** field, uncheck the **Untag** check-box and type in ``1092``. - 8. In the **Neighbor AS** field, type in ``65007``. - 9. In the **Local IP** field, type in ``50.117.59.126``. - 10. In the **Remote IP** field, type in ``50.117.59.125``. - 11. Expand the **Advanced** section - 12. In the **Prefix List Inbound** field, type in ``permit 0.0.0.0/0`` - 13. In the **Prefix List Outbound** field, type in ``permit 50.117.59.192/28 le 32`` - 14. And finally click **Add** - -Allow up to 1 minute for both sides of the BGP sessions to come up and then the BGP state on **Net → E-BGP** page as well as on **Telescope → Dashboard** pages will turn green, indication a successfully established BGP session. We can glean further insight into the BGP session details by navigating to **Net → Looking Glass**. - - 1. 
Select "**SoftGate2(50.117.59.193)**" (the border router where our newly created BGP session is terminated on) from the **Select device** drop-down menu. - 2. Leaving the **Family** drop-down menu on IPv4 and the **Command** drop-down menu on "**BGP Summary**", click on the **Submit** button. - -We are presented with the summary of the BGP sessions terminated on **SoftGate2**. You can also click on each BGP neighbor name to further see the "**Advertised routes**" and "**Routes**" received to/from that BGP neighbor. - -More details about E-BGP (Exterior Border Gateway Protocol) can be found on the the :ref:`"BGP"` page. - -.. _s9-nat: - -NAT (Network Address Translation) -================================= -Now that we have both internal and external facing services, we can aim for our **srv05-nyc** server to be able to communicate with the Internet. - -* In a terminal window: - - 1. SSH to server **srv05-nyc**: ``ssh demo@166.88.17.22 -p 30065``. - 2. Enter the password provided in the introductory e-mail. - 3. Start a ping session towards any public IP address (e.g. ``ping 1.1.1.1``). - 4. Keep the ping running as an indicator for when the service starts to work. - -Let's configure a source NAT so our Customer subnet **192.168.46.0/24**, which is used in the V-Net services called **vnet-customer**, can communicate with the Internet. - -* In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) - - 1. Log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigate to **Net → NAT**. - 2. Click the **+ Add** button in the top right corner of the page to define a new NAT rule. - 3. Define a name in the **Name** field (e.g. ``NAT Customer``). - 4. From the **Site** drop-down menu, select "**US/NYC**". - 5. From the **Action** drop-down menu, select "**SNAT**". - 6. From the **Protocol** drop-down menu, select "**ALL**". - 7. In the **Source Address** field, type in ``192.168.46.0/24``. - 8. In the **Destination Address** field, type in ``0.0.0.0/0``. - 9. Toggle the switch from **SNAT to Pool** to **SNAT to IP**. - 10. From the **Select subnet** drop-down menu, select the "**50.117.59.196/30 (NAT)**" subnet. - 11. From the **Select IP** drop-down menu, select the "**50.117.59.196/32**" IP address. - - * This public IP is part of **50.117.59.196/30 (NAT)** subnet which is configured in the **NET → IPAM** section with the purpose of **NAT** and indicated in the SoftGate configurations to be used as a global IP for NAT by the :ref:`"Netris SoftGate Agent"`.. - - 12. Click **Add** - -Soon you will start seeing replies similar in form to "**64 bytes from 1.1.1.1: icmp_seq=1 ttl=62 time=1.23 ms**" to the ping previously started in the terminal window, indicating that now the Internet is reachable from **srv05-nyc**. - -More details about NAT (Network Address Translation) can be found on the :ref:`"NAT"` page. - -.. _s9-l3lb: - -L3LB (Anycast L3 load balancer) -=============================== -In this exercise we will quickly configure an Anycast IP address in the Netris Controller for two of our :ref:`"ROH (Routing on the Host)"` servers (**srv01-nyc** & **srv02-nyc**) which both have a running Web Server configured to display a simple HTML webpage and observe **ECMP** load balancing it in action. - -* In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) - - 1. Log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigate to **Services → ROH**. - 2. 
Click **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**srv01-nyc**" server. - 3. From the **IPv4** drop-down menu, select the "**50.117.59.200/30 (L3 LOAD BALANCER)**" subnet. - 4. From the second drop-down menu that appears to the right, select the first available IP "**50.117.59.216**". - 5. Check the **Anycast** check-box next to the previously selected IP and click the **Save** button. - 6. Repeat steps **3** through **4** for "**srv02-nyc**" by first clicking **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**srv02-nyc**" server. - - * While editing "**srv02-nyc**", after selecting the "**50.117.59.216**" IP address , the **Anycast** check-box will already be automatically checked as we had designated the IP address as such in step **5**. - -* In a new web browser window/tab: - - 1. Type in the Anycast IP address we just configured (**50.117.59.216**) into the browser's address bar or simply visit `http://50.117.59.216/ `_. - 2. Based on the unique hash calculated from factors such as source IP/Protocol/Port, the **L3LB** will use **ECMP** to load balance the traffic from your browser to either **srv01-nyc** or **srv02-nyc**, with the text on the website indicating where the traffic ended up. - - * It should be noted that the TCP session will continue to exist between the given end-user and server pair for the lifetime of the session. In our case we have landed on **srv01-nyc**. - -.. image:: /images/l3lb_srv01.png - :align: center - -In order to trigger the L3 load balancer to switch directing the traffic towards the other backend server (in this case from **srv01-nyc** to **srv02-nyc**, which based on the unique hash in your situation could be the other way around), we can simulate the unavailability of backend server we ended up on by putting it in **Maintenance** mode. - -* Back in the Netris Controller, navigate to **Services → L3 Load Balancer**. - - 1. Expand the **LB Vip** that was created when we defined the **Anycast** IP address earlier by clicking on the **>** to the left of "**50.117.59.216 (name_50.117.59.216)**". - 2. Click **Action v** to the right of the server you originally ended up on (in this case **srv01-nyc**). - 3. Click **Maintenance on**. - 4. Click **Maintenance** one more time in the pop-up window. - -* Back in the browser window/tab directed at the **50.117.59.216** Anycast IP address. - - 1. After just a few seconds, we can observe that now the website indicates that the traffic is routed to **srv02-nyc** (once more, your case could be opposite for you based on the original hash). - -.. image:: /images/l3lb_srv02.png - :align: center - -More details about AL3LB (Anycast L3 load balancer) can be found on the :ref:`"L3 Load Balancer (Anycast LB)"` page. - -.. _s9-acl: - -ACL (Access Control List) -========================= -Now that **srv05-nyc** can communicate with both internal and external hosts, let's check Access Policy and Control options. - -* In a terminal window: - - 1. SSH to server **srv05-nyc**: ``ssh demo@166.88.17.22 -p 30065``. - 2. Enter the password provided in the introductory e-mail. - 3. Start a ping session: ``ping 1.1.1.1``. - 4. If the previous steps were followed, you should see successful ping replies in the form of "**64 bytes from 1.1.1.1: icmp_seq=55 ttl=62 time=1.23 ms**". - 5. Keep the ping running as an indicator for when the service starts to work. 
- -* In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) - - 1. Log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigate to **Net → Sites**. - 2. Click **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the **UC/NYC** site. - 3. From the **ACL Default Policy** drop-down menu, change the value from "**Permit**" to "**Deny**". - 4. Click **Save**. - -Soon you will notice that there are no new replies to our previously started ``ping 1.1.1.1`` command in the terminal window, indicating that the **1.1.1.1** IP address is no longer reachable.Now that the **Default ACL Policy** is set to **Deny**, we need to configure an **ACL** entry that will allow the **srv05-nyc** server to communicate with the Internet. - -* Back in the web browser: (*\*Fields not specified should remain unchanged and retain default values*) - - 1. Navigate to **Services → ACL**. - 2. Click the **+ Add** button in the top right corner of the page to define a new ACL. - 3. Define a name in the **Name** field (e.g. ``V-Net Customer to WAN``). - 4. From the **Protocol** drop-down menu, select "**ALL**". - 5. In the Source field, type in ``192.168.46.0/24``. - 6. In the Destination field, type in ``0.0.0.0/0``. - 7. Click **Add**. - 8. Select **Approve** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the newly created "**V-Net Customer to WAN**" ACL. - 9. Click **Approve** one more time in the pop-up window. - -Once the Netris Controller has finished syncing the new ACL policy with all member devices, we can see in the terminal window that replies to our ``ping 1.1.1.1`` command have resumed, indicating that the **srv05-nyc** server can communicate with the Internet once again.. - -More details about ACL (Access Control List) can be found on the :ref:`"ACL"` page. +.. _s9-learn-by-doing: + +************************** +Learn by Creating Services +************************** + +.. contents:: + :local: + +Following these short exercises we will be able to demonstrate how the :ref:`Netris Controller`, in conjunction with the :ref:`Netris Agents` deployed on the switches and SoftGates, is able to intelligently and automagically deploy the necessary configurations across the network fabric to provision the desired services within a matter of seconds. + +.. _s9-v-net: + +V-Net (Ethernet/Vlan/VXlan) +=========================== +Let's create a V-Net service to give server **srv05-nyc** the ability to reach its gateway address. + +* In a terminal window: + + 1. SSH to server **srv05-nyc**: ``ssh demo@166.88.17.22 -p 30065``. + 2. Enter the password provided in the introductory e-mail. + 3. Type ``ip route ls`` and we can see **192.168.46.1** is configured as the default gateway, indicated by the "**default via 192.168.46.1 dev eth1 proto kernel onlink**" line in the output. + 4. Start a ping session towards the default gateway: ``ping 192.168.46.1`` + 5. Keep the ping running as an indicator for when the service becomes fully provisioned. + 6. Until the service is provisioned, we can see that the destination is not reachable judging by the outputs in the form of "**From 192.168.46.64 icmp_seq=1 Destination Host Unreachable**". + +* In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) + + 1. Log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigate to **Services → V-Net**. + 2. 
Click the **+ Add** button in the top right corner of the page to get started with creating a new V-Net service. + 3. Define a name in the **Name** field (e.g. ``vnet-customer``). + 4. From the **Sites** drop-down menu, select "**US/NYC**". + 5. From the **VLAN ID** drop-down menu, select "**Enter manually**" and type in "**46**" in the field to the right. + 6. From the **Owner** drop-down menu, select "**Demo**". + 7. From the **IPv4 Gateway** drop-down menu, select the "**192.168.46.0/24(CUSTOMER)**" subnet. + 8. The first available IP address "**192.168.46.1**" is automatically selected in the second drop-down menu of the list of IP addresses. This matches the results of the ``ip route ls`` command output on **srv05-nyc** we observed earlier. + 9. From the **Add Network Interface** drop-down menu, put a check mark next to both network interfaces "**swp5(swp5 | srv05-nyc)@sw12-nyc (Demo)**" and "**swp5(swp5 | srv05-nyc)@sw21-nyc (Demo)**", which we can see are the interfaces that **srv05-nyc** is wired into when we reference the :ref:`"Sandbox Topology diagram"`. + + * The drop-down menu only contains these two network interfaces as they are the only interfaces that have been assigned to the **Demo** tenant. + + 10. Click the **Add** button. + 11. Click the **Add** button at the bottom right of the "**Add new V-Net**" window and the service will start provisioning. + +After just a few seconds, once fully provisioned, you will start seeing successful ping replies, similar in form to "**64 bytes from 192.168.46.1: icmp_seq=55 ttl=64 time=1.66 ms**", to the ping that was previously started in the terminal window, indicating that now the gateway address is accessible from host **srv05-nyc**. + +More details about V-Net (Ethernet/Vlan/VXlan) can be found on the :ref:`"V-Network"` page. + +.. _s9-e-bgp: + +E-BGP (Exterior Border Gateway Protocol) +======================================== +Our internal network is already connected with the outside world so that our servers can communicate with the Internet through the E-BGP session with IRIS ISP1 named "**iris-isp1-example**". + +Optionally you can configure an E-BGP session to IRIS ISP2 for fault tolerance. + +* In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) + + 1. Log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigate to **Network → E-BGP**. + 2. Click the **+ Add** button in the top right corner of the page to configure a new E-BGP session. + 3. Define a name in the **Name** field (e.g. ``iris-isp2-ipv4-customer``). + 4. From the **Site** drop-down menu, select "**US/NYC**". + 5. From the **BGP Router** drop-down menu, select "**SoftGate2**". + 6. From the **Switch Port** drop-down menu, select port "**swp16(swp16 | ISP2)@sw02-nyc (Admin)**" on the switch that is connected to the ISP2. + + * For the purposes of this exercise, the required switch port can easily be found by typing ``ISP2`` in the Search field. + + 7. For the **VLAN ID** field, type in ``1092`` while leaving the **Untagged** check-box unchecked. + 8. In the **Neighbor AS** field, type in ``65007``. + 9. In the **Local IP** field, type in ``50.117.59.118``. + 10. In the **Remote IP** field, type in ``50.117.59.117``. + 11. Expand the **Advanced** section. + 12. In the **Prefix List Outbound** field, type in ``permit 50.117.59.192/28 le 32`` + 13. 
And finally, click **Add**. + +Allow up to 1 minute for both sides of the BGP sessions to come up and then the BGP state on the **Network → E-BGP** page as well as on the **Telescope → Dashboard** page will turn green, indicating a successfully established BGP session. We can glean further insight into the BGP session details by navigating to **Network → Looking Glass**. + + 1. Make sure "**vpc-1:Default**" is selected from the **VPC** drop-down menu. + 2. Select "**SoftGate2(50.117.59.193)**" (the border router where our newly created BGP session is terminated on) from the **Hardware** drop-down menu. + 3. Leaving the **Address Family** drop-down menu on "**Family: IPV4**" and the **Command** drop-down menu on "**Command: BGP Summary**", click on the **Submit** button. + +We are presented with the summary of the BGP sessions terminated on **SoftGate2**. You can also click on each BGP neighbor name to further see the "**Advertised routes**" and "**Routes**" received to/from that BGP neighbor. + +More details about E-BGP (Exterior Border Gateway Protocol) can be found on the :ref:`"BGP"` page. + +.. _s9-nat: + +NAT (Network Address Translation) +================================= +Now that we have both internal and external facing services, we can aim for our **srv05-nyc** server to be able to communicate with the Internet. + +* In a terminal window: + + 1. SSH to server **srv05-nyc**: ``ssh demo@166.88.17.22 -p 30065``. + 2. Enter the password provided in the introductory e-mail. + 3. Start a ping session towards any public IP address (e.g. ``ping 1.1.1.1``). + 4. Keep the ping running as an indicator for when the service starts to work. + +Let's configure a Source NAT so our Customer subnet **192.168.46.0/24**, which is used in the V-Net service called **vnet-customer**, can communicate with the Internet. + +* In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) + + 1. Log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigate to **Network → NAT**. + 2. Click the **+ Add** button in the top right corner of the page to define a new NAT rule. + 3. Define a name in the **Name** field (e.g. ``NAT Customer``). + 4. From the **Site** drop-down menu, select "**US/NYC**". + 5. From the **Action** drop-down menu, select "**SNAT**". + 6. Leave **ALL** selected in the **Protocol** drop-down menu. + 7. In the **Source Address** field, type in ``192.168.46.0/24``. + 8. In the **Destination Address** field, leave the default value of ``0.0.0.0/0``. + 9. Toggle the switch from **SNAT to Pool** to **SNAT to IP**. + 10. From the **Select subnet** drop-down menu, select the "**50.117.59.196/30 (NAT)**" subnet. + 11. From the **Select IP** drop-down menu, select the "**50.117.59.196/32**" IP address. + + * This public IP address is part of the **50.117.59.196/30 (NAT)** subnet which is configured in the **Network → IPAM** section with the purpose of **NAT** and indicated in the **SoftGate** configurations to be used as a global IP for NAT by the :ref:`"Netris SoftGate Agent"`. + + 12. Click **Add** + +Soon you will start seeing replies similar in form to "**64 bytes from 1.1.1.1: icmp_seq=1 ttl=62 time=1.23 ms**" to the ping previously started in the terminal window, indicating that now the Internet is reachable from **srv05-nyc**. + +More details about NAT (Network Address Translation) can be found on the :ref:`"NAT"` page.
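+
+If you would like to confirm which source address the Internet actually sees after the translation, one option is to query a public "what is my IP" service from **srv05-nyc**. This is only a sketch: it assumes outbound HTTP from the server is permitted and that a third-party service such as ``ifconfig.me`` is reachable, neither of which is part of the exercise itself.
+
+.. code-block:: shell-session
+
+    demo@srv05-nyc:~$ curl -4 ifconfig.me
+    50.117.59.196
+
+The address returned should be the **50.117.59.196** global IP selected in the **SNAT to IP** step above, rather than the private **192.168.46.x** address configured on the server.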
+ +.. _s9-l3lb: + +L3LB (Anycast L3 Load Balancer) +=============================== +In this exercise we will quickly configure an Anycast IP address in the Netris Controller for two of our :ref:`"ROH (Routing on the Host)"` servers (**srv01-nyc** & **srv02-nyc**), which both have a running **Web Server** configured to display a simple HTML webpage, and observe **ECMP** load balancing in action. + +* In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) + + 1. Log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigate to **Services → ROH**. + 2. Click **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**srv01-nyc**" server. + 3. From the **IPv4** drop-down menu, select the "**50.117.59.200/30 (L3 LOAD BALANCER)**" subnet. + 4. From the second drop-down menu that appears to the right, select the first available IP "**50.117.59.200**". + 5. Check the **Anycast** check-box next to the previously selected IP and click the **Save** button. + 6. Repeat steps **3** through **4** for "**srv02-nyc**" by first clicking **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the "**srv02-nyc**" server. + + * While editing "**srv02-nyc**", after selecting the "**50.117.59.200**" IP address, the **Anycast** check-box will already be automatically checked as we had designated the IP address as such in step **5**. + +* In a new web browser window/tab: + + 1. Type the Anycast IP address we just configured (**50.117.59.200**) into the browser's address bar or simply visit `http://50.117.59.200/ `_. + 2. Based on the unique hash calculated from factors such as source IP/Protocol/Port, the **L3LB** will use **ECMP** to load balance the traffic from your browser to either **srv01-nyc** or **srv02-nyc**, with the text on the website indicating where the traffic ended up. + + * It should be noted that the TCP session will continue to exist between the given end-user and server pair for the lifetime of the session. In our case we have landed on **srv01-nyc**. + +.. image:: /images/l3lb_srv01.png + :align: center + :alt: SRV01 L3LB + :target: ../../_images/l3lb_srv01.png + +In order to trigger the L3 load balancer into directing the traffic towards the other backend server (in this case from **srv01-nyc** to **srv02-nyc**, which based on the unique hash in your situation could be the other way around), we can simulate the unavailability of the backend server we ended up on by putting it in **Maintenance** mode. + +* Back in the Netris Controller, navigate to **Services → L3 Load Balancer**. + + 1. Expand the **LB Vip** that was created when we defined the **Anycast** IP address earlier by clicking on the **>** button to the left of "**50.117.59.200 (name_50.117.59.200)**". + 2. Click **Action v** to the right of the server you originally ended up on (in this case **srv01-nyc**). + 3. Click **Maintenance on**. + 4. Click **Maintenance** one more time in the pop-up window. + +* Back in the browser window/tab directed at the **50.117.59.200** Anycast IP address. + + 1. After just a few seconds, we can observe that now the website indicates that the traffic is routed to **srv02-nyc** (once more, this could be the other way around for you based on the original hash). + +.. 
image:: /images/l3lb_srv02.png + :align: center + :alt: SRV02 L3LB + :target: ../../_images/l3lb_srv02.png + +More details about L3LB (Anycast L3 Load Balancer) can be found on the :ref:`"L3 Load Balancer (Anycast LB)"` page. + +.. _s9-acl: + +ACL (Access Control List) +========================= +Now that **srv05-nyc** can communicate with both internal and external hosts, let's check Access Policy and Control options. + +* In a terminal window: + + 1. SSH to server **srv05-nyc**: ``ssh demo@166.88.17.22 -p 30065``. + 2. Enter the password provided in the introductory e-mail. + 3. Start a ping session: ``ping 1.1.1.1``. + 4. If the previous steps were followed, you should see successful ping replies in the form of "**64 bytes from 1.1.1.1: icmp_seq=55 ttl=62 time=1.23 ms**". + 5. Keep the ping running as an indicator for when the service starts to work. + +* In a web browser: (*\*Fields not specified should remain unchanged and retain default values*) + + 1. Log into the Netris Controller by visiting `https://sandbox9.netris.io `_ and navigate to **Network → Sites**. + 2. Click **Edit** from the **Actions** menu indicated by three vertical dots (**⋮**) on the right side of the **US/NYC** site. + 3. From the **ACL Default Policy** drop-down menu, change the value from "**Permit**" to "**Deny**". + 4. Click **Save**. + +Soon you will notice that there are no new replies to our previously started ``ping 1.1.1.1`` command in the terminal window, indicating that the **1.1.1.1** IP address is no longer reachable. Now that the **Default ACL Policy** is set to **Deny**, we need to configure an **ACL** entry that will allow the **srv05-nyc** server to communicate with the Internet. + +* Back in the web browser: (*\*Fields not specified should remain unchanged and retain default values*) + + 1. Navigate to **Services → ACL**. + 2. Click the **+ Add** button in the top right corner of the page to define a new ACL. + 3. Define a name in the **Name** field (e.g. ``V-Net Customer to WAN``). + 4. From the **Protocol** drop-down menu, select "**ALL**". + 5. In the Source field, type in ``192.168.46.0/24``. + 6. In the Destination field, type in ``0.0.0.0/0``. + 7. Click **Add**. + +You can observe the status of the syncing process by clicking on the yellow **Syncing** label at the top right of the **ACL** window. Once the Netris Controller has finished syncing the new ACL policy with all relevant member devices, the label will turn green and read as **Synced**. Back in the terminal window we can observe that the replies to our ``ping 1.1.1.1`` command have resumed, indicating that the **srv05-nyc** server can communicate with the Internet once again. + +More details about ACL (Access Control List) can be found on the :ref:`"ACL"` page. diff --git a/sandbox/Sandbox9/onprem-k8s.rst b/sandbox/Sandbox9/onprem-k8s.rst index 973ea4aa0e..a687587cdd 100644 --- a/sandbox/Sandbox9/onprem-k8s.rst +++ b/sandbox/Sandbox9/onprem-k8s.rst @@ -1,621 +1,638 @@ -.. _s9-k8s: - -*************************************** -Learn Netris operations with Kubernetes -*************************************** - -.. contents:: - :local: - -Intro -===== -This Sandbox environment provides an existing Kubernetes cluster that has been deployed via `Kubespray `_. For this scenario, we will be using the `external LB `_ option in Kubespray. A dedicated Netris L4LB service has been created in the Sandbox Controller to access the k8s apiservers from users and non-master nodes sides. - -.. 
image:: /images/sandbox-l4lb-kubeapi.png - :align: center - -To access the built-in Kubernetes cluster, put "Kubeconfig" file which you received by the introductory email into your ``~/.kube/config`` or set "KUBECONFIG" environment variable ``export KUBECONFIG=~/Downloads/config`` on your local machine. After that try to connect to the k8s cluster: - -.. code-block:: shell-session - - kubectl cluster-info - -The output below means you've successfully connected to the sandbox cluster: - -.. code-block:: shell-session - - Kubernetes master is running at https://api.k8s-sandbox9.netris.io:6443 - - To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. - -Install Netris Operator -======================= - -The first step to integrate the Netris Controller with the Kubernetes API is to install the Netris Operator. Installation can be accomplished by installing regular manifests or a `helm chart `_. For this example we will use the Kubernetes regular manifests: - -1. Install the latest Netris Operator: - -.. code-block:: shell-session - - kubectl apply -f https://github.com/netrisai/netris-operator/releases/latest/download/netris-operator.yaml - -2. Create credentials secret for Netris Operator: - -.. code-block:: shell-session - - kubectl -nnetris-operator create secret generic netris-creds \ - --from-literal=host='https://sandbox9.netris.io' \ - --from-literal=login='demo' --from-literal=password='Your Demo user pass' - -3. Inspect the pod logs and make sure the operator is connected to Netris Controller: - -.. code-block:: shell-session - - kubectl -nnetris-operator logs -l netris-operator=controller-manager --all-containers -f - -Example output demonstrating the successful operation of Netris Operator: - -.. code-block:: shell-session - - {"level":"info","ts":1629994653.6441543,"logger":"controller","msg":"Starting workers","reconcilerGroup":"k8s.netris.ai","reconcilerKind":"L4LB","controller":"l4lb","worker count":1} - -.. note:: - - After installing the Netris Operator, your Kubernetes cluster and physical network control planes are connected. - -Deploy an Application with an On-Demand Netris Load Balancer -============================================================ - -In this scenario we will be installing a simple application that requires a network load balancer: - -Install the application `"Podinfo" `_: - -.. code-block:: shell-session - - kubectl apply -k github.com/stefanprodan/podinfo/kustomize - -Get the list of pods and services in the default namespace: - -.. code-block:: shell-session - - kubectl get po,svc - -As you can see, the service type is "ClusterIP": - -.. code-block:: shell-session - - NAME READY STATUS RESTARTS AGE - pod/podinfo-576d5bf6bd-7z9jl 1/1 Running 0 49s - pod/podinfo-576d5bf6bd-nhlmh 1/1 Running 0 33s - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - service/podinfo ClusterIP 172.21.65.106 9898/TCP,9999/TCP 50s - -In order to request access from outside, change the type to "LoadBalancer": - -.. code-block:: shell-session - - kubectl patch svc podinfo -p '{"spec":{"type":"LoadBalancer"}}' - -Check the services again: - -.. code-block:: shell-session - - kubectl get svc - -Now we can see that the service type has changed to LoadBalancer, and "EXTERNAL-IP" switched to pending state: - -.. 
code-block:: shell-session - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - podinfo LoadBalancer 172.21.65.106 9898:32584/TCP,9999:30365/TCP 8m57s - -Going into the Netris Controller web interface, navigate to **Services → L4 Load Balancer**, and you may see L4LBs provisioning in real-time. If you do not see the provisioning process it is likely because it already completed. Look for the service with the name **"podinfo-xxxxxxxx"** - -.. image:: /images/sandbox-podinfo-prov.png - :align: center - -After provisioning has finished, let's one more time look at service in k8s: - -.. code-block:: shell-session - - kubectl get svc - -You can see that "EXTERNAL-IP" has been injected into Kubernetes: - -.. code-block:: shell-session - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - podinfo LoadBalancer 172.21.65.106 50.117.59.205 9898:32584/TCP,9999:30365/TCP 9m17s - -Let's try to curl it (remember to replace the IP below with the IP that has been assigned in the previous command): - -.. code-block:: shell-session - - curl 50.117.59.205:9898 - -The application is now accessible directly on the internet: - -.. code-block:: json - - { - "hostname": "podinfo-576d5bf6bd-nhlmh", - "version": "6.0.0", - "revision": "", - "color": "#34577c", - "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", - "message": "greetings from podinfo v6.0.0", - "goos": "linux", - "goarch": "amd64", - "runtime": "go1.16.5", - "num_goroutine": "8", - "num_cpu": "4" - } - -As seen, "PodInfo" developers decided to expose 9898 port for HTTP, let's switch it to 80: - -.. code-block:: shell-session - - kubectl patch svc podinfo --type='json' -p='[{"op": "replace", "path": "/spec/ports/0/port", "value":80}]' - -Wait a few seconds, you can see the provisioning process on the controller: - -.. image:: /images/sandbox-podinfo-ready.png - :align: center - -Curl again, without specifying a port: - -.. code-block:: shell-session - - curl 50.117.59.205 - -The output is similar to this: - -.. code-block:: json - - { - "hostname": "podinfo-576d5bf6bd-nhlmh", - "version": "6.0.0", - "revision": "", - "color": "#34577c", - "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", - "message": "greetings from podinfo v6.0.0", - "goos": "linux", - "goarch": "amd64", - "runtime": "go1.16.5", - "num_goroutine": "8", - "num_cpu": "4" - } - -You can also verify the application is reachable by putting this IP address directly into your browser. - -.. topic:: Milestone 1 - - Congratulations! You successfully deployed a network load balancer and exposed an application from your cloud to the internet. Time to get yourself an iced coffee. - - -Using Netris Custom Resources -============================= - -Introduction to Netris Custom Resources ---------------------------------------- - -In addition to provisioning on-demand network load balancers, Netris Operator can also provide automatic creation of network services based on Kubernetes CRD objects. Let's take a look at a few common examples: - -L4LB Custom Resource --------------------- - -In the previous section, when we changed the service type from "ClusterIP" to "LoadBalancer", Netris Operator detected a new request for a network load balancer, then it created L4LB custom resources. Let's see them: - -.. code-block:: shell-session - - kubectl get l4lb - -As you can see, there are two L4LB resources, one for each podinfo's service port: - -.. 
code-block:: shell-session - - NAME STATE FRONTEND PORT SITE TENANT STATUS AGE - podinfo-default-66d44feb-0278-412a-a32d-73afe011f2c6-tcp-80 active 50.117.59.205 80/TCP US/NYC Admin OK 33m - podinfo-default-66d44feb-0278-412a-a32d-73afe011f2c6-tcp-9999 active 50.117.59.205 9999/TCP US/NYC Admin OK 32m - -You can't edit/delete them, because Netris Operator will recreate them based on what was originally deployed in the service specifications. - -Instead, let's create a new load balancer using the CRD method. This method allows us to create L4 load balancers for services outside of what is being created natively with the Kubernetes service schema. Our new L4LB's backends will be "srv04-nyc" & "srv05-nyc" on TCP port 80. These servers are already running the Nginx web server, with the hostname present in the index.html file. - -Create a yaml file: - -.. code-block:: shell-session - - cat << EOF > srv04-5-nyc-http.yaml - apiVersion: k8s.netris.ai/v1alpha1 - kind: L4LB - metadata: - name: srv04-5-nyc-http - spec: - ownerTenant: Admin - site: US/NYC - state: active - protocol: tcp - frontend: - port: 80 - backend: - - 192.168.45.64:80 - - 192.168.46.65:80 - check: - type: tcp - timeout: 3000 - EOF - -And apply it: - -.. code-block:: shell-session - - kubectl apply -f srv04-5-nyc-http.yaml - -Inspect the new L4LB resources via kubectl: - -.. code-block:: shell-session - - kubectl get l4lb - -As you can see, provisioning started: - -.. code-block:: shell-session - - NAME STATE FRONTEND PORT SITE TENANT STATUS AGE - podinfo-default-d07acd0f-51ea-429a-89dd-8e4c1d6d0a86-tcp-80 active 50.117.59.205 80/TCP US/NYC Admin OK 2m17s - podinfo-default-d07acd0f-51ea-429a-89dd-8e4c1d6d0a86-tcp-9999 active 50.117.59.205 9999/TCP US/NYC Admin OK 3m47s - srv04-5-nyc-http active 50.117.59.206 80/TCP US/NYC Admin Provisioning 6s - -When provisioning is finished, you should be able to connect to L4LB. Try to curl, using the L4LB frontend address displayed in the above command output: - -.. code-block:: shell-session - - curl 50.117.59.206 - -You will see the servers' hostname in curl output: - -.. code-block:: shell-session - - SRV04-NYC - -You can also inspect the L4LB in the Netris Controller web interface: - -.. image:: /images/sandbox-l4lbs.png - :align: center - -V-Net Custom Resource ---------------------- - -If one of the backend health-checks is marked as unhealthy like in the screenshot above, it means you didn't create "vnet-customer" V-Net as described in the :ref:`"Learn by Creating Services"` manual. If that's the case, let's create it from Kubernetes using the V-Net custom resource. - -Let's create our V-Net manifest: - -.. code-block:: shell-session - - cat << EOF > vnet-customer.yaml - apiVersion: k8s.netris.ai/v1alpha1 - kind: VNet - metadata: - name: vnet-customer - spec: - ownerTenant: Demo - guestTenants: [] - sites: - - name: US/NYC - gateways: - - 192.168.46.1/24 - switchPorts: - - name: swp2@sw22-nyc - EOF - -And apply it: - -.. code-block:: shell-session - - kubectl apply -f vnet-customer.yaml - -Let's check our V-Net resources in Kubernetes: - -.. code-block:: shell-session - - kubectl get vnet - -As you can see, provisioning for our new V-Net has started: - -.. code-block:: shell-session - - NAME STATE GATEWAYS SITES OWNER STATUS AGE - vnet-customer active 192.168.46.1/24 US/NYC Demo Active 10s - -After provisioning has completed, the L4LB's checks should work for both backend servers, and incoming requests should be balanced between them. - -Let's curl several times to see that: - -.. 
code-block:: shell-session - - curl 50.117.59.206 - -As we can see, the curl request shows the behavior of "round robin" between the backends: - -.. code-block:: shell-session - - SRV05-NYC - curl 50.117.59.206 - - SRV05-NYC - curl 50.117.59.206 - - SRV05-NYC - curl 50.117.59.206 - - SRV04-NYC - -.. note:: - - *If intermittently the result of the curl command is "Connection timed out", it is likely that the request went to the srv05-nyc backend, and the "Default ACL Policy" is set to "Deny". To remedy this, configure an ACL entry that will allow the srv05-nyc server to communicate with external addresses. For step-by-step instruction review the* :ref:`ACL documentation`. - -BTW, if you already created "vnet-customer" V-Net as described in the :ref:`"Learn by Creating Services"`, you may import that to k8s, by adding ``resource.k8s.netris.ai/import: "true"`` annotation in V-Net manifest, the manifest should look like this: - -.. code-block:: shell-session - - cat << EOF > vnet-customer.yaml - apiVersion: k8s.netris.ai/v1alpha1 - kind: VNet - metadata: - name: vnet-customer - annotations: - resource.k8s.netris.ai/import: "true" - spec: - ownerTenant: Demo - guestTenants: [] - sites: - - name: US/NYC - gateways: - - 192.168.46.1/24 - switchPorts: - - name: swp2@sw22-nyc - EOF - -Apply it: - -.. code-block:: shell-session - - kubectl apply -f vnet-customer.yaml - -After applying the manifest containing "import" annotation, the V-Net, created from the Netris Controller web interface, will appear in k8s and you will be able to manage it from Kubernetes. - -.. code-block:: shell-session - - kubectl get vnet - - NAME STATE GATEWAYS SITES OWNER STATUS AGE - vnet-customer active 192.168.46.1/24 US/NYC Demo Active 7s - -BGP Custom Resource -------------------- - -Let's create a new BGP peer, that is listed in the :ref:`"Learn by Creating Services"`. - -Create a yaml file: - -.. code-block:: shell-session - - cat << EOF > iris-isp2-ipv4-customer.yaml - apiVersion: k8s.netris.ai/v1alpha1 - kind: BGP - metadata: - name: iris-isp2-ipv4-customer - spec: - site: US/NYC - hardware: SoftGate2 - neighborAs: 65007 - transport: - name: swp16@sw02-nyc - vlanId: 1092 - localIP: 50.117.59.118/30 - remoteIP: 50.117.59.117/30 - description: Example BGP to ISP2 - prefixListInbound: - - permit 0.0.0.0/0 - prefixListOutbound: - - permit 50.117.59.192/28 le 32 - EOF - -And apply it: - -.. code-block:: shell-session - - kubectl apply -f iris-isp2-ipv4-customer.yaml - -Check created BGP: - -.. code-block:: shell-session - - kubectl get bgp - -Allow up to 1 minute for both sides of the BGP sessions to come up: - -.. code-block:: shell-session - - NAME STATE BGP STATE PORT STATE NEIGHBOR AS LOCAL ADDRESS REMOTE ADDRESS AGE - iris-isp2-ipv4-customer enabled Link Up 65007 50.117.59.118/30 50.117.59.117/30 15s - -Then check the state again: - -.. code-block:: shell-session - - kubectl get bgp - -The output is similar to this: - -.. code-block:: shell-session - - NAME STATE BGP STATE PORT STATE NEIGHBOR AS LOCAL ADDRESS REMOTE ADDRESS AGE - iris-isp2-ipv4-customer enabled bgp: Established; prefix: 160; time: 00:01:27 Link Up 65007 50.117.59.118/30 50.117.59.117/30 2m3s - -Feel free to use the import annotation for this BGP if you created it from the Netris Controller web interface previously. - -Return to the Netris Controller and navigate to **Net → Topology** to see the new BGP neighbor you created. 
- -Importing Existing Resources from Netris Controller to Kubernetes ------------------------------------------------------------------ - -You can import any custom resources already created from the Netris Controller to k8s by adding the following annotation: - -.. code-block:: yaml - - resource.k8s.netris.ai/import: "true" - -Otherwise, if try to apply them without the "import" annotation, the Netris Operator will complain that the resource with such name or specs already exists. - -After importing resources to k8s, they will belong to the Netris Operator, and you won't be able to edit/delete them directly from the Netris Controller web interface, because the Netris Operator will put everything back, as declared in the custom resources. - -Reclaim Policy --------------- - -There is also one useful annotation. So suppose you want to remove some custom resource from k8s, and want to prevent its deletion from the Netris Controller, for that you can use "reclaimPolicy" annotation: - -.. code-block:: yaml - - resource.k8s.netris.ai/reclaimPolicy: "retain" - -Just add this annotation in any custom resource while creating it. Or if the custom resource has already been created, change the ``"delete"`` value to ``"retain"`` for key ``resource.k8s.netris.ai/reclaimPolicy`` in the resource annotation. After that, you'll be able to delete any Netris Custom Resource from Kubernetes, and it won't be deleted from the Netris Controller. - -.. seealso:: - - See all options and examples for Netris Custom Resources `here `_. - - -Netris Calico CNI Integration -============================= - -Netris Operator can integrate with Calico CNI, in your Sandbox k8s cluster, Calico has already been configured as the CNI, so you can try this integration. It will automatically create BGP peering between cluster nodes and the leaf/TOR switch for each node, then to clean up it will disable Calico Node-to-Node mesh. To understand why you need to configure peering between Kubernetes nodes and the leaf/TOR switch, and why you should disable Node-to-Node mesh, review the `calico docs `_. - -Integration is very simple, you just need to add the annotation in calico's ``bgpconfigurations`` custom resource. Before doing that, let's see the current state of ``bgpconfigurations``: - -.. code-block:: shell-session - - kubectl get bgpconfigurations default -o yaml - -As we can see, ``nodeToNodeMeshEnabled`` is enabled, and ``asNumber`` is 64512 (it's Calico default AS number): - -.. code-block:: yaml - - apiVersion: crd.projectcalico.org/v1 - kind: BGPConfiguration - metadata: - annotations: - ... - name: default - ... - spec: - asNumber: 64512 - logSeverityScreen: Info - nodeToNodeMeshEnabled: true - -Let's enable the "netris-calico" integration: - -.. code-block:: shell-session - - kubectl annotate bgpconfigurations default manage.k8s.netris.ai/calico='true' - -Let's check our BGP resources in k8s: - -.. code-block:: shell-session - - kubectl get bgp - -Here are our freshly created BGPs, one for each k8s node: - -.. 
code-block:: shell-session - - NAME STATE BGP STATE PORT STATE NEIGHBOR AS LOCAL ADDRESS REMOTE ADDRESS AGE - iris-isp2-ipv4-customer enabled bgp: Established; prefix: 160; time: 00:06:18 Link Up 65007 50.117.59.118/30 50.117.59.117/30 7m59s - sandbox9-srv06-nyc-192.168.110.66 enabled 4230000000 192.168.110.1/24 192.168.110.66/24 26s - sandbox9-srv07-nyc-192.168.110.67 enabled 4230000001 192.168.110.1/24 192.168.110.67/24 26s - sandbox9-srv08-nyc-192.168.110.68 enabled 4230000002 192.168.110.1/24 192.168.110.68/24 26s - -You might notice that peering neighbor AS is different from Calico's default 64512. The is because the Netris Operator is setting a particular AS number for each node. - -Allow up to 1 minute for the BGP sessions to come up, then check BGP resources again: - -.. code-block:: shell-session - - kubectl get bgp - -As we can see, our BGP peers have become established: - -.. code-block:: shell-session - - NAME STATE BGP STATE PORT STATE NEIGHBOR AS LOCAL ADDRESS REMOTE ADDRESS AGE - iris-isp2-ipv4-customer enabled bgp: Established; prefix: 160; time: 00:07:48 Link Up 65007 50.117.59.118/30 50.117.59.117/30 8m41s - sandbox9-srv06-nyc-192.168.110.66 enabled bgp: Established; prefix: 5; time: 00:00:44 N/A 4230000000 192.168.110.1/24 192.168.110.66/24 68s - sandbox9-srv07-nyc-192.168.110.67 enabled bgp: Established; prefix: 5; time: 00:00:19 N/A 4230000001 192.168.110.1/24 192.168.110.67/24 68s - sandbox9-srv08-nyc-192.168.110.68 enabled bgp: Established; prefix: 5; time: 00:00:44 N/A 4230000002 192.168.110.1/24 192.168.110.68/24 68s - -Now let's check if ``nodeToNodeMeshEnabled`` is still enabled: - -.. code-block:: shell-session - - kubectl get bgpconfigurations default -o yaml - -It is disabled, which means the "netris-calico" integration process is finished: - -.. code-block:: yaml - - apiVersion: crd.projectcalico.org/v1 - kind: BGPConfiguration - metadata: - annotations: - manage.k8s.netris.ai/calico: "true" - ... - name: default - ... - spec: - asNumber: 64512 - nodeToNodeMeshEnabled: false - -.. note:: - - Netris Operator won't disable Node-to-Node mesh until all BGP peers of all the nodes in the k8s cluster become established. - -Finally, let's check if our earlier deployed "Podinfo" application is still working when Calico Node-to-Node mesh is disabled: - -.. code-block:: shell-session - - curl 50.117.59.205 - -Yes, it works: - -.. code-block:: json - - { - "hostname": "podinfo-576d5bf6bd-mfpdt", - "version": "6.0.0", - "revision": "", - "color": "#34577c", - "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", - "message": "greetings from podinfo v6.0.0", - "goos": "linux", - "goarch": "amd64", - "runtime": "go1.16.5", - "num_goroutine": "8", - "num_cpu": "4" - } - -Disabling Netris-Calico Integration ------------------------------------ - -To disable "Netris-Calico" integration, delete the annotation from Calico's ``bgpconfigurations`` resource: - -.. code-block:: shell-session - - kubectl annotate bgpconfigurations default manage.k8s.netris.ai/calico- - -or change its value to ``"false"``. - -.. topic:: Milestone 2 - - Congratulations! You completed Milestone 2. Time to get yourself another iced coffee or even a beer depending on what time it is! +.. _s9-k8s: + +*************************************** +Learn Netris Operations with Kubernetes +*************************************** + +.. contents:: + :local: + +Intro +===== +The Sandbox environment offers a pre-existing 3 node Kubernetes cluster deployed through K3S in `HA Mode `_. 
To enable user access to the Kubernetes API, a dedicated Netris L4LB service has been created in the Sandbox Controller. Furthermore, this L4LB address serves as the ``K3S_URL`` environment variable for all nodes within the cluster.
+
+.. image:: /images/sandbox-l4lb-kubeapi.png
+    :align: center
+    :alt: Sandbox L4LB KubeAPI
+    :target: ../../_images/sandbox-l4lb-kubeapi.png
+
+To access the built-in Kubernetes cluster, place the kubeconfig file that you received in the introductory email at ``~/.kube/config``, or set the ``KUBECONFIG`` environment variable on your local machine using ``export KUBECONFIG=~/Downloads/config``. Afterwards, try connecting to the k8s cluster:
+
+.. code-block:: shell-session
+
+   kubectl cluster-info
+
+If your output matches the one below, you have successfully connected to the Sandbox cluster:
+
+.. code-block:: shell-session
+
+   Kubernetes control plane is running at https://api.k8s-sandbox9.netris.io:6443
+   CoreDNS is running at https://api.k8s-sandbox9.netris.io:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
+   Metrics-server is running at https://api.k8s-sandbox9.netris.io:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
+
+   To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
+
+Install Netris Operator
+=======================
+
+The first step to integrate the Netris Controller with the Kubernetes API is to install the Netris Operator. Installation can be accomplished either by applying the regular Kubernetes manifests or by using a `helm chart `_. For this example we will use the regular manifests:
+
+1. Install the latest Netris Operator:
+
+.. code-block:: shell-session
+
+   kubectl apply -f https://github.com/netrisai/netris-operator/releases/latest/download/netris-operator.yaml
+
+2. Create a credentials secret for the Netris Operator:
+
+.. code-block:: shell-session
+
+   kubectl -nnetris-operator create secret generic netris-creds \
+     --from-literal=host='https://sandbox9.netris.io' \
+     --from-literal=login='demo' --from-literal=password='Your Demo user pass'
+
+3. Inspect the pod logs and make sure the operator is connected to the Netris Controller:
+
+.. code-block:: shell-session
+
+   kubectl -nnetris-operator logs -l netris-operator=controller-manager --all-containers -f
+
+Example output demonstrating the successful operation of the Netris Operator:
+
+.. code-block:: shell-session
+
+   {"level":"info","ts":1629994653.6441543,"logger":"controller","msg":"Starting workers","reconcilerGroup":"k8s.netris.ai","reconcilerKind":"L4LB","controller":"l4lb","worker count":1}
+
+.. note::
+
+   After installing the Netris Operator, your Kubernetes cluster and physical network control planes are connected.
+
+Deploy an Application with an On-Demand Netris Load Balancer
+============================================================
+
+In this scenario we will install a simple application that requires a network load balancer.
+
+Install the application `"Podinfo" `_:
+
+.. code-block:: shell-session
+
+   kubectl apply -k github.com/stefanprodan/podinfo/kustomize
+
+Get the list of pods and services in the default namespace:
+
+.. code-block:: shell-session
+
+   kubectl get po,svc
+
+As you can see, the service type is "ClusterIP":
+
+.. 
code-block:: shell-session
+
+   NAME                           READY   STATUS    RESTARTS   AGE
+   pod/podinfo-7cf557d9d7-6gfwx   1/1     Running   0          34s
+   pod/podinfo-7cf557d9d7-nb2t7   1/1     Running   0          18s
+
+   NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
+   service/kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP             33m
+   service/podinfo      ClusterIP   10.43.68.103   <none>        9898/TCP,9999/TCP   35s
+
+In order to request access from outside, change the type to "LoadBalancer":
+
+.. code-block:: shell-session
+
+   kubectl patch svc podinfo -p '{"spec":{"type":"LoadBalancer"}}'
+
+Check the services again:
+
+.. code-block:: shell-session
+
+   kubectl get svc
+
+Now we can see that the service type has changed to LoadBalancer, and "EXTERNAL-IP" has switched to the pending state:
+
+.. code-block:: shell-session
+
+   NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
+   kubernetes   ClusterIP      10.43.0.1      <none>        443/TCP                         37m
+   podinfo      LoadBalancer   10.43.68.103   <pending>     9898:32486/TCP,9999:30455/TCP   3m45s
+
+In the Netris Controller web interface, navigate to **Services → L4 Load Balancer**, where you may see the L4LB provisioning in real time. If you do not see the provisioning process, it most likely has already completed. Look for the service named **"podinfo-xxxxxxxx"**.
+
+.. image:: /images/sandbox-podinfo-prov.png
+    :align: center
+    :alt: Sandbox PodInfo Provisioning
+    :target: ../../_images/sandbox-podinfo-prov.png
+
+After provisioning has finished, let's look at the service in k8s one more time:
+
+.. code-block:: shell-session
+
+   kubectl get svc
+
+You can see that "EXTERNAL-IP" has been injected into Kubernetes:
+
+.. code-block:: shell-session
+
+   NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                         AGE
+   kubernetes   ClusterIP      10.43.0.1      <none>          443/TCP                         29m
+   podinfo      LoadBalancer   10.43.42.190   50.117.59.205   9898:30771/TCP,9999:30510/TCP   5m14s
+
+Let's try to curl it (remember to replace the IP below with the IP assigned in the previous command's output):
+
+.. code-block:: shell-session
+
+   curl 50.117.59.205:9898
+
+The application is now accessible directly on the internet:
+
+.. code-block:: json
+
+   {
+     "hostname": "podinfo-7cf557d9d7-6gfwx",
+     "version": "6.6.0",
+     "revision": "357009a86331a987811fefc11be1350058da33fc",
+     "color": "#34577c",
+     "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
+     "message": "greetings from podinfo v6.6.0",
+     "goos": "linux",
+     "goarch": "amd64",
+     "runtime": "go1.21.7",
+     "num_goroutine": "8",
+     "num_cpu": "2"
+   }
+
+As you can see, the "Podinfo" developers chose to expose HTTP on port 9898; let's switch it to port 80:
+
+.. code-block:: shell-session
+
+   kubectl patch svc podinfo --type='json' -p='[{"op": "replace", "path": "/spec/ports/0/port", "value":80}]'
+
+Wait a few seconds; you can watch the provisioning process on the controller:
+
+.. image:: /images/sandbox-podinfo-ready.png
+    :align: center
+    :alt: Sandbox PodInfo Ready
+    :target: ../../_images/sandbox-podinfo-ready.png
+
+Curl again, this time without specifying a port:
+
+.. code-block:: shell-session
+
+   curl 50.117.59.205
+
+The output is similar to this:
+
+.. code-block:: json
+
+   {
+     "hostname": "podinfo-7cf557d9d7-6gfwx",
+     "version": "6.6.0",
+     "revision": "357009a86331a987811fefc11be1350058da33fc",
+     "color": "#34577c",
+     "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
+     "message": "greetings from podinfo v6.6.0",
+     "goos": "linux",
+     "goarch": "amd64",
+     "runtime": "go1.21.7",
+     "num_goroutine": "8",
+     "num_cpu": "2"
+   }
+
+You can also verify the application is reachable by putting this IP address directly into your browser.
+
+.. 
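note::
+
+   If you prefer to grab the external address from the command line rather than copying it from the table above, a jsonpath query can be used (a sketch; it assumes the address is published under ``status.loadBalancer.ingress``):
+
+   .. code-block:: shell-session
+
+      # assumes the LB address is published under status.loadBalancer.ingress
+      kubectl get svc podinfo -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
+
+.. 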
topic:: Milestone 1
+
+   Congratulations! You successfully deployed a network load balancer and exposed an application from your cloud to the internet. Time to get yourself an iced coffee.
+
+
+Using Netris Custom Resources
+=============================
+
+Introduction to Netris Custom Resources
+---------------------------------------
+
+In addition to provisioning on-demand network load balancers, the Netris Operator can also automatically create network services based on Kubernetes CRD objects. Let's take a look at a few common examples.
+
+L4LB Custom Resource
+--------------------
+
+In the previous section, when we changed the service type from "ClusterIP" to "LoadBalancer", the Netris Operator detected a new request for a network load balancer and created L4LB custom resources. Let's see them:
+
+.. code-block:: shell-session
+
+   kubectl get l4lb
+
+As you can see, there are two L4LB resources, one for each of podinfo's service ports:
+
+.. code-block:: shell-session
+
+   NAME                                                             STATE    FRONTEND        PORT       SITE     TENANT   STATUS   AGE
+   podinfo-default-5bdf0a53-027d-449f-8896-547e06028c6b-tcp-80     active   50.117.59.205   80/TCP     US/NYC   Admin    OK       7m21s
+   podinfo-default-5bdf0a53-027d-449f-8896-547e06028c6b-tcp-9999   active   50.117.59.205   9999/TCP   US/NYC   Admin    OK       15m
+
+You can't edit or delete them, because the Netris Operator will recreate them based on what was originally deployed in the service specifications.
+
+Instead, let's create a new load balancer using the CRD method. This method allows us to create L4 load balancers for services outside of what is created natively through the Kubernetes service schema. Our new L4LB's backends will be "srv04-nyc" & "srv05-nyc" on TCP port 80. These servers are already running the Nginx web server, with the hostname present in the index.html file.
+
+Create a yaml file:
+
+.. code-block:: shell-session
+
+   cat << EOF > srv04-5-nyc-http.yaml
+   apiVersion: k8s.netris.ai/v1alpha1
+   kind: L4LB
+   metadata:
+     name: srv04-5-nyc-http
+   spec:
+     ownerTenant: Admin
+     site: US/NYC
+     state: active
+     protocol: tcp
+     frontend:
+       port: 80
+     backend:
+       - 192.168.45.64:80
+       - 192.168.46.65:80
+     check:
+       type: tcp
+       timeout: 3000
+   EOF
+
+And apply it:
+
+.. code-block:: shell-session
+
+   kubectl apply -f srv04-5-nyc-http.yaml
+
+Inspect the new L4LB resources via kubectl:
+
+.. code-block:: shell-session
+
+   kubectl get l4lb
+
+As you can see, provisioning has started:
+
+.. code-block:: shell-session
+
+   NAME                                                             STATE    FRONTEND        PORT       SITE     TENANT   STATUS         AGE
+   podinfo-default-5bdf0a53-027d-449f-8896-547e06028c6b-tcp-80     active   50.117.59.205   80/TCP     US/NYC   Admin    OK             9m56s
+   podinfo-default-5bdf0a53-027d-449f-8896-547e06028c6b-tcp-9999   active   50.117.59.205   9999/TCP   US/NYC   Admin    OK             17m
+   srv04-5-nyc-http                                                 active   50.117.59.206   80/TCP     US/NYC   Admin    Provisioning   5s
+
+When provisioning is finished, you should be able to connect to the L4LB. Try to curl it, using the L4LB frontend address displayed in the above command output:
+
+.. code-block:: shell-session
+
+   curl 50.117.59.206
+
+You will see the server's hostname in the curl output:
+
+.. code-block:: shell-session
+
+   SRV04-NYC
+
+You can also inspect the L4LB in the Netris Controller web interface:
+
+.. 
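note::
+
+   The same information shown in the web interface can also be pulled straight from Kubernetes by dumping the full custom resource (a sketch; the exact fields in the output may vary by operator version):
+
+   .. code-block:: shell-session
+
+      # replace srv04-5-nyc-http with the name of the L4LB resource you want to inspect
+      kubectl get l4lb srv04-5-nyc-http -o yaml
+
+.. 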
image:: /images/sandbox-l4lbs.png
+    :align: center
+    :alt: Sandbox L4LBs
+    :target: ../../_images/sandbox-l4lbs.png
+
+V-Net Custom Resource
+---------------------
+
+If one of the backend health checks is marked as unhealthy, as in the screenshot above, it means you have not yet created the "vnet-customer" V-Net described in the :ref:`"Learn by Creating Services"` manual. In that case, let's create it from Kubernetes using the V-Net custom resource.
+
+Let's create our V-Net manifest:
+
+.. code-block:: shell-session
+
+   cat << EOF > vnet-customer.yaml
+   apiVersion: k8s.netris.ai/v1alpha1
+   kind: VNet
+   metadata:
+     name: vnet-customer
+   spec:
+     ownerTenant: Demo
+     guestTenants: []
+     vlanId: "46"
+     sites:
+       - name: US/NYC
+         gateways:
+           - prefix: 192.168.46.1/24
+         switchPorts:
+           - name: swp5@sw12-nyc
+             untagged: "no"
+           - name: swp5@sw21-nyc
+             untagged: "no"
+   EOF
+
+And apply it:
+
+.. code-block:: shell-session
+
+   kubectl apply -f vnet-customer.yaml
+
+Let's check our V-Net resources in Kubernetes:
+
+.. code-block:: shell-session
+
+   kubectl get vnet
+
+As you can see, provisioning for our new V-Net has started:
+
+.. code-block:: shell-session
+
+   NAME            STATE    GATEWAYS          SITES    OWNER   STATUS   AGE
+   vnet-customer   active   192.168.46.1/24   US/NYC   Demo    Active   10s
+
+After provisioning has completed, the L4LB's checks should work for both backend servers, and incoming requests should be balanced between them.
+
+Let's curl several times to see that:
+
+.. code-block:: shell-session
+
+   curl 50.117.59.206
+
+As we can see, the curl requests are distributed "round robin" between the backends:
+
+.. code-block:: shell-session
+
+   SRV05-NYC
+   curl 50.117.59.206
+
+   SRV05-NYC
+   curl 50.117.59.206
+
+   SRV04-NYC
+   curl 50.117.59.206
+
+   SRV04-NYC
+
+.. note::
+
+   *If the curl command intermittently results in "Connection timed out", it is likely that the request went to the srv05-nyc backend and the "Default ACL Policy" is set to "Deny". To remedy this, configure an ACL entry that allows the srv05-nyc server to communicate with external addresses. For step-by-step instructions, review the* :ref:`ACL documentation`.
+
+If you already created the "vnet-customer" V-Net as described in :ref:`"Learn by Creating Services"`, you can instead import it into k8s by adding the ``resource.k8s.netris.ai/import: "true"`` annotation to the V-Net manifest. The manifest should look like this:
+
+.. code-block:: shell-session
+
+   cat << EOF > vnet-customer.yaml
+   apiVersion: k8s.netris.ai/v1alpha1
+   kind: VNet
+   metadata:
+     name: vnet-customer
+     annotations:
+       resource.k8s.netris.ai/import: "true"
+   spec:
+     ownerTenant: Demo
+     guestTenants: []
+     vlanId: "46"
+     sites:
+       - name: US/NYC
+         gateways:
+           - prefix: 192.168.46.1/24
+         switchPorts:
+           - name: swp5@sw12-nyc
+             untagged: "no"
+           - name: swp5@sw21-nyc
+             untagged: "no"
+   EOF
+
+Apply it:
+
+.. code-block:: shell-session
+
+   kubectl apply -f vnet-customer.yaml
+
+After applying the manifest containing the "import" annotation, the V-Net that was created from the Netris Controller web interface will appear in k8s, and you will be able to manage it from Kubernetes.
+
+.. code-block:: shell-session
+
+   kubectl get vnet
+
+   NAME            STATE    GATEWAYS          SITES    OWNER   STATUS   AGE
+   vnet-customer   active   192.168.46.1/24   US/NYC   Demo    Active   2m
+
+BGP Custom Resource
+-------------------
+
+Let's create the new BGP peer that is listed in :ref:`"Learn by Creating Services"`.
+
+Create a yaml file:
+
+.. 
code-block:: shell-session
+
+   cat << EOF > iris-isp2-ipv4-customer.yaml
+   apiVersion: k8s.netris.ai/v1alpha1
+   kind: BGP
+   metadata:
+     name: iris-isp2-ipv4-customer
+   spec:
+     site: US/NYC
+     hardware: SoftGate2
+     neighborAs: 65007
+     transport:
+       name: swp16@sw02-nyc
+       vlanId: 1092
+     localIP: 50.117.59.118/30
+     remoteIP: 50.117.59.117/30
+     description: Example BGP to ISP2
+     prefixListOutbound:
+       - permit 50.117.59.192/28 le 32
+   EOF
+
+And apply it:
+
+.. code-block:: shell-session
+
+   kubectl apply -f iris-isp2-ipv4-customer.yaml
+
+Check the created BGP:
+
+.. code-block:: shell-session
+
+   kubectl get bgp
+
+Allow up to 1 minute for both sides of the BGP session to come up:
+
+.. code-block:: shell-session
+
+   NAME                      STATE     BGP STATE   PORT STATE   NEIGHBOR AS   LOCAL ADDRESS      REMOTE ADDRESS     AGE
+   iris-isp2-ipv4-customer   enabled               Link Up      65007         50.117.59.118/30   50.117.59.117/30   15s
+
+Then check the state again:
+
+.. code-block:: shell-session
+
+   kubectl get bgp
+
+The output is similar to this:
+
+.. code-block:: shell-session
+
+   NAME                      STATE     BGP STATE                                           PORT STATE   NEIGHBOR AS   LOCAL ADDRESS      REMOTE ADDRESS     AGE
+   iris-isp2-ipv4-customer   enabled   bgp: Established; prefix: 957240; time: 00:04:02                65007         50.117.59.118/30   50.117.59.117/30   2m3s
+
+Feel free to use the import annotation for this BGP if you created it from the Netris Controller web interface previously.
+
+Return to the Netris Controller and navigate to **Network → Topology** to see the new BGP neighbor you created.
+
+Importing Existing Resources from Netris Controller to Kubernetes
+------------------------------------------------------------------
+
+You can import any custom resources already created from the Netris Controller into k8s by adding the following annotation:
+
+.. code-block:: yaml
+
+   resource.k8s.netris.ai/import: "true"
+
+Otherwise, if you try to apply them without the "import" annotation, the Netris Operator will complain that a resource with that name or spec already exists.
+
+After importing resources into k8s, they will belong to the Netris Operator, and you won't be able to edit/delete them directly from the Netris Controller web interface, because the Netris Operator will put everything back, as declared in the custom resources.
+
+Reclaim Policy
+--------------
+
+Another useful annotation is "reclaimPolicy". Suppose you want to remove a custom resource from k8s but prevent its deletion from the Netris Controller; for that you can use the "reclaimPolicy" annotation:
+
+.. code-block:: yaml
+
+   resource.k8s.netris.ai/reclaimPolicy: "retain"
+
+Just add this annotation to any custom resource when creating it. Or, if the custom resource has already been created, change the value of the ``resource.k8s.netris.ai/reclaimPolicy`` annotation key from ``"delete"`` to ``"retain"``. After that, you'll be able to delete any Netris Custom Resource from Kubernetes, and it won't be deleted from the Netris Controller.
+
+.. seealso::
+
+   See all options and examples for Netris Custom Resources `here `_.
+
+
+Netris Calico CNI Integration
+=============================
+
+Netris Operator can integrate with Calico CNI. In your Sandbox k8s cluster, Calico has already been configured as the CNI, so you can try this integration. It will automatically create BGP peering between the cluster nodes and the leaf/TOR switch for each node, and then disable Calico's Node-to-Node mesh to clean up.
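+
+.. note::
+
+   If you would like to confirm that Calico is indeed the CNI running in the cluster before enabling the integration, a quick check is to list its pods (a sketch; the namespace and pod names depend on how Calico was installed):
+
+   .. code-block:: shell-session
+
+      # namespace and pod names depend on how Calico was installed
+      kubectl get pods -A | grep calico
+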
To understand why you need to configure peering between Kubernetes nodes and the leaf/TOR switch, and why you should disable Node-to-Node mesh, review the `Calico docs `_.
+
+Integration is very simple: you just need to add an annotation to Calico's ``bgpconfigurations`` custom resource. Before doing that, let's see the current state of ``bgpconfigurations``:
+
+.. code-block:: shell-session
+
+   kubectl get bgpconfigurations default -o yaml
+
+As we can see, ``nodeToNodeMeshEnabled`` is enabled:
+
+.. code-block:: yaml
+
+   apiVersion: projectcalico.org/v3
+   kind: BGPConfiguration
+   metadata:
+     annotations:
+       ...
+     name: default
+     ...
+   spec:
+     nodeToNodeMeshEnabled: true
+
+Let's enable the "netris-calico" integration:
+
+.. code-block:: shell-session
+
+   kubectl annotate bgpconfigurations default manage.k8s.netris.ai/calico='true'
+
+Let's check our BGP resources in k8s:
+
+.. code-block:: shell-session
+
+   kubectl get bgp
+
+Here are our freshly created BGPs, one for each k8s node:
+
+.. code-block:: shell-session
+
+   NAME                           STATE     BGP STATE                                           PORT STATE   NEIGHBOR AS   LOCAL ADDRESS      REMOTE ADDRESS      AGE
+   iris-isp2-ipv4-customer        enabled   bgp: Established; prefix: 957241; time: 00:15:03                 65007         50.117.59.118/30   50.117.59.117/30    16m
+   sandbox-srv06-192.168.110.66   enabled                                                                    4230000000    192.168.110.1/24   192.168.110.66/24   37s
+   sandbox-srv07-192.168.110.67   enabled                                                                    4230000001    192.168.110.1/24   192.168.110.67/24   37s
+   sandbox-srv08-192.168.110.68   enabled                                                                    4230000002    192.168.110.1/24   192.168.110.68/24   37s
+
+You might notice that the peering neighbor AS is different from Calico's default 64512. This is because the Netris Operator sets a particular AS number for each node.
+
+Allow up to 1 minute for the BGP sessions to come up, then check the BGP resources again:
+
+.. code-block:: shell-session
+
+   kubectl get bgp
+
+As we can see, our BGP peers have become established:
+
+.. code-block:: shell-session
+
+   NAME                           STATE     BGP STATE                                           PORT STATE   NEIGHBOR AS   LOCAL ADDRESS      REMOTE ADDRESS      AGE
+   iris-isp2-ipv4-customer        enabled   bgp: Established; prefix: 957194; time: 00:18:24                 65007         50.117.59.118/30   50.117.59.117/30    19m
+   sandbox-srv06-192.168.110.66   enabled   bgp: Established; prefix: 1; time: 00:01:26         N/A          4230000000    192.168.110.1/24   192.168.110.66/24   2m7s
+   sandbox-srv07-192.168.110.67   enabled   bgp: Established; prefix: 1; time: 00:01:26         N/A          4230000001    192.168.110.1/24   192.168.110.67/24   2m7s
+   sandbox-srv08-192.168.110.68   enabled   bgp: Established; prefix: 1; time: 00:01:26         N/A          4230000002    192.168.110.1/24   192.168.110.68/24   2m7s
+
+Now let's check whether ``nodeToNodeMeshEnabled`` is still enabled:
+
+.. code-block:: shell-session
+
+   kubectl get bgpconfigurations default -o yaml
+
+It is disabled, which means the "netris-calico" integration process is finished:
+
+.. code-block:: yaml
+
+   apiVersion: projectcalico.org/v3
+   kind: BGPConfiguration
+   metadata:
+     annotations:
+       ...
+       manage.k8s.netris.ai/calico: "true"
+       ...
+     name: default
+     ...
+   spec:
+     nodeToNodeMeshEnabled: false
+
+.. note::
+
+   The Netris Operator won't disable Node-to-Node mesh until the BGP peers of all the nodes in the k8s cluster become established.
+
+Finally, let's check that the "Podinfo" application we deployed earlier is still working now that the Calico Node-to-Node mesh is disabled:
+
+.. code-block:: shell-session
+
+   curl 50.117.59.205
+
+Yes, it works:
+
+.. 
code-block:: json + + { + "hostname": "podinfo-7cf557d9d7-nb2t7", + "version": "6.6.0", + "revision": "357009a86331a987811fefc11be1350058da33fc", + "color": "#34577c", + "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif", + "message": "greetings from podinfo v6.6.0", + "goos": "linux", + "goarch": "amd64", + "runtime": "go1.21.7", + "num_goroutine": "8", + "num_cpu": "2" + } + +Disabling Netris-Calico Integration +----------------------------------- + +To disable "Netris-Calico" integration, delete the annotation from Calico's ``bgpconfigurations`` resource: + +.. code-block:: shell-session + + kubectl annotate bgpconfigurations default manage.k8s.netris.ai/calico- + +or change its value to ``"false"``. + +.. topic:: Milestone 2 + + Congratulations on completing Milestone 2! diff --git a/sandbox/Sandbox9/sandbox-info.rst b/sandbox/Sandbox9/sandbox-info.rst index b9998b2d5b..5b6eac6275 100644 --- a/sandbox/Sandbox9/sandbox-info.rst +++ b/sandbox/Sandbox9/sandbox-info.rst @@ -1,134 +1,137 @@ -************************* -Welcome to Netris Sandbox -************************* - -Netris Sandbox is a ready-to-use environment for testing Netris automatic NetOps. -We have pre-created some example services for you, details of which can be found in the :ref:`"Provided Example Configurations"` document. Feel free to view, edit, delete, and create new services. In case of any questions, reach out to us on `Slack `__. - -The credentials for the sandbox have been provided to you by email in response to your Sandbox request. - -The Sandbox environment includes: - - -* :ref:`Netris Controller`: A cloud-hosted Netris Controller, loaded with examples. -* :ref:`Switching fabric`: Two spine switches and four leaf switches, all operated by Netris. -* :ref:`SoftGates`: Two SoftGate gateway nodes for border routing, L4 Load Balancing, site-to-site VPN, and NAT. Both operated by Netris. -* **Linux servers**: Five Linux servers, with root access where you can run any applications for your tests. -* **Kubernetes cluster**: A 3 node Kubernetes cluster, user integratable with Netris controller, feel free to deploy any applications for your tests. -* **ISP**: Internet upstream with IRIS ISP, providing the sandbox Internet connectivity with real-world routable public IP addresses. - -.. _s9-topology: - -Topology diagram -================ - -.. image:: /images/sandbox_topology.png - :align: center - :alt: Sandbox Topology - :target: ../../_images/sandbox_topology.png - - - -Netris Controller -================= -https://sandbox9.netris.io - -Linux servers -============= - -Example pre-configured Netris services: - * **srv01-nyc**, **srv02-nyc**, **srv03-nyc** & **Netris Controller** - are consuming :ref:`"ROH (Routing on the Host)"` Netris example service, see **Services → ROH.** - * **srv01-nyc**, **srv02-nyc** - are behind :ref:`"Anycast L3 load balancer"`, see **Services → Load Balancer**. - * **srv04-nyc**, **srv05-nyc** - are consuming :ref:`"V-NET (routed VXLAN)"` Netris service, see **Services → V-NET**. - - -**Accessing the Linux servers:** - -.. code-block:: shell-session - - srv01-nyc: ssh demo@166.88.17.22 -p 30061 - srv02-nyc: ssh demo@166.88.17.22 -p 30062 - srv03-nyc: ssh demo@166.88.17.22 -p 30063 - srv04-nyc: ssh demo@166.88.17.22 -p 30064 - srv05-nyc: ssh demo@166.88.17.22 -p 30065 - - -Kubernetes cluster -================== -This Sandbox provides an up and running 3 node Kubernetes cluster. 
You can integrate it with the Netris Controller by installing the **netris-operator**. Step-by-step instructions are included in the :ref:`"Learn Netris operations with Kubernetes"` document. - - -Upstream ISP -============ -This Sandbox also provides an upstream ISP service with real-world Internet routing configured through :ref:`"BGP"`. -There are two pre-configured examples under **NET → E-BGP** , one using IPv4 and the other using IPv6, which are advertising the public IP subnets belonging to the sandbox to the upstream ISP IRIS. - -ISP settings: - -.. code-block:: shell-session - - (pre-configured examples) - Name: iris-isp1-ipv4-example - BGP Router: Softage1 - Switch Port: swp16@sw01-nyc - Neighbor AS: 65007 - VLAN ID: 1091 - Local Address: 50.117.59.114/30 - Remote Address: 50.117.59.113/30 - Prefix List Inbound: permit 0.0.0.0/0 - Prefix List Outbound: permit 50.117.59.192/28 le 32 - - Name: iris-isp1-ipv6-example - BGP Router: Softage1 - Switch Port: swp16@sw01-nyc - Neighbor AS: 65007 - VLAN ID: 1091 - Local Address: 2607:f358:11:ffc0::13/127 - Remote Address: 2607:f358:11:ffc0::12/127 - Prefix List Inbound: permit ::/0 - Prefix List Outbound: permit 2607:f358:11:ffc9::/64 - - (configurable by you) - BGP Router: Softage2 - Switch Port: swp16@sw02-nyc - Neighbor AS: 65007 - VLAN ID: 1092 - Local Address: 50.117.59.118/30 - Remote Address: 50.117.59.117/30 - Prefix List Inbound: permit 0.0.0.0/0 - Prefix List Outbound: permit 50.117.59.192/28 le 32 - - -Networks Used -============= -Allocations and subnets defined under :ref:`"IPAM"`, see **Net → IPAM**. - -.. code-block:: shell-session - - | MANAGEMENT Allocation: 10.254.45.0/24 - |___ MANAGEMENT Subnet: 10.254.45.0/24 - - | LOOPBACK Allocation: 10.254.46.0/24 - |___ LOOPBACK Subnet: 10.254.46.0/24 - - | ROH Allocation: 192.168.44.0/24 - |___ ROH Subnet: 192.168.44.0/24 - - | EXAMPLE Allocation: 192.168.45.0/24 - |___ EXAMPLE Subnet: 192.168.45.0/24 - - | CUSTOMER Allocation: 192.168.46.0/24 - |___ CUSTOMER Subnet: 192.168.46.0/24 - - | K8s Allocation: 192.168.110.0/24 - |___ K8s Subnet: 192.168.110.0/24 - - | PUBLIC IPv4 Allocation: 50.117.59.192/28 - |___ PUBLIC LOOPBACK Subnet: 50.117.59.192/30 - |___ NAT Subnet: 50.117.59.196/30 - |___ L3 LOAD BALANCER Subnet: 50.117.59.200/30 - |___ L4 LOAD BALANCER Subnet: 50.117.59.204/30 - - | EXAMPLE IPv6 Allocation: 2607:f358:11:ffc9::/64 - |___ EXAMPLE IPv6 Subnet: 2607:f358:11:ffc9::/64 - +************************* +Welcome to Netris Sandbox +************************* + +.. contents:: + :local: + +Netris Sandbox is a ready-to-use environment for testing Netris automatic NetOps. +We have pre-created some example services for you, details of which can be found in the :ref:`"Provided Example Configurations"` document. Feel free to view, edit, delete, and create new services. In case of any questions, reach out to us on `Slack `__. + +The credentials for the sandbox have been provided to you via email in response to your Sandbox request. + +The Sandbox environment includes: + + +* :ref:`Netris Controller`: A cloud-hosted Netris Controller, loaded with examples. +* :ref:`Switching fabric`: Two spine switches and four leaf switches, all operated by Netris. +* :ref:`SoftGates`: Two SoftGate gateway nodes for border routing, L4 Load Balancing, site-to-site VPN, and NAT. Both operated by Netris. +* **Linux servers**: Five Linux servers, with root access where you can run any applications for your tests. 
+* **Kubernetes cluster**: A 3-node Kubernetes cluster that you can integrate with the Netris Controller; feel free to deploy any applications for your tests.
+* **ISP**: Internet upstream with IRIS ISP, providing the sandbox Internet connectivity with real-world routable public IP addresses.
+
+.. _s9-topology:
+
+Topology diagram
+================
+
+.. image:: /images/sandbox_topology_new.png
+    :align: center
+    :alt: Sandbox Topology
+    :target: ../../_images/sandbox_topology_new.png
+
+
+Netris Controller
+=================
+https://sandbox9.netris.io
+
+Linux servers
+=============
+
+Example pre-configured Netris services:
+    * **srv01-nyc**, **srv02-nyc**, **srv03-nyc** & **Netris Controller** - are consuming the :ref:`"ROH (Routing on the Host)"` Netris example service, see **Services → ROH**.
+    * **srv01-nyc**, **srv02-nyc** - can be configured with :ref:`"L3 Load Balancer (Anycast LB)"`, see **Services → L3 Load Balancer**.
+    * **srv04-nyc**, **srv05-nyc**, **srv06-nyc**, **srv07-nyc** & **srv08-nyc** - are consuming the :ref:`"V-Net (routed VXLAN)"` Netris service, see **Services → V-Net**.
+    * **srv06-nyc**, **srv07-nyc**, **srv08-nyc** - are members of a 3-node Kubernetes cluster, and the K8s API servers are behind an :ref:`"L4 Load Balancer (L4LB)"`, see **Services → L4 Load Balancer**.
+
+
+**Accessing the Linux servers:**
+
+.. code-block:: shell-session
+
+   srv01-nyc: ssh demo@166.88.17.22 -p 30061
+   srv02-nyc: ssh demo@166.88.17.22 -p 30062
+   srv03-nyc: ssh demo@166.88.17.22 -p 30063
+   srv04-nyc: ssh demo@166.88.17.22 -p 30064
+   srv05-nyc: ssh demo@166.88.17.22 -p 30065
+
+
+Kubernetes cluster
+==================
+This Sandbox provides an up and running 3-node Kubernetes cluster. You can integrate it with the Netris Controller by installing the **netris-operator**. Step-by-step instructions are included in the :ref:`"Learn Netris operations with Kubernetes"` document.
+
+
+Upstream ISP
+============
+This Sandbox also provides an upstream ISP service with real-world Internet routing configured through :ref:`"BGP"`.
+There are two pre-configured examples under **Network → E-BGP**, one using IPv4 and the other using IPv6, which advertise the public IP subnets belonging to the Sandbox to the upstream ISP IRIS.
+
+ISP settings:
+
+.. code-block:: shell-session
+
+   (pre-configured examples)
+   Name: iris-isp1-ipv4-example
+   BGP Router: SoftGate1
+   Switch Port: swp16@sw01-nyc
+   Neighbor AS: 65007
+   VLAN ID: 1091
+   Local Address: 50.117.59.114/30
+   Remote Address: 50.117.59.113/30
+   Prefix List Inbound: permit 0.0.0.0/0
+   Prefix List Outbound: permit 50.117.59.192/28 le 32
+
+   Name: iris-isp1-ipv6-example
+   BGP Router: SoftGate1
+   Switch Port: swp16@sw01-nyc
+   Neighbor AS: 65007
+   VLAN ID: 1091
+   Local Address: 2607:f358:11:ffc0::13/127
+   Remote Address: 2607:f358:11:ffc0::12/127
+   Prefix List Inbound: permit ::/0
+   Prefix List Outbound: permit 2607:f358:11:ffc9::/64
+
+   (configurable by you)
+   BGP Router: SoftGate2
+   Switch Port: swp16@sw02-nyc
+   Neighbor AS: 65007
+   VLAN ID: 1092
+   Local Address: 50.117.59.118/30
+   Remote Address: 50.117.59.117/30
+   Prefix List Inbound: permit 0.0.0.0/0
+   Prefix List Outbound: permit 50.117.59.192/28 le 32
+
+
+Networks Used
+=============
+Allocations and subnets are defined under :ref:`"IPAM"`, see **Network → IPAM**.
+
+.. 
code-block:: shell-session + + | MANAGEMENT Allocation: 10.254.45.0/24 + |___ MANAGEMENT Subnet: 10.254.45.0/24 + + | LOOPBACK Allocation: 10.254.46.0/24 + |___ LOOPBACK Subnet: 10.254.46.0/24 + + | ROH Allocation: 192.168.44.0/24 + |___ ROH Subnet: 192.168.44.0/24 + + | EXAMPLE Allocation: 192.168.45.0/24 + |___ EXAMPLE Subnet: 192.168.45.0/24 + + | CUSTOMER Allocation: 192.168.46.0/24 + |___ CUSTOMER Subnet: 192.168.46.0/24 + + | K8s Allocation: 192.168.110.0/24 + |___ K8s Subnet: 192.168.110.0/24 + + | PUBLIC IPv4 Allocation: 50.117.59.192/28 + |___ PUBLIC LOOPBACK Subnet: 50.117.59.192/30 + |___ NAT Subnet: 50.117.59.196/30 + |___ L3 LOAD BALANCER Subnet: 50.117.59.200/30 + |___ L4 LOAD BALANCER Subnet: 50.117.59.204/30 + + | EXAMPLE IPv6 Allocation: 2607:f358:11:ffc9::/64 + |___ EXAMPLE IPv6 Subnet: 2607:f358:11:ffc9::/64 + \ No newline at end of file