Be selective on FIP and port deletion. (#712)
* Be selective on FIP and port deletion.
* All ports we create are in 10.1.0.0/24, so only delete those ports.
* Floating IPs that are attached won't be deleted.
* Comments for the IP range.
   This avoids a developer changing these independently.
* Cosmetic: Fix cleanup.py help. Name subnet in test.
* Re-enable link to gx-scs health mon.
* Pass --ipaddr filter into cleanup script.
   The cleanup job now takes in filtering args specifically for ports:
   - If a port is connected, it won't be deleted (no change)
   - If the port has a name with matching prefix, it will always be deleted
     (new)
   - If the port has a name that does not match the prefix, it won't be
     deleted (new)
   - If it has no name and no IP filters are passed, we delete it (new)
   - If it has no name and there are IP filters, then we try to match
     them; if one matches, we delete it (enhanced to allow several IPs)
  Use this in pre_cloud.yaml when calling cleanup.py, update comment in
  entropy-check.py accordingly.
* Simplify and fix ipaddr option parsing.
  Thanks, @mbuechse, my brain failed to understand python correctly here.

Signed-off-by: Kurt Garloff <[email protected]>
Co-authored-by: Matthias Büchse <[email protected]>
garloff and mbuechse committed Aug 26, 2024
1 parent 4caa5f9 commit 25e6707
Showing 4 changed files with 47 additions and 11 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -9,7 +9,7 @@ This is a list of clouds that we test on a nightly basis against our `scs-compat

| Name | Description | Operator | _SCS-compatible IaaS_ Compliance | HealthMon |
| -------------------------------------------------------------------------------------------------------------- | ------------------------------------------------- | ----------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------: |
| [gx-scs](https://github.com/SovereignCloudStack/docs/blob/main/community/cloud-resources/plusserver-gx-scs.md) | Dev environment provided for SCS & GAIA-X context | plusserver GmbH | [![Compliance Status](https://img.shields.io/github/actions/workflow/status/SovereignCloudStack/standards/check-gx-scs-v4.yml?label=v4)](https://github.com/SovereignCloudStack/standards/actions/workflows/check-gx-scs-v4.yml) | broken <!--[HM](https://health.gx-scs.sovereignit.cloud:3000/)--> |
| [gx-scs](https://github.com/SovereignCloudStack/docs/blob/main/community/cloud-resources/plusserver-gx-scs.md) | Dev environment provided for SCS & GAIA-X context | plusserver GmbH | [![Compliance Status](https://img.shields.io/github/actions/workflow/status/SovereignCloudStack/standards/check-gx-scs-v4.yml?label=v4)](https://github.com/SovereignCloudStack/standards/actions/workflows/check-gx-scs-v4.yml) | [HM](https://health.gx-scs.sovereignit.cloud:3000/) |
| [pluscloud open](https://www.plusserver.com/en/products/pluscloud-open)<br />- prod1<br />- prod2<br />- prod3<br />- prod4 | Public cloud for customers (4 regions) | plusserver GmbH | &nbsp;<br />- prod1 [![Compliance Status](https://img.shields.io/github/actions/workflow/status/SovereignCloudStack/standards/check-pco-prod1-v4.yml?label=v4)](https://github.com/SovereignCloudStack/standards/actions/workflows/check-pco-prod1-v4.yml)<br />- prod2 [![Compliance Status](https://img.shields.io/github/actions/workflow/status/SovereignCloudStack/standards/check-pco-prod2-v4.yml?label=v4)](https://github.com/SovereignCloudStack/standards/actions/workflows/check-pco-prod2-v4.yml)<br />- prod3 [![Compliance Status](https://img.shields.io/github/actions/workflow/status/SovereignCloudStack/standards/check-pco-prod3-v4.yml?label=v4)](https://github.com/SovereignCloudStack/standards/actions/workflows/check-pco-prod3-v4.yml)<br />- prod4 [![Compliance Status](https://img.shields.io/github/actions/workflow/status/SovereignCloudStack/standards/check-pco-prod4-v4.yml?label=v4)](https://github.com/SovereignCloudStack/standards/actions/workflows/check-pco-prod4-v4.yml) | &nbsp;<br />[HM1](https://health.prod1.plusserver.sovereignit.cloud:3000/d/9ltTEmlnk/openstack-health-monitor2?orgId=1&var-mycloud=plus-pco)<br />[HM2](https://health.prod1.plusserver.sovereignit.cloud:3000/d/9ltTEmlnk/openstack-health-monitor2?orgId=1&var-mycloud=plus-prod2)<br />[HM3](https://health.prod1.plusserver.sovereignit.cloud:3000/d/9ltTEmlnk/openstack-health-monitor2?orgId=1&var-mycloud=plus-prod3)<br />[HM4](https://health.prod1.plusserver.sovereignit.cloud:3000/d/9ltTEmlnk/openstack-health-monitor2?orgId=1&var-mycloud=plus-prod4) |
| [Wavestack](https://www.noris.de/wavestack-cloud/) | Public cloud for customers | noris network AG/Wavecon GmbH | [![Compliance Status](https://img.shields.io/github/actions/workflow/status/SovereignCloudStack/standards/check-wavestack-v4.yml?label=v4)](https://github.com/SovereignCloudStack/standards/actions/workflows/check-wavestack-v4.yml) | [HM](https://health.wavestack1.sovereignit.cloud:3000/) |
| [REGIO.cloud](https://regio.digital) | Public cloud for customers | OSISM GmbH | [![Compliance Status](https://img.shields.io/github/actions/workflow/status/SovereignCloudStack/standards/check-regio-a-v4.yml?label=v4)](https://github.com/SovereignCloudStack/standards/actions/workflows/check-regio-a-v4.yml) | broken <!--[HM](https://apimon.services.regio.digital/public-dashboards/17cf094a47404398a5b8e35a4a3968d4?orgId=1&refresh=5m)--> |
48 changes: 40 additions & 8 deletions Tests/cleanup.py
@@ -26,15 +26,18 @@ def print_usage(file=sys.stderr):
    print("""Usage: cleanup.py [options]
This tool cleans the cloud environment CLOUD by removing any resources whose names start with PREFIX.
Options:
[-c/--os-cloud OS_CLOUD] sets cloud environment (default from OS_CLOUD env)
[-i/--prefix PREFIX] sets prefix (default from PREFIX env)
[-c/--os-cloud OS_CLOUD] sets cloud environment (default from OS_CLOUD env)
[-p/--prefix PREFIX] sets prefix to identify resources (default from PREFIX env)
[-i/--ipaddr addr[,addr]] list of IP addresses to identify ports to delete (def: delete all)
    the specified strings will be matched against the start of the addrs
""", end='', file=file)


class Janitor:
    def __init__(self, conn, prefix=""):
    def __init__(self, conn, prefix="", ipfilter=()):
        self.conn = conn
        self.prefix = prefix
        self.ipaddrs = ipfilter

    def disconnect_routers(self):
        logger.debug("disconnect routers")
@@ -75,14 +78,38 @@ def cleanup_subnets(self):
            logger.info(subnet.name)
            self.conn.network.delete_subnet(subnet)

    def port_match(self, port):
        """Determine whether port is to be cleaned up:
        - If it is connected to a VM/LB/...: False
        - If it has a name that starts with the prefix: True
        - If it has a name not matching the prefix filter: False
        - If it has no name and we do not have IP range filters: True
        - Otherwise see if one of the specified IP ranges matches
        """
        if port.device_owner:
            return False
        if port.name.startswith(self.prefix):
            return True
        if port.name:
            return False
        if not self.ipaddrs:
            return True
        for fixed_addr in port.fixed_ips:
            ip_addr = fixed_addr["ip_address"]
            for ipmatch in self.ipaddrs:
                if ip_addr.startswith(ipmatch):
                    logger.debug(f"{ip_addr} matches {ipmatch}")
                    return True
        return False
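
The decision ladder in `port_match` can be exercised with stand-in port objects; a minimal sketch (the `SimpleNamespace` stand-ins and the sample names and addresses are hypothetical, but they mirror the `device_owner`, `name`, and `fixed_ips` attributes that openstacksdk port objects expose):

```python
from types import SimpleNamespace

def port_match(port, prefix="_scs-", ipaddrs=("10.1.0.",)):
    """Stand-alone restatement of Janitor.port_match, for illustration."""
    if port.device_owner:                 # still attached to a VM/LB/router
        return False
    if port.name.startswith(prefix):      # ours by name: delete
        return True
    if port.name:                         # named, but not ours: keep
        return False
    if not ipaddrs:                       # nameless, no IP filter: delete
        return True
    # nameless with filters: delete only if some fixed IP matches a prefix
    return any(f["ip_address"].startswith(m)
               for f in port.fixed_ips for m in ipaddrs)

P = SimpleNamespace
assert not port_match(P(device_owner="compute:nova", name="", fixed_ips=[]))
assert port_match(P(device_owner="", name="_scs-0101-port", fixed_ips=[]))
assert not port_match(P(device_owner="", name="customer-port", fixed_ips=[]))
assert port_match(P(device_owner="", name="",
                    fixed_ips=[{"ip_address": "10.1.0.142"}]))
assert not port_match(P(device_owner="", name="",
                        fixed_ips=[{"ip_address": "192.168.5.7"}]))
```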

    def cleanup_ports(self):
        logger.debug("clean up ports")
        # FIXME: We can't filter for device_owner = '' unfortunately
        ports = list(self.conn.network.ports(status="DOWN"))
        for port in ports:
            if port.device_owner:
            if not self.port_match(port):
                continue
            logger.info(port.id)
            logger.info(f"{port.id}: {port.fixed_ips}")
            self.conn.network.delete_port(port)

    def cleanup_volumes(self):
@@ -148,8 +175,10 @@ def cleanup_floating_ips(self):
        # Note: FIPs have no name, so we might clean up unrelated
        # currently unused FIPs here.
        logger.debug("clean up floating ips")
        floating_ips = list(self.conn.search_floating_ips(filters={"attached": False}))
        floating_ips = list(self.conn.search_floating_ips())
        for floating_ip in floating_ips:
            if floating_ip["port_id"]:
                continue
            logger.info(floating_ip.floating_ip_address)
            self.conn.delete_floating_ip(floating_ip.id)
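
The effect of the change is to list all FIPs and skip attached ones client-side rather than relying on the `attached` filter; a sketch with dict-shaped floating IPs (the sample data is made up, drawn from the 203.0.113.0/24 documentation range; real objects come from openstacksdk and expose `port_id` the same way):

```python
floating_ips = [
    {"id": "a", "floating_ip_address": "203.0.113.10", "port_id": "p-1"},
    {"id": "b", "floating_ip_address": "203.0.113.11", "port_id": None},
    {"id": "c", "floating_ip_address": "203.0.113.12", "port_id": ""},
]

# Attached FIPs (truthy port_id) are kept; only detached ones get deleted.
to_delete = [fip["id"] for fip in floating_ips if not fip["port_id"]]
print(to_delete)  # ['b', 'c']
```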

@@ -176,9 +205,10 @@ def main(argv):

    prefix = os.environ.get("PREFIX", None)
    cloud = os.environ.get("OS_CLOUD")
    ipaddrs = []

    try:
        opts, args = getopt.gnu_getopt(argv, "c:p:h", ["os-cloud=", "prefix=", "help"])
        opts, args = getopt.gnu_getopt(argv, "c:p:i:h", ["os-cloud=", "prefix=", "ipaddr=", "help"])
    except getopt.GetoptError as exc:
        logger.critical(f"{exc}")
        print_usage()
Expand All @@ -192,6 +222,8 @@ def main(argv):
            prefix = opt[1]
        if opt[0] == "-c" or opt[0] == "--os-cloud":
            cloud = opt[1]
        if opt[0] == "-i" or opt[0] == "--ipaddr":
            ipaddrs = opt[1].split(",")
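
The fixed option parsing can be checked in isolation by feeding a sample argv through `getopt` the same way `main` does (the cloud name and the second filter prefix here are made-up examples):

```python
import getopt

argv = ["-c", "mycloud", "--prefix", "_scs-", "-i", "10.1.0.,192.168.42."]
opts, args = getopt.gnu_getopt(argv, "c:p:i:h",
                               ["os-cloud=", "prefix=", "ipaddr=", "help"])

ipaddrs = []
for opt in opts:
    if opt[0] in ("-i", "--ipaddr"):
        # a single comma-separated argument yields a list of prefixes
        ipaddrs = opt[1].split(",")

print(ipaddrs)  # ['10.1.0.', '192.168.42.']
```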

    if prefix is None:
        # check for None, because supplying --prefix '' shall be permitted
@@ -203,7 +235,7 @@ def main(argv):
        return 1

    with openstack.connect(cloud=cloud) as conn:
        Janitor(conn, prefix).cleanup()
        Janitor(conn, prefix, ipaddrs).cleanup()


if __name__ == "__main__":
6 changes: 5 additions & 1 deletion Tests/iaas/entropy/entropy-check.py
@@ -33,6 +33,7 @@
# prefix ephemeral resources with '_scs-' to rule out any confusion with important resources
# (this enables us to automatically dispose of any lingering resources should this script be killed)
NETWORK_NAME = "_scs-0101-net"
SUBNET_NAME = "_scs-0101-subnet"
ROUTER_NAME = "_scs-0101-router"
SERVER_NAME = "_scs-0101-server"
SECURITY_GROUP_NAME = "_scs-0101-group"
@@ -223,16 +224,19 @@ def prepare(self):

        # create network, subnet, router, connect everything
        self.network = self.conn.create_network(NETWORK_NAME)
        # Note: The IP range/cidr here needs to match the one in the pre_cloud.yaml
        # playbook calling cleanup.py
        self.subnet = self.conn.create_subnet(
            self.network.id,
            cidr="10.1.0.0/24",
            gateway_ip="10.1.0.1",
            enable_dhcp=True,
            allocation_pools=[{
                "start": "10.1.0.100",
                "end": "10.1.0.200",
                "end": "10.1.0.199",
            }],
            dns_nameservers=["9.9.9.9"],
            name=SUBNET_NAME,
        )
        external_networks = list(self.conn.network.networks(is_router_external=True))
        if not external_networks:
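
The shrunken allocation pool (`.100`–`.199`) can be sanity-checked against the subnet with the standard `ipaddress` module; a quick sketch:

```python
import ipaddress

net = ipaddress.ip_network("10.1.0.0/24")
start = ipaddress.ip_address("10.1.0.100")
end = ipaddress.ip_address("10.1.0.199")

# The pool lies inside the subnet and no longer touches 10.1.0.200,
# while the gateway 10.1.0.1 stays outside the pool.
assert start in net and end in net
assert int(end) - int(start) + 1 == 100  # exactly 100 assignable addresses
```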
2 changes: 1 addition & 1 deletion playbooks/pre_cloud.yaml
@@ -27,5 +27,5 @@
mode: "0600"

    - name: Clean up any lingering resources from previous run
      ansible.builtin.shell: python3 ~/Tests/cleanup.py -c {{ cloud }} --prefix _scs-
      ansible.builtin.shell: python3 ~/Tests/cleanup.py -c {{ cloud }} --prefix _scs- --ipaddr 10.1.0.
      changed_when: true
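
Because `--ipaddr` does a plain string-prefix match, the `10.1.0.` argument covers exactly the addresses the `10.1.0.0/24` test subnet can hand out; a quick check using the standard `ipaddress` module:

```python
import ipaddress

net = ipaddress.ip_network("10.1.0.0/24")
# Every host address in the subnet starts with the filter prefix ...
assert all(str(ip).startswith("10.1.0.") for ip in net.hosts())
# ... while addresses from neighbouring ranges do not match it.
assert not "10.1.10.5".startswith("10.1.0.")
```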
