From 2301a8d16597f2dd3291dbad178067235dc44371 Mon Sep 17 00:00:00 2001 From: Date: Wed, 6 Dec 2023 15:09:16 +0100 Subject: [PATCH] Deployed 8fb35ab with MkDocs version: 1.5.3 --- index.html | 2 +- search/search_index.json | 2 +- sitemap.xml.gz | Bin 127 -> 127 bytes 3 files changed, 2 insertions(+), 2 deletions(-) diff --git a/index.html b/index.html index c2cb09d..71e5db5 100644 --- a/index.html +++ b/index.html @@ -453,7 +453,7 @@

Installing Auto-Verify

In some cases, Anaconda can fail to install environments where Miniconda does not.

After Miniconda is installed, set up Auto-Verify by running the following commands:

-
> conda create -n python=3.10 auto-verify
+
> conda create -n auto-verify python=3.10
 > conda activate auto-verify
 > pip install auto-verify
 
diff --git a/search/search_index.json b/search/search_index.json index 9f4afc2..50c017f 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Auto-Verify","text":"

Auto-Verify is a tool that provides interfaces, parameter spaces and installation managers for different neural network verification tools.

"},{"location":"#getting-started","title":"Getting Started","text":"

Warning

Auto-Verify has only been tested on Linux and will not work on macOS or Windows.

"},{"location":"#installing-auto-verify","title":"Installing Auto-Verify","text":"

First, install Miniconda. Miniconda is used to manage the environments of the different verification tools; other environment managers will not work.

Warning

In some cases, Anaconda can fail to install environments where Miniconda does not.

After Miniconda is installed, set up Auto-Verify by running the following commands:

> conda create -n auto-verify python=3.10\n> conda activate auto-verify\n> pip install auto-verify\n

To check if the installation was successful, run:

> auto-verify --version\n
"},{"location":"#installing-verification-tools","title":"Installing Verification Tools","text":"

Currently, Auto-Verify supports the following verifiers:

  • nnenum (Stanley Bak)
  • AB-Crown (Zhang et al.)
  • VeriNet (VAS Group)
  • Oval-BaB (OVAL Research Group)

These verifiers can be installed as follows:

> auto-verify install nnenum\n> auto-verify install abcrown\n> auto-verify install verinet\n> auto-verify install ovalbab\n

To uninstall a verifier, run:

> auto-verify uninstall [verifier]\n
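For example, to remove the nnenum installation:

> auto-verify uninstall nnenum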
"},{"location":"api/","title":"API","text":""},{"location":"api/#verifier","title":"Verifier","text":"

Base verifier class.

Classes for data about verification.

"},{"location":"api/#autoverify.verifier.verifier.CompleteVerifier","title":"CompleteVerifier","text":"

Bases: Verifier

Abstract class for complete verifiers.

Source code in autoverify/verifier/verifier.py
class CompleteVerifier(Verifier):\n\"\"\"Abstract class for complete verifiers.\"\"\"\ndef verify_property(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Configuration | Path | None = None,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> CompleteVerificationResult:\n\"\"\"Verify the property on the network.\n        Runs the verifier and feeds the network/property instance as input.\n        Complete verification will result in one of the following three\n        possibilities: `SAT`, `UNSAT`, `TIMEOUT`.\n        Args:\n            network: The `Path` to the network in `.onnx` format.\n            property: The `Path` to the property in `.vnnlib` format.\n            config: The configuration of the verification tool to be used. If\n                `None` is passed, the default configuration of the verification\n                tool will be used.\n            timeout: The maximum number of seconds that can be spent on the\n                verification query.\n        Returns:\n            CompleteVerificationResult: A `Result` object containing information\n                about the verification attempt. TODO: Link docs or something\n        \"\"\"\nnetwork, property = network.resolve(), property.resolve()\nself._check_instance(network, property)\nif config is None:\nconfig = self.default_config\n# Tools use different configuration formats and methods, so we let\n# them do some initialization here\nconfig = self._init_config(network, property, config)\nrun_cmd, output_file = self._get_run_cmd(\nnetwork, property, config=config, timeout=timeout\n)\noutcome = self._run_verification(\nrun_cmd,\nresult_file=output_file,\ntimeout=timeout,\n)\n# Shutting down after timeout may take some time, so we set the took\n# value to the actual timeout\nif outcome.result == \"TIMEOUT\":\noutcome.took = timeout\n# TODO: What is the point of wrapping in Ok/Err here\nreturn Ok(outcome) if outcome.result != \"ERR\" else Err(outcome)\ndef verify_instance(\nself,\ninstance: VerificationInstance,\n*,\nconfig: Configuration | Path | None = None,\n) -> CompleteVerificationResult:\n\"\"\"See the `verify_property` docstring.\"\"\"\nreturn self.verify_property(\ninstance.network,\ninstance.property,\ntimeout=instance.timeout,\nconfig=config,\n)\ndef verify_batch(\nself,\ninstances: Iterable[VerificationInstance],\n*,\nconfig: Configuration | Path | None = None,\n) -> list[CompleteVerificationResult]:\n\"\"\"Verify a batch. 
Not yet implemented.\"\"\"\nfor instance in instances:\nself._check_instance(instance.network, instance.property)\nif config is None:\nconfig = self.default_config\nreturn self._verify_batch(\ninstances,\nconfig=config,\n)\n@abstractmethod\ndef _verify_batch(\nself,\ninstances: Iterable[VerificationInstance],\n*,\nconfig: Configuration | Path | None,\n) -> list[CompleteVerificationResult]:\nraise NotImplementedError\ndef _allocate_run_cmd(\nself,\nrun_cmd: str,\ncontexts: list[ContextManager[None]],\n) -> str:\n# TODO: GPU allocation\nassert self._cpu_gpu_allocation is not None\ntaskset_cmd = taskset_cpu_range(self._cpu_gpu_allocation[0:2])\nlines = []\ngpu_dev = self._cpu_gpu_allocation[2]\ngpus = nvidia_gpu_count()\nif gpu_dev > gpus - 1:\nraise ValueError(\nf\"Asked for GPU {gpu_dev} (0-indexed), \"\nf\"but only found {gpus} GPU(s)\"\n)\nif gpu_dev >= 0:\ncontexts.append(environment(CUDA_VISIBLE_DEVICES=str(gpu_dev)))\nfor line in run_cmd.splitlines():\nline = line.lstrip()\nif len(line) == 0 or line.isspace():\ncontinue\n# HACK: Why does taskset not work with `source` and `conda`?\nif line.startswith(\"source\") or line.startswith(\"conda\"):\nlines.append(line)\nelse:\nlines.append(taskset_cmd + \" \" + line)\nreturn \"\\n\".join(lines)\ndef set_timeout_event(self):\n\"\"\"Signal that the process has timed out.\"\"\"\ntry:\nself._timeout_event.set()  # type: ignore\nexcept AttributeError:\npass\ndef _run_verification(\nself,\nrun_cmd: str,\n*,\nresult_file: Path | None = None,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> CompleteVerificationData:\ncontexts = self.contexts or []\noutput_lines: list[str] = []\nresult: str = \"\"\nif self._cpu_gpu_allocation is not None:\nrun_cmd = self._allocate_run_cmd(run_cmd, contexts)\nwith ExitStack() as stack:\nfor context in contexts:\nstack.enter_context(context)\nprocess = subprocess.Popen(\nrun_cmd,\nexecutable=\"/bin/bash\",\nstdout=subprocess.PIPE,\nstderr=subprocess.STDOUT,\nshell=True,\nuniversal_newlines=True,\npreexec_fn=os.setsid,\n)\nbefore_t = time.time()\nself._timeout_event: threading.Event | None = threading.Event()\ndef _terminate(timeout_sec):\nassert self._timeout_event\non_time = self._timeout_event.wait(timeout_sec)\nif not on_time:\nglobal result\nresult = \"TIMEOUT\"  # type: ignore\nif pid_exists(process.pid):\nos.killpg(os.getpgid(process.pid), signal.SIGTERM)\nt = threading.Thread(target=_terminate, args=[timeout])\nt.start()\nassert process.stdout\nfor line in iter(process.stdout.readline, \"\"):\noutput_lines.append(line)\nprocess.stdout.close()\nreturn_code = process.wait()\ntook_t = time.time() - before_t\nself._timeout_event.set()\noutput_str = \"\".join(output_lines)\ncounter_example: str | None = None\nif result != \"TIMEOUT\":\nif return_code > 0:\nresult = \"ERR\"\nelse:\nresult, counter_example = self._parse_result(\noutput_str, result_file\n)\nself._timeout_event = None\nreturn CompleteVerificationData(\nresult,  # type: ignore\ntook_t,\ncounter_example,\n\"\",  # TODO: Remove err field; its piped it to stdout\noutput_str,\n)\nraise RuntimeError(\"Exception during handling of verification\")\n
"},{"location":"api/#autoverify.verifier.verifier.CompleteVerifier.set_timeout_event","title":"set_timeout_event()","text":"

Signal that the process has timed out.

Source code in autoverify/verifier/verifier.py
def set_timeout_event(self):\n\"\"\"Signal that the process has timed out.\"\"\"\ntry:\nself._timeout_event.set()  # type: ignore\nexcept AttributeError:\npass\n
"},{"location":"api/#autoverify.verifier.verifier.CompleteVerifier.verify_batch","title":"verify_batch(instances, *, config=None)","text":"

Verify a batch. Not yet implemented.

Source code in autoverify/verifier/verifier.py
def verify_batch(\nself,\ninstances: Iterable[VerificationInstance],\n*,\nconfig: Configuration | Path | None = None,\n) -> list[CompleteVerificationResult]:\n\"\"\"Verify a batch. Not yet implemented.\"\"\"\nfor instance in instances:\nself._check_instance(instance.network, instance.property)\nif config is None:\nconfig = self.default_config\nreturn self._verify_batch(\ninstances,\nconfig=config,\n)\n
"},{"location":"api/#autoverify.verifier.verifier.CompleteVerifier.verify_instance","title":"verify_instance(instance, *, config=None)","text":"

See the verify_property docstring.

Source code in autoverify/verifier/verifier.py
def verify_instance(\nself,\ninstance: VerificationInstance,\n*,\nconfig: Configuration | Path | None = None,\n) -> CompleteVerificationResult:\n\"\"\"See the `verify_property` docstring.\"\"\"\nreturn self.verify_property(\ninstance.network,\ninstance.property,\ntimeout=instance.timeout,\nconfig=config,\n)\n
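For illustration, a minimal sketch of calling verify_instance; the import path and constructor field names of VerificationInstance are assumptions, as they are not shown on this page:

from pathlib import Path

from autoverify.util.verification_instance import VerificationInstance  # import path assumed
from autoverify.verifier.complete.nnenum import Nnenum  # path per the Nnenum anchor below

verifier = Nnenum()
instance = VerificationInstance(
    network=Path("network.onnx"),      # hypothetical files
    property=Path("property.vnnlib"),
    timeout=300,
)
result = verifier.verify_instance(instance)  # delegates to verify_property with the instance's timeout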
"},{"location":"api/#autoverify.verifier.verifier.CompleteVerifier.verify_property","title":"verify_property(network, property, *, config=None, timeout=DEFAULT_VERIFICATION_TIMEOUT_SEC)","text":"

Verify the property on the network.

Runs the verifier and feeds the network/property instance as input. Complete verification will result in one of the following three possibilities: SAT, UNSAT, TIMEOUT.

Parameters:

Name Type Description Default network Path

The Path to the network in .onnx format.

required property Path

The Path to the property in .vnnlib format.

required config Configuration | Path | None

The configuration of the verification tool to be used. If None is passed, the default configuration of the verification tool will be used.

None timeout int

The maximum number of seconds that can be spent on the verification query.

DEFAULT_VERIFICATION_TIMEOUT_SEC

Returns:

Name Type Description CompleteVerificationResult CompleteVerificationResult

A Result object containing information about the verification attempt. TODO: Link docs or something

Source code in autoverify/verifier/verifier.py
def verify_property(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Configuration | Path | None = None,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> CompleteVerificationResult:\n\"\"\"Verify the property on the network.\n    Runs the verifier and feeds the network/property instance as input.\n    Complete verification will result in one of the following three\n    possibilities: `SAT`, `UNSAT`, `TIMEOUT`.\n    Args:\n        network: The `Path` to the network in `.onnx` format.\n        property: The `Path` to the property in `.vnnlib` format.\n        config: The configuration of the verification tool to be used. If\n            `None` is passed, the default configuration of the verification\n            tool will be used.\n        timeout: The maximum number of seconds that can be spent on the\n            verification query.\n    Returns:\n        CompleteVerificationResult: A `Result` object containing information\n            about the verification attempt. TODO: Link docs or something\n    \"\"\"\nnetwork, property = network.resolve(), property.resolve()\nself._check_instance(network, property)\nif config is None:\nconfig = self.default_config\n# Tools use different configuration formats and methods, so we let\n# them do some initialization here\nconfig = self._init_config(network, property, config)\nrun_cmd, output_file = self._get_run_cmd(\nnetwork, property, config=config, timeout=timeout\n)\noutcome = self._run_verification(\nrun_cmd,\nresult_file=output_file,\ntimeout=timeout,\n)\n# Shutting down after timeout may take some time, so we set the took\n# value to the actual timeout\nif outcome.result == \"TIMEOUT\":\noutcome.took = timeout\n# TODO: What is the point of wrapping in Ok/Err here\nreturn Ok(outcome) if outcome.result != \"ERR\" else Err(outcome)\n
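A usage sketch, assuming Ok/Err come from the result package (as the isinstance checks elsewhere in the source on this page suggest):

from pathlib import Path

from result import Ok  # assumption: the `result` package's Ok/Err wrappers
from autoverify.verifier.complete.nnenum import Nnenum  # path per the Nnenum anchor below

verifier = Nnenum(cpu_gpu_allocation=(0, 3, -1))  # cores 0-3; GPU index -1, as nnenum is CPU-only
res = verifier.verify_property(
    Path("network.onnx"),      # hypothetical files
    Path("property.vnnlib"),
    timeout=600,
)
# Ok and Err both carry a CompleteVerificationData payload
outcome = res.unwrap() if isinstance(res, Ok) else res.unwrap_err()
print(outcome.result, outcome.took)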
"},{"location":"api/#autoverify.verifier.verifier.Verifier","title":"Verifier","text":"

Bases: ABC

Abstract class to represent a verifier tool.

Source code in autoverify/verifier/verifier.py
class Verifier(ABC):\n\"\"\"Abstract class to represent a verifier tool.\"\"\"\n# TODO: GPU Mode attribute\ndef __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\n):\n\"\"\"New instance. This is used with super calls.\"\"\"\nself._batch_size = batch_size\nself._cpu_gpu_allocation = cpu_gpu_allocation\ndef get_init_attributes(self) -> dict[str, Any]:\n\"\"\"Get attributes provided during initialization of the verifier.\"\"\"\nreturn {\n\"batch_size\": self._batch_size,\n}\n@property\n@abstractmethod\ndef name(self) -> str:\n\"\"\"Unique verifier name.\"\"\"\nraise NotImplementedError\n@property\n@abstractmethod\ndef config_space(self) -> ConfigurationSpace:\n\"\"\"Configuration space to sample from.\"\"\"\nraise NotImplementedError\n@property\n@abstractmethod\ndef contexts(self) -> list[ContextManager[None]] | None:\n\"\"\"Contexts to run the verification in.\"\"\"\nraise NotImplementedError\n@property\ndef tool_path(self) -> Path:\n\"\"\"The path where the verifier is installed.\"\"\"\ntool_path = VERIFIER_DIR / self.name / TOOL_DIR_NAME\nif not tool_path.exists():\nraise FileNotFoundError(\nf\"Could not find installation for tool {self.name}\"\n)\nreturn Path(tool_path)  # mypy complains tool_path is any\n@property\ndef conda_env_name(self) -> str:\n\"\"\"The conda environment name associated with the verifier.\"\"\"\nreturn get_verifier_conda_env_name(self.name)\n@property\ndef conda_lib_path(self) -> Path:\nreturn get_conda_env_lib_path(self.conda_env_name)\n@property\ndef default_config(self) -> Configuration:\n\"\"\"Return the default configuration.\"\"\"\nreturn self.config_space.get_default_configuration()\n@abstractmethod\ndef _get_run_cmd(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Any,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> tuple[str, Path | None]:\n\"\"\"Get the cli command to run the verification.\"\"\"\nraise NotImplementedError\n@abstractmethod\ndef _parse_result(\nself,\noutput: str,\nresult_file: Path | None,\n) -> tuple[VerificationResultString, str | None]:\n\"\"\"Parse the output to get the result.\"\"\"\nraise NotImplementedError\ndef _init_config(\nself,\nnetwork: Path,\nproperty: Path,\nconfig: Configuration | Path,\n) -> Any:\n\"\"\"Init the config, return type that is needed.\"\"\"\nreturn config\n# TODO: Overload like in ConfigSpace to distinguish between return types\ndef sample_configuration(\nself, *, size: int = 1\n) -> Configuration | list[Configuration]:\n\"\"\"Sample one or more configurations.\n        Args:\n            size: The number of configurations to sample.\n        Returns:\n            Configuration | list[Configuration]: The sampled configuration(s).\n        \"\"\"\nreturn self.config_space.sample_configuration(size=size)\n@staticmethod\ndef is_same_config(config1: Any, config2: Any) -> bool:\n\"\"\"Check if two configs are the same.\"\"\"\nraise NotImplementedError\n# TODO: Make this a function in util/\n@staticmethod\ndef _check_instance(network: Path, property: Path):\nif not check_file_extension(network, \".onnx\"):\nraise ValueError(\"Network should be in onnx format\")\nif not check_file_extension(property, \".vnnlib\"):\nraise ValueError(\"Property should be in vnnlib format\")\n
"},{"location":"api/#autoverify.verifier.verifier.Verifier.conda_env_name","title":"conda_env_name: str property","text":"

The conda environment name associated with the verifier.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.config_space","title":"config_space: ConfigurationSpace abstractmethod property","text":"

Configuration space to sample from.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.contexts","title":"contexts: list[ContextManager[None]] | None abstractmethod property","text":"

Contexts to run the verification in.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.default_config","title":"default_config: Configuration property","text":"

Return the default configuration.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.name","title":"name: str abstractmethod property","text":"

Unique verifier name.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.tool_path","title":"tool_path: Path property","text":"

The path where the verifier is installed.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.__init__","title":"__init__(batch_size=512, cpu_gpu_allocation=None)","text":"

New instance. This is used with super calls.

Source code in autoverify/verifier/verifier.py
def __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\n):\n\"\"\"New instance. This is used with super calls.\"\"\"\nself._batch_size = batch_size\nself._cpu_gpu_allocation = cpu_gpu_allocation\n
"},{"location":"api/#autoverify.verifier.verifier.Verifier.get_init_attributes","title":"get_init_attributes()","text":"

Get attributes provided during initialization of the verifier.

Source code in autoverify/verifier/verifier.py
def get_init_attributes(self) -> dict[str, Any]:\n\"\"\"Get attributes provided during initialization of the verifier.\"\"\"\nreturn {\n\"batch_size\": self._batch_size,\n}\n
"},{"location":"api/#autoverify.verifier.verifier.Verifier.is_same_config","title":"is_same_config(config1, config2) staticmethod","text":"

Check if two configs are the same.

Source code in autoverify/verifier/verifier.py
@staticmethod\ndef is_same_config(config1: Any, config2: Any) -> bool:\n\"\"\"Check if two configs are the same.\"\"\"\nraise NotImplementedError\n
"},{"location":"api/#autoverify.verifier.verifier.Verifier.sample_configuration","title":"sample_configuration(*, size=1)","text":"

Sample one or more configurations.

Parameters:

Name Type Description Default size int

The number of configurations to sample.

1

Returns:

Type Description Configuration | list[Configuration]

Configuration | list[Configuration]: The sampled configuration(s).

Source code in autoverify/verifier/verifier.py
def sample_configuration(\nself, *, size: int = 1\n) -> Configuration | list[Configuration]:\n\"\"\"Sample one or more configurations.\n    Args:\n        size: The number of configurations to sample.\n    Returns:\n        Configuration | list[Configuration]: The sampled configuration(s).\n    \"\"\"\nreturn self.config_space.sample_configuration(size=size)\n
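For example, a sketch using the nnenum verifier documented below:

from autoverify.verifier.complete.nnenum import Nnenum  # path per the Nnenum anchor below

verifier = Nnenum()
single = verifier.sample_configuration()       # one Configuration
batch = verifier.sample_configuration(size=5)  # a list of five Configurations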
"},{"location":"api/#autoverify.verifier.verification_result.CompleteVerificationData","title":"CompleteVerificationData dataclass","text":"

Class holding data about a verification run.

Attributes:

Name Type Description result VerificationResultString

Outcome (e.g. SAT, UNSAT...)

took float

Walltime spent

counter_example str | None

Example that violates property (if SAT)

err str

stderr

stdout str

stdout

Source code in autoverify/verifier/verification_result.py
@dataclass\nclass CompleteVerificationData:\n\"\"\"Class holding data about a verification run.\n    Attributes:\n        result: Outcome (e.g. SAT, UNSAT...)\n        took: Walltime spent\n        counter_example: Example that violates property (if SAT)\n        err: stderr\n        stdout: stdout\n    \"\"\"\nresult: VerificationResultString\ntook: float\ncounter_example: str | None = None\nerr: str = \"\"\nstdout: str = \"\"\n
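A small sketch of constructing and inspecting the dataclass (the counterexample string is a hypothetical placeholder):

from autoverify.verifier.verification_result import CompleteVerificationData

data = CompleteVerificationData(result="SAT", took=12.7, counter_example="(X_0 0.5)")
if data.result == "SAT":
    print(f"Falsified in {data.took:.1f}s: {data.counter_example}")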
"},{"location":"api/#ab-crown","title":"AB-Crown","text":""},{"location":"api/#autoverify.verifier.complete.abcrown.AbCrown","title":"AbCrown","text":"

Bases: CompleteVerifier

AB-Crown.

Source code in autoverify/verifier/complete/abcrown/verifier.py
class AbCrown(CompleteVerifier):\n\"\"\"AB-Crown.\"\"\"\nname: str = \"abcrown\"\nconfig_space: ConfigurationSpace = AbCrownConfigspace\ndef __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\nyaml_override: dict[str, Any] | None = None,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"AB-Crown CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._yaml_override = yaml_override\n@property\ndef contexts(self) -> list[ContextManager[None]]:\n# TODO: Narrow the pkill_match_list patterns further. People may be\n# running scripts called 'abcrown.py'\n# Ideally just keep track of PIDs rather than pkill name matching\nreturn [\ncwd(self.tool_path / \"complete_verifier\"),\npkill_matches([\"python abcrown.py\"]),\nenvironment(\nLD_LIBRARY_PATH=str(\nfind_conda_lib(self.conda_env_name, \"libcudart.so.11.0\")\n)\n),\n]\ndef _parse_result(\nself,\noutput: str,\nresult_file: Path | None,\n) -> tuple[VerificationResultString, str | None]:\nif find_substring(\"Result: sat\", output):\nwith open(str(result_file), \"r\") as f:\ncounter_example = f.read()\nreturn \"SAT\", counter_example\nelif find_substring(\"Result: unsat\", output):\nreturn \"UNSAT\", None\nelif find_substring(\"Result: timeout\", output):\nreturn \"TIMEOUT\", None\nreturn \"TIMEOUT\", None\ndef _get_run_cmd(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Path,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> tuple[str, Path | None]:\nresult_file = Path(tmp_file(\".txt\").name)\nsource_cmd = get_conda_source_cmd(get_conda_path())\nrun_cmd = f\"\"\"\n{\" \".join(source_cmd)}\n        conda activate {self.conda_env_name}\n        python abcrown.py --config {str(config)} \\\n        --results_file {str(result_file)} \\\n        --timeout {str(timeout)}\n        \"\"\"\nreturn run_cmd, result_file\ndef _verify_batch(\nself,\ninstances: Iterable[Any],\n*,\nconfig: Configuration | Path | None,\n) -> list[CompleteVerificationResult]:\n# source_cmd = get_conda_source_cmd()\n# TODO:\nraise NotImplementedError(\"Batch verification not supported yet\")\ndef _init_config(\nself,\nnetwork: Path,\nproperty: Path,\nconfig: Configuration | Path,\n) -> Path:\nif isinstance(config, Configuration):\nyaml_config = AbcrownYamlConfig.from_config(\nconfig,\nnetwork,\nproperty,\nbatch_size=self._batch_size,\nyaml_override=self._yaml_override,\n)\nelif isinstance(config, Path):\nyaml_config = AbcrownYamlConfig.from_yaml(\nconfig,\nnetwork,\nproperty,\nbatch_size=self._batch_size,\nyaml_override=self._yaml_override,\n)\nelse:\nraise ValueError(\"Config should be a Configuration or Path\")\nreturn Path(yaml_config.get_yaml_file_path())\n
"},{"location":"api/#autoverify.verifier.complete.abcrown.AbCrown.__init__","title":"__init__(batch_size=512, cpu_gpu_allocation=None, yaml_override=None)","text":"

New instance.

Source code in autoverify/verifier/complete/abcrown/verifier.py
def __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\nyaml_override: dict[str, Any] | None = None,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"AB-Crown CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._yaml_override = yaml_override\n
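For example, pinning AB-Crown to cores 0-7 and GPU 0 (a sketch; the tuple follows the (first core, last core, GPU index) layout suggested by the allocation code on this page):

from autoverify.verifier.complete.abcrown import AbCrown  # path per the AbCrown anchor above

# AB-Crown has no CPU-only mode yet, so the GPU index must be >= 0
abcrown = AbCrown(batch_size=256, cpu_gpu_allocation=(0, 7, 0))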
"},{"location":"api/#nnenum","title":"nnenum","text":""},{"location":"api/#autoverify.verifier.complete.nnenum.Nnenum","title":"Nnenum","text":"

Bases: CompleteVerifier

Nnenum.

Source code in autoverify/verifier/complete/nnenum/verifier.py
class Nnenum(CompleteVerifier):\n\"\"\"Nnenum.\"\"\"\nname: str = \"nnenum\"\nconfig_space: ConfigurationSpace = NnenumConfigspace\n# HACK: Should not need to instantiate a whole new instance just to\n# change `_use_auto_settings`.\ndef __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\nuse_auto_settings: bool = True,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] >= 0:\nraise ValueError(\"Nnenum does not use a GPU, please set it to -1.\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._use_auto_settings = use_auto_settings\n@property\ndef contexts(self) -> list[ContextManager[None]]:\nreturn [\ncwd(self.tool_path / \"src\"),\nenvironment(OPENBLAS_NUM_THREADS=\"1\", OMP_NUM_THREADS=\"1\"),\npkill_matches([\"python -m nnenum.nnenum\"]),\n]\ndef _parse_result(\nself, _: str, result_file: Path | None\n) -> tuple[VerificationResultString, str | None]:\nwith open(str(result_file), \"r\") as f:\nresult_txt = f.read()\nfirst_line = result_txt.split(\"\\n\", maxsplit=1)[0]\nif first_line == \"unsat\":\nreturn \"UNSAT\", None\nelif first_line == \"sat\":\ncounter_example = self._parse_counter_example(result_txt)\nreturn \"SAT\", counter_example\nelif first_line == \"timeout\":\nreturn \"TIMEOUT\", None\nreturn \"TIMEOUT\", None\ndef _parse_counter_example(self, result_txt: str) -> str:\nreturn result_txt.split(\"\\n\", maxsplit=1)[1]\ndef _get_run_cmd(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: dict[str, Any],\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> tuple[str, Path | None]:\nresult_file = Path(tmp_file(\".txt\").name)\nsource_cmd = get_conda_source_cmd(get_conda_path())\n# In nnenum, settings are normally passed as a one word string\n# over the CLI. 
This word then selects from some pre-defined settings.\n# We want some more control however, so we also make an option to pass\n# a stringified dict of exact settings.\n# The \"none\" value for settings_str is used as a flag that makes\n# nnenum use the dict of exact settings instead.\nsettings_str = \"none\"\nif self._use_auto_settings:\nsettings_str = \"auto\"  # \"auto\" is the default\nconfig = {}\nrun_cmd = f\"\"\"\n{\" \".join(source_cmd)}\n        conda activate {self.conda_env_name}\n        python -m nnenum.nnenum {str(network)} {str(property)} {str(timeout)} \\\n{str(result_file)} \\\n{str(cpu_count())} \\\n{settings_str} \\\n{shlex.quote(str(config))} \\\n        \"\"\"\nreturn run_cmd, result_file\ndef _verify_batch(\nself,\ninstances: Iterable[Any],\n*,\nconfig: Configuration | Path | None,\n) -> list[CompleteVerificationResult]:\n# source_cmd = get_conda_source_cmd()\n# TODO:\nraise NotImplementedError(\"Batch verification not supported yet\")\ndef _init_config(\nself,\nnetwork: Path,\nproperty: Path,\nconfig: Configuration | Path,\n) -> dict[str, Any]:\nif isinstance(config, Path):\nraise ValueError(\"Configuration files for nnenum not supported yet\")\nimport copy\ndict_config = copy.deepcopy(dict(config))\n# HACK: Cant safely evaluate `np.inf`, instead we pass it as a string\n# that is converted back to `np.inf` in the nnenum code.\nif dict_config[\"INF_OVERAPPROX_MIN_GEN_LIMIT\"] is True:\ndict_config[\"OVERAPPROX_MIN_GEN_LIMIT\"] = \"_inf\"\nif dict_config[\"INF_OVERAPPROX_LP_TIMEOUT\"] is True:\ndict_config[\"OVERAPPROX_LP_TIMEOUT\"] = \"_inf\"\ndel dict_config[\"INF_OVERAPPROX_LP_TIMEOUT\"]\ndel dict_config[\"INF_OVERAPPROX_MIN_GEN_LIMIT\"]\nreturn dict_config\n@staticmethod\ndef is_same_config(\nconfig1: Configuration | str, config2: Configuration | str\n) -> bool:\n\"\"\"Return if two configs are equivalent.\"\"\"\nif isinstance(config1, Configuration):\nconfig1 = str(config1[\"settings_mode\"])  # type: ignore\nif isinstance(config2, Configuration):\nconfig2 = str(config2[\"settings_mode\"])  # type: ignore\nreturn config1 == config2\n
"},{"location":"api/#autoverify.verifier.complete.nnenum.Nnenum.__init__","title":"__init__(batch_size=512, cpu_gpu_allocation=None, use_auto_settings=True)","text":"

New instance.

Source code in autoverify/verifier/complete/nnenum/verifier.py
def __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\nuse_auto_settings: bool = True,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] >= 0:\nraise ValueError(\"Nnenum does not use a GPU, please set it to -1.\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._use_auto_settings = use_auto_settings\n
"},{"location":"api/#autoverify.verifier.complete.nnenum.Nnenum.is_same_config","title":"is_same_config(config1, config2) staticmethod","text":"

Return if two configs are equivalent.

Source code in autoverify/verifier/complete/nnenum/verifier.py
@staticmethod\ndef is_same_config(\nconfig1: Configuration | str, config2: Configuration | str\n) -> bool:\n\"\"\"Return if two configs are equivalent.\"\"\"\nif isinstance(config1, Configuration):\nconfig1 = str(config1[\"settings_mode\"])  # type: ignore\nif isinstance(config2, Configuration):\nconfig2 = str(config2[\"settings_mode\"])  # type: ignore\nreturn config1 == config2\n
"},{"location":"api/#oval-bab","title":"Oval-BaB","text":""},{"location":"api/#autoverify.verifier.complete.ovalbab.OvalBab","title":"OvalBab","text":"

Bases: CompleteVerifier

Oval-BaB.

Source code in autoverify/verifier/complete/ovalbab/verifier.py
class OvalBab(CompleteVerifier):\n\"\"\"Oval-BaB.\"\"\"\nname: str = \"ovalbab\"\nconfig_space: ConfigurationSpace = OvalBabConfigspace\ndef __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"Oval-BaB CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\n@property\ndef contexts(self) -> list[ContextManager[None]]:\nreturn [\ncwd(self.tool_path),\nenvironment(\nLD_LIBRARY_PATH=str(\nfind_conda_lib(self.conda_env_name, \"libcudart.so.11.0\")\n)\n),\npkill_matches([\"python tools/bab_tools/bab_from_vnnlib.py\"]),\n]\ndef _parse_result(\nself,\n_: str,\nresult_file: Path | None,\n) -> tuple[VerificationResultString, str | None]:\nwith open(str(result_file), \"r\") as f:\nresult_text = f.read()\nif find_substring(\"violated\", result_text):\n# TODO: Counterexample (not sure if its saved at all by ovalbab?)\nreturn \"SAT\", None\nelif find_substring(\"holds\", result_text):\nreturn \"UNSAT\", None\nreturn \"TIMEOUT\", None\ndef _get_run_cmd(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Path,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> tuple[str, Path | None]:\nresult_file = Path(tmp_file(\".txt\").name)\nsource_cmd = get_conda_source_cmd(get_conda_path())\nrun_cmd = f\"\"\"\n{\" \".join(source_cmd)}\n        conda activate {self.conda_env_name}\n        python tools/bab_tools/bab_from_vnnlib.py --mode run_instance \\\n        --onnx {str(network)} \\\n        --vnnlib {str(property)} \\\n        --result_file {str(result_file)} \\\n        --json {str(config)} \\\n        --instance_timeout {timeout}\n        \"\"\"\nreturn run_cmd, result_file\ndef _init_config(\nself,\nnetwork: Path,\nproperty: Path,\nconfig: Configuration | Path,\n) -> Path:\nif isinstance(config, Configuration):\njson_config = OvalbabJsonConfig.from_config(config)\nelif isinstance(config, Path):\njson_config = OvalbabJsonConfig.from_json(config)\nelse:\nraise ValueError(\"Config should be a Configuration, Path or None\")\nreturn Path(json_config.get_json_file_path())\ndef _verify_batch(\nself,\ninstances: Iterable[Any],\n*,\nconfig: Configuration | Path | None,\n) -> list[CompleteVerificationResult]:\n# source_cmd = get_conda_source_cmd()\n# TODO:\nraise NotImplementedError(\"Batch verification not supported yet\")\n
"},{"location":"api/#autoverify.verifier.complete.ovalbab.OvalBab.__init__","title":"__init__(batch_size=512, cpu_gpu_allocation=None)","text":"

New instance.

Source code in autoverify/verifier/complete/ovalbab/verifier.py
def __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"Oval-BaB CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\n
"},{"location":"api/#verinet","title":"VeriNet","text":""},{"location":"api/#autoverify.verifier.complete.verinet.Verinet","title":"Verinet","text":"

Bases: CompleteVerifier

VeriNet.

Source code in autoverify/verifier/complete/verinet/verifier.py
class Verinet(CompleteVerifier):\n\"\"\"VeriNet.\"\"\"\nname: str = \"verinet\"\nconfig_space: ConfigurationSpace = VerinetConfigspace\n# HACK: Quick hack to add some attributes to a VeriNet instance.\n# Ideally, these could be passed when calling `verify_property/instance`\n# or inside the Configuration.\ndef __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\ngpu_mode: bool = True,\ninput_shape: list[int] | None = None,\ndnnv_simplify: bool = False,\ntranspose_matmul_weights: bool = False,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"VeriNet CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._gpu_mode = gpu_mode\nself._input_shape = input_shape\nself._dnnv_simplify = dnnv_simplify\nself._transpose_matmul_weights = transpose_matmul_weights\n@property\ndef contexts(self) -> list[ContextManager[None]]:\nreturn [\ncwd(self.tool_path),\nenvironment(\nOPENBLAS_NUM_THREADS=\"1\",\nOMP_NUM_THREADS=\"1\",\nLD_LIBRARY_PATH=str(\nfind_conda_lib(self.conda_env_name, \"libcudart.so.11.0\")\n),\n),\npkill_matches(\n[\n\"multiprocessing.spawn\",\n\"multiprocessing.forkserver\",\n\"python cli.py\",  # TODO: Too broad\n]\n),\n]\ndef _parse_result(\nself,\noutput: str,\n_: Path | None,\n) -> tuple[VerificationResultString, str | None]:\nif find_substring(\"STATUS:  Status.Safe\", output):\nreturn \"UNSAT\", None\nelif find_substring(\"STATUS:  Status.Unsafe\", output):\nreturn \"SAT\", None\nelif find_substring(\"STATUS:  Status.Undecided\", output):\nreturn \"TIMEOUT\", None\nreturn \"TIMEOUT\", None\ndef _get_run_cmd(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Configuration,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> tuple[str, Path | None]:\nsource_cmd = get_conda_source_cmd(get_conda_path())\ninput_shape = self._input_shape or get_input_shape(network)\n# params in order:\n# network, prop, timeout, config, input_shape, max_procs, gpu_mode,\n# dnnv_simplify\nrun_cmd = f\"\"\"\n{\" \".join(source_cmd)}\n        conda activate {self.conda_env_name}\n        python cli.py {str(network)} {str(property)} {timeout} \\\n{shlex.quote(str(dict(config)))} \\\n{shlex.quote(str(input_shape))} \\\n{-1} \\\n{self._gpu_mode} \\\n{self._dnnv_simplify} \\\n{self._transpose_matmul_weights} \\\n        \"\"\"\nreturn run_cmd, None\ndef _verify_batch(\nself,\ninstances: Iterable[Any],\n*,\nconfig: Configuration | Path | None,\n) -> list[CompleteVerificationResult]:\n# source_cmd = get_conda_source_cmd()\n# TODO:\nraise NotImplementedError(\"Batch verification not supported yet\")\n
"},{"location":"api/#autoverify.verifier.complete.verinet.Verinet.__init__","title":"__init__(batch_size=512, cpu_gpu_allocation=None, gpu_mode=True, input_shape=None, dnnv_simplify=False, transpose_matmul_weights=False)","text":"

New instance.

Source code in autoverify/verifier/complete/verinet/verifier.py
def __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\ngpu_mode: bool = True,\ninput_shape: list[int] | None = None,\ndnnv_simplify: bool = False,\ntranspose_matmul_weights: bool = False,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"VeriNet CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._gpu_mode = gpu_mode\nself._input_shape = input_shape\nself._dnnv_simplify = dnnv_simplify\nself._transpose_matmul_weights = transpose_matmul_weights\n
"},{"location":"api/#portfolio","title":"Portfolio","text":"

Parallel portfolio.

Class to run a parallel portfolio.

"},{"location":"api/#autoverify.portfolio.portfolio.ConfiguredVerifier","title":"ConfiguredVerifier dataclass","text":"

Class representing an internal configured verifier.

Attributes:

Name Type Description verifier str

Name of the verifier.

configuration Configuration

Its configuration.

resources tuple[int, int] | None

Number of cores and GPUs.

Source code in autoverify/portfolio/portfolio.py
@dataclass(frozen=True, eq=True, repr=True)\nclass ConfiguredVerifier:\n\"\"\"Class representing an internal configured verifier.\n    Attributes:\n        verifier: Name of the verifier.\n        configuration: Its configuration.\n        resources: Number of cores and GPUs.\n    \"\"\"\nverifier: str\nconfiguration: Configuration\nresources: tuple[int, int] | None = None\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio","title":"Portfolio","text":"

Bases: MutableSet[ConfiguredVerifier]

Portfolio of ConfiguredVerifiers.

Source code in autoverify/portfolio/portfolio.py
class Portfolio(MutableSet[ConfiguredVerifier]):\n\"\"\"Portfolio of ConfiguredVerifiers.\"\"\"\ndef __init__(self, *cvs: ConfiguredVerifier):\n\"\"\"Initialize a new PF with the passed verifiers.\"\"\"\nself._pf_set: set[ConfiguredVerifier] = set(cvs)\nself._costs: dict[str, float] = {}\ndef __contains__(self, cv: object):\n\"\"\"Check if a CV is in the PF.\"\"\"\n# cant type annotate the func arg or mypy gets mad\nassert isinstance(cv, ConfiguredVerifier)\nreturn cv in self._pf_set\ndef __iter__(self):\n\"\"\"Iterate the contents of the PF.\"\"\"\nreturn iter(self._pf_set)\ndef __len__(self):\n\"\"\"Number of CVs in the PF.\"\"\"\nreturn len(self._pf_set)\ndef __str__(self):\n\"\"\"String representation of the PF.\"\"\"\nres = \"\"\nfor cv in self:\nres += str(cv) + \"\\n\"\nreturn res\ndef get_set(self):\n\"\"\"Get the underlying set.\"\"\"\nreturn self._pf_set\n@property\ndef configs(self) -> list[Configuration]:\n\"\"\"All the configurations in the PF.\"\"\"\nconfigs = []\nfor cv in self._pf_set:\nconfigs.append(cv.configuration)\nreturn configs\ndef get_cost(self, instance: str):\n\"\"\"Get the currently known costs of an instance.\"\"\"\nreturn self._costs[instance]\ndef get_costs(self, instances: Iterable[str]) -> dict[str, float]:\n\"\"\"Get costs of more than one instance.\"\"\"\ncosts: dict[str, float] = {}\nfor inst in instances:\nif inst in self._costs:\ncosts[inst] = self._costs[inst]\nreturn costs\ndef get_all_costs(self) -> dict[str, float]:\n\"\"\"All the recorded costs.\"\"\"\nreturn self._costs\ndef get_mean_cost(self) -> float:\n\"\"\"Get the mean cost.\"\"\"\nreturn float(np.mean(list(self._costs.values())))\ndef get_median_cost(self) -> float:\n\"\"\"Get the median cost.\"\"\"\nreturn float(np.median(list(self._costs.values())))\ndef get_total_cost(self) -> float:\n\"\"\"Get the total cost.\"\"\"\nreturn float(np.sum(list(self._costs.values())))\ndef update_costs(self, costs: Mapping[str, float]):\n\"\"\"Upate the costs based on the new costs mapping.\"\"\"\nfor instance, cost in costs.items():\nif instance not in self._costs:\nself._costs[instance] = cost\ncontinue\nself._costs[instance] = min(self._costs[instance], cost)\ndef add(self, cv: ConfiguredVerifier):\n\"\"\"Add a CV to the PF, no duplicates allowed.\"\"\"\nif cv in self._pf_set:\nraise ValueError(f\"{cv} is already in the portfolio\")\nself._pf_set.add(cv)\ndef discard(self, cv: ConfiguredVerifier):\n\"\"\"Remove a CV from the PF.\"\"\"\nif cv not in self._pf_set:\nraise ValueError(f\"{cv} is not in the portfolio\")\nself._pf_set.discard(cv)\ndef reallocate_resources(\nself, strategy: ResourceStrategy = ResourceStrategy.Auto\n):\n\"\"\"Realloacte based on current contents and given strategy.\"\"\"\nif strategy != ResourceStrategy.Auto:\nraise NotImplementedError(\n\"Given `ResourceStrategy` is not supported yet.\"\n)\n# NOTE: Should put this alloc stuff in a function\nn_cores = cpu_count()\ncores_per = n_cores // len(self)\ncores_remainder = n_cores % len(self)\nfor cv in self:\nverifier = cv.verifier\nresources = cv.resources\nconfig = cv.configuration\nextra_core = 0\nif cores_remainder > 0:\nextra_core = 1\ncores_remainder -= 1\nnew_resources = (\n(cores_per + extra_core, resources[1]) if resources else None\n)\nself.discard(cv)\nself.add(ConfiguredVerifier(verifier, config, new_resources))\ndef to_json(self, out_json_path: Path):\n\"\"\"Write the portfolio in JSON format to the specified path.\"\"\"\njson_list: list[dict[str, Any]] = []\nfor cv in self._pf_set:\ncfg_dict = 
dict(cv.configuration)\nto_write = {\n\"verifier\": cv.verifier,\n\"configuration\": cfg_dict,\n\"resources\": cv.resources,\n}\njson_list.append(to_write)\nwith open(out_json_path, \"w\") as f:\njson.dump(json_list, f, indent=4, default=str)\n@classmethod\ndef from_json(\ncls,\njson_file: Path,\nconfig_space_map: Mapping[str, ConfigurationSpace] | None = None,\n) -> Portfolio:\n\"\"\"Instantiate a new Portfolio class from a JSON file.\"\"\"\nwith open(json_file.expanduser().resolve()) as f:\npf_json = json.load(f)\npf = Portfolio()\nfor cv in pf_json:\nif config_space_map is None:\ncfg_space = get_verifier_configspace(cv[\"verifier\"])\nelse:\ncfg_space = config_space_map[cv[\"verifier\"]]\ncv[\"configuration\"] = Configuration(cfg_space, cv[\"configuration\"])\nif cv[\"resources\"]:\ncv[\"resources\"] = tuple(cv[\"resources\"])\npf.add(ConfiguredVerifier(**cv))\nreturn pf\ndef str_compact(self) -> str:\n\"\"\"Get the portfolio in string form in a compact way.\"\"\"\ncvs: list[str] = []\nfor cv in self:\ncvs.append(\n\"\\t\".join(\n[\nstr(cv.verifier),\nstr(hash(cv.configuration)),\nstr(cv.resources),\n]\n)\n)\nreturn \"\\n\".join(cvs)\ndef dump_costs(self):\n\"\"\"Print the costs for each instance.\"\"\"\nfor instance, cost in self._costs.items():\nprint(instance, cost)\n
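A minimal usage sketch of building, costing, and serializing a portfolio:

from pathlib import Path

from autoverify.portfolio.portfolio import ConfiguredVerifier, Portfolio
from autoverify.verifier.complete.nnenum import Nnenum  # path per the Nnenum anchor above

pf = Portfolio(
    ConfiguredVerifier("nnenum", Nnenum().default_config, resources=(4, 0)),  # 4 cores, 0 GPUs
)
pf.update_costs({"instance_1": 10.0})  # hypothetical instance key
print(pf.get_mean_cost())
pf.to_json(Path("portfolio.json"))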
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.configs","title":"configs: list[Configuration] property","text":"

All the configurations in the PF.

"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.__contains__","title":"__contains__(cv)","text":"

Check if a CV is in the PF.

Source code in autoverify/portfolio/portfolio.py
def __contains__(self, cv: object):\n\"\"\"Check if a CV is in the PF.\"\"\"\n# can't type-annotate the func arg or mypy gets mad\nassert isinstance(cv, ConfiguredVerifier)\nreturn cv in self._pf_set\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.__init__","title":"__init__(*cvs)","text":"

Initialize a new PF with the passed verifiers.

Source code in autoverify/portfolio/portfolio.py
def __init__(self, *cvs: ConfiguredVerifier):\n\"\"\"Initialize a new PF with the passed verifiers.\"\"\"\nself._pf_set: set[ConfiguredVerifier] = set(cvs)\nself._costs: dict[str, float] = {}\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.__iter__","title":"__iter__()","text":"

Iterate the contents of the PF.

Source code in autoverify/portfolio/portfolio.py
def __iter__(self):\n\"\"\"Iterate the contents of the PF.\"\"\"\nreturn iter(self._pf_set)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.__len__","title":"__len__()","text":"

Number of CVs in the PF.

Source code in autoverify/portfolio/portfolio.py
def __len__(self):\n\"\"\"Number of CVs in the PF.\"\"\"\nreturn len(self._pf_set)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.__str__","title":"__str__()","text":"

String representation of the PF.

Source code in autoverify/portfolio/portfolio.py
def __str__(self):\n\"\"\"String representation of the PF.\"\"\"\nres = \"\"\nfor cv in self:\nres += str(cv) + \"\\n\"\nreturn res\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.add","title":"add(cv)","text":"

Add a CV to the PF, no duplicates allowed.

Source code in autoverify/portfolio/portfolio.py
def add(self, cv: ConfiguredVerifier):\n\"\"\"Add a CV to the PF, no duplicates allowed.\"\"\"\nif cv in self._pf_set:\nraise ValueError(f\"{cv} is already in the portfolio\")\nself._pf_set.add(cv)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.discard","title":"discard(cv)","text":"

Remove a CV from the PF.

Source code in autoverify/portfolio/portfolio.py
def discard(self, cv: ConfiguredVerifier):\n\"\"\"Remove a CV from the PF.\"\"\"\nif cv not in self._pf_set:\nraise ValueError(f\"{cv} is not in the portfolio\")\nself._pf_set.discard(cv)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.dump_costs","title":"dump_costs()","text":"

Print the costs for each instance.

Source code in autoverify/portfolio/portfolio.py
def dump_costs(self):\n\"\"\"Print the costs for each instance.\"\"\"\nfor instance, cost in self._costs.items():\nprint(instance, cost)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.from_json","title":"from_json(json_file, config_space_map=None) classmethod","text":"

Instantiate a new Portfolio class from a JSON file.

Source code in autoverify/portfolio/portfolio.py
@classmethod\ndef from_json(\ncls,\njson_file: Path,\nconfig_space_map: Mapping[str, ConfigurationSpace] | None = None,\n) -> Portfolio:\n\"\"\"Instantiate a new Portfolio class from a JSON file.\"\"\"\nwith open(json_file.expanduser().resolve()) as f:\npf_json = json.load(f)\npf = Portfolio()\nfor cv in pf_json:\nif config_space_map is None:\ncfg_space = get_verifier_configspace(cv[\"verifier\"])\nelse:\ncfg_space = config_space_map[cv[\"verifier\"]]\ncv[\"configuration\"] = Configuration(cfg_space, cv[\"configuration\"])\nif cv[\"resources\"]:\ncv[\"resources\"] = tuple(cv[\"resources\"])\npf.add(ConfiguredVerifier(**cv))\nreturn pf\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_all_costs","title":"get_all_costs()","text":"

All the recorded costs.

Source code in autoverify/portfolio/portfolio.py
def get_all_costs(self) -> dict[str, float]:\n\"\"\"All the recorded costs.\"\"\"\nreturn self._costs\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_cost","title":"get_cost(instance)","text":"

Get the currently known costs of an instance.

Source code in autoverify/portfolio/portfolio.py
def get_cost(self, instance: str):\n\"\"\"Get the currently known costs of an instance.\"\"\"\nreturn self._costs[instance]\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_costs","title":"get_costs(instances)","text":"

Get costs of more than one instance.

Source code in autoverify/portfolio/portfolio.py
def get_costs(self, instances: Iterable[str]) -> dict[str, float]:\n\"\"\"Get costs of more than one instance.\"\"\"\ncosts: dict[str, float] = {}\nfor inst in instances:\nif inst in self._costs:\ncosts[inst] = self._costs[inst]\nreturn costs\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_mean_cost","title":"get_mean_cost()","text":"

Get the mean cost.

Source code in autoverify/portfolio/portfolio.py
def get_mean_cost(self) -> float:\n\"\"\"Get the mean cost.\"\"\"\nreturn float(np.mean(list(self._costs.values())))\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_median_cost","title":"get_median_cost()","text":"

Get the median cost.

Source code in autoverify/portfolio/portfolio.py
def get_median_cost(self) -> float:\n\"\"\"Get the median cost.\"\"\"\nreturn float(np.median(list(self._costs.values())))\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_set","title":"get_set()","text":"

Get the underlying set.

Source code in autoverify/portfolio/portfolio.py
def get_set(self):\n\"\"\"Get the underlying set.\"\"\"\nreturn self._pf_set\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_total_cost","title":"get_total_cost()","text":"

Get the total cost.

Source code in autoverify/portfolio/portfolio.py
def get_total_cost(self) -> float:\n\"\"\"Get the total cost.\"\"\"\nreturn float(np.sum(list(self._costs.values())))\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.reallocate_resources","title":"reallocate_resources(strategy=ResourceStrategy.Auto)","text":"

Reallocate based on current contents and given strategy.

Source code in autoverify/portfolio/portfolio.py
def reallocate_resources(\nself, strategy: ResourceStrategy = ResourceStrategy.Auto\n):\n\"\"\"Reallocate based on current contents and given strategy.\"\"\"\nif strategy != ResourceStrategy.Auto:\nraise NotImplementedError(\n\"Given `ResourceStrategy` is not supported yet.\"\n)\n# NOTE: Should put this alloc stuff in a function\nn_cores = cpu_count()\ncores_per = n_cores // len(self)\ncores_remainder = n_cores % len(self)\nfor cv in self:\nverifier = cv.verifier\nresources = cv.resources\nconfig = cv.configuration\nextra_core = 0\nif cores_remainder > 0:\nextra_core = 1\ncores_remainder -= 1\nnew_resources = (\n(cores_per + extra_core, resources[1]) if resources else None\n)\nself.discard(cv)\nself.add(ConfiguredVerifier(verifier, config, new_resources))\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.str_compact","title":"str_compact()","text":"

Get the portfolio in string form in a compact way.

Source code in autoverify/portfolio/portfolio.py
def str_compact(self) -> str:\n\"\"\"Get the portfolio in string form in a compact way.\"\"\"\ncvs: list[str] = []\nfor cv in self:\ncvs.append(\n\"\\t\".join(\n[\nstr(cv.verifier),\nstr(hash(cv.configuration)),\nstr(cv.resources),\n]\n)\n)\nreturn \"\\n\".join(cvs)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.to_json","title":"to_json(out_json_path)","text":"

Write the portfolio in JSON format to the specified path.

Source code in autoverify/portfolio/portfolio.py
def to_json(self, out_json_path: Path):\n\"\"\"Write the portfolio in JSON format to the specified path.\"\"\"\njson_list: list[dict[str, Any]] = []\nfor cv in self._pf_set:\ncfg_dict = dict(cv.configuration)\nto_write = {\n\"verifier\": cv.verifier,\n\"configuration\": cfg_dict,\n\"resources\": cv.resources,\n}\njson_list.append(to_write)\nwith open(out_json_path, \"w\") as f:\njson.dump(json_list, f, indent=4, default=str)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.update_costs","title":"update_costs(costs)","text":"

Update the costs based on the new costs mapping.

Source code in autoverify/portfolio/portfolio.py
def update_costs(self, costs: Mapping[str, float]):\n\"\"\"Update the costs based on the new costs mapping.\"\"\"\nfor instance, cost in costs.items():\nif instance not in self._costs:\nself._costs[instance] = cost\ncontinue\nself._costs[instance] = min(self._costs[instance], cost)\n
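Note that update_costs keeps the minimum (best) cost per instance:

from autoverify.portfolio.portfolio import Portfolio

pf = Portfolio()
pf.update_costs({"inst_a": 9.0})
pf.update_costs({"inst_a": 5.0, "inst_b": 7.0})
assert pf.get_cost("inst_a") == 5.0  # the lower cost wins
assert pf.get_cost("inst_b") == 7.0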
"},{"location":"api/#autoverify.portfolio.portfolio.PortfolioScenario","title":"PortfolioScenario dataclass","text":"

Scenario for constructing a parallel portfolio.

Attributes:

Name Type Description verifiers Sequence[str]

The names of the verifiers to consider.

resources list[tuple[str, int, int]]

How many cores and GPUs the verifiers need.

instances Sequence[VerificationInstance]

The instances on which the PF is constructed.

length int

The max length of the PF.

seconds_per_iter float

Number of seconds for each Hydra iteration.

configs_per_iter int

Number of configs each iteration.

alpha float

Tune/Pick time split.

added_per_iter int

Entries added to the PF per iter.

stop_early bool

Stop procedure if some early stop conditions are met.

resource_strategy ResourceStrategy

Strategy to divide the resources.

output_dir Path | None

Dir where logs are stored.

vnn_compat_mode bool

Use vnn compatibility for some verifiers.

benchmark str | None

VNNCOMP benchmark if vnn_compat_mode is True.

verifier_kwargs dict[str, dict[str, Any]] | None

Kwargs passed to verifiers.

uses_simplified_network Iterable[str] | None

If the network uses the dnnv simplified nets.

Source code in autoverify/portfolio/portfolio.py
@dataclass\nclass PortfolioScenario:\n\"\"\"Scenario for constructing a parallel portfolio.\n    Attributes:\n        verifiers: The name of the verifiers to consider.\n        resources: How many cores and GPUs the verifiers need.\n        instances: The instances on which the PF is constructed.\n        length: The max length of the PF.\n        seconds_per_iter: Number of seconds for each Hydra iteration.\n        configs_per_iter: Number of configs each iteration.\n        alpha: Tune/Pick time split.\n        added_per_iter: Entries added to the PF per iter.\n        stop_early: Stop procedure if some early stop conditions are met.\n        resource_strategy: Strat to divide the resources.\n        output_dir: Dir where logs are stored.\n        vnn_compat_mode: Use vnn compatability for some verifiers.\n        benchmark: VNNCOMP benchmark if vnn_compat_mode is `True`.\n        verifier_kwargs: Kwargs passed to verifiers.\n        uses_simplified_network: If the network uses the dnnv simplified nets.\n    \"\"\"\nverifiers: Sequence[str]\nresources: list[tuple[str, int, int]]\ninstances: Sequence[VerificationInstance]\nlength: int  # TODO: Rename to max_length?\nseconds_per_iter: float\n# Optional\nconfigs_per_iter: int = 1\nalpha: float = 0.5  # tune/pick split\nadded_per_iter: int = 1\nstop_early: bool = True\nresource_strategy: ResourceStrategy = ResourceStrategy.Auto\noutput_dir: Path | None = None\nvnn_compat_mode: bool = False\nbenchmark: str | None = None\nverifier_kwargs: dict[str, dict[str, Any]] | None = None\nuses_simplified_network: Iterable[str] | None = None\ndef __post_init__(self):\n\"\"\"Validate the PF scenario.\"\"\"\nif self.added_per_iter > 1 or self.configs_per_iter > 1:\nraise ValueError(\n\"Adding more than 1 config per iter not supported yet.\"\n)\nif not 0 <= self.alpha <= 1:\nraise ValueError(f\"Alpha should be in [0.0, 1.0], got {self.alpha}\")\nif self.output_dir is None:\ncurrent_time = datetime.datetime.now()\nformatted_time = current_time.strftime(\"%Y-%m-%d %H:%M:%S\")\nself.output_dir = Path(f\"hydra_out/{formatted_time}\")\nself.tune_budget = self.alpha\nself.pick_budget = 1 - self.alpha\nif self.added_per_iter > self.length:\nraise ValueError(\"Entries added per iter should be <= length\")\nif self.vnn_compat_mode:\nif not self.benchmark:\nraise ValueError(\"Use a benchmark name if vnn_compat_mode=True\")\nif self.vnn_compat_mode and self.verifier_kwargs:\nraise ValueError(\n\"Cannot use vnn_compat_mode and \"\n\"verifier_kwargs at the same time.\"\n)\nself.n_iters = math.ceil(self.length / self.added_per_iter)\nself._verify_resources()\ndef _verify_resources(self):\n# Check for duplicates\nseen = set()\nfor r in self.resources:\nif r[0] in seen:\nraise ValueError(f\"Duplicate name '{r[0]}' in resources\")\nif r[0] not in self.verifiers:\nraise ValueError(f\"{r[0]} in resources but not in verifiers.\")\nseen.add(r[0])\nfor v in self.verifiers:\nif v not in seen:\nraise ValueError(\nf\"Verifier '{v}' in verifiers but not in resources.\"\n)\nif self.resource_strategy == ResourceStrategy.Auto:\nfor r in self.resources:\nif r[1] != 0:\nraise ValueError(\n\"CPU resources should be 0 when using `Auto`\"\n)\nelse:\nraise NotImplementedError(\nf\"ResourceStrategy {self.resource_strategy} \"\nf\"is not implemented yet.\"\n)\ndef get_smac_scenario_kwargs(self) -> dict[str, Any]:\n\"\"\"Return the SMAC scenario kwargs as a dict.\n        Returns:\n            dict[str, Any]: The SMAC scenario as a dict.\n        \"\"\"\nassert self.output_dir is not None  # 
This is set in `__post_init__`\nself.output_dir.mkdir(parents=True, exist_ok=True)\nreturn {\n\"instances\": verification_instances_to_smac_instances(\nself.instances\n),\n\"instance_features\": index_features(self.instances),\n\"output_directory\": self.output_dir,\n}\ndef get_smac_instances(self) -> list[str]:\n\"\"\"Get the instances of the scenario as SMAC instances.\n        Returns:\n            list[str]: The SMAC instances.\n        \"\"\"\nreturn verification_instances_to_smac_instances(self.instances)\n
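A construction sketch (the instances list is left as an empty placeholder; in practice it is a previously built Sequence[VerificationInstance]):

from autoverify.portfolio.portfolio import PortfolioScenario

instances = []  # placeholder; in practice a Sequence[VerificationInstance]
scenario = PortfolioScenario(
    verifiers=["nnenum", "abcrown"],
    resources=[("nnenum", 0, 0), ("abcrown", 0, 1)],  # (name, n_cpu, n_gpu); CPUs must be 0 under ResourceStrategy.Auto
    instances=instances,
    length=4,
    seconds_per_iter=600.0,
)
print(scenario.n_iters)  # ceil(length / added_per_iter) == 4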
"},{"location":"api/#autoverify.portfolio.portfolio.PortfolioScenario.__post_init__","title":"__post_init__()","text":"

Validate the PF scenario.

Source code in autoverify/portfolio/portfolio.py
def __post_init__(self):\n\"\"\"Validate the PF scenario.\"\"\"\nif self.added_per_iter > 1 or self.configs_per_iter > 1:\nraise ValueError(\n\"Adding more than 1 config per iter not supported yet.\"\n)\nif not 0 <= self.alpha <= 1:\nraise ValueError(f\"Alpha should be in [0.0, 1.0], got {self.alpha}\")\nif self.output_dir is None:\ncurrent_time = datetime.datetime.now()\nformatted_time = current_time.strftime(\"%Y-%m-%d %H:%M:%S\")\nself.output_dir = Path(f\"hydra_out/{formatted_time}\")\nself.tune_budget = self.alpha\nself.pick_budget = 1 - self.alpha\nif self.added_per_iter > self.length:\nraise ValueError(\"Entries added per iter should be <= length\")\nif self.vnn_compat_mode:\nif not self.benchmark:\nraise ValueError(\"Use a benchmark name if vnn_compat_mode=True\")\nif self.vnn_compat_mode and self.verifier_kwargs:\nraise ValueError(\n\"Cannot use vnn_compat_mode and \"\n\"verifier_kwargs at the same time.\"\n)\nself.n_iters = math.ceil(self.length / self.added_per_iter)\nself._verify_resources()\n
"},{"location":"api/#autoverify.portfolio.portfolio.PortfolioScenario.get_smac_instances","title":"get_smac_instances()","text":"

Get the instances of the scenario as SMAC instances.

Returns:

Type Description list[str]

list[str]: The SMAC instances.

Source code in autoverify/portfolio/portfolio.py
def get_smac_instances(self) -> list[str]:\n\"\"\"Get the instances of the scenario as SMAC instances.\n    Returns:\n        list[str]: The SMAC instances.\n    \"\"\"\nreturn verification_instances_to_smac_instances(self.instances)\n
"},{"location":"api/#autoverify.portfolio.portfolio.PortfolioScenario.get_smac_scenario_kwargs","title":"get_smac_scenario_kwargs()","text":"

Return the SMAC scenario kwargs as a dict.

Returns:

Type Description dict[str, Any]

dict[str, Any]: The SMAC scenario as a dict.

Source code in autoverify/portfolio/portfolio.py
def get_smac_scenario_kwargs(self) -> dict[str, Any]:\n\"\"\"Return the SMAC scenario kwargs as a dict.\n    Returns:\n        dict[str, Any]: The SMAC scenario as a dict.\n    \"\"\"\nassert self.output_dir is not None  # This is set in `__post_init__`\nself.output_dir.mkdir(parents=True, exist_ok=True)\nreturn {\n\"instances\": verification_instances_to_smac_instances(\nself.instances\n),\n\"instance_features\": index_features(self.instances),\n\"output_directory\": self.output_dir,\n}\n
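
For example, the returned kwargs can be unpacked directly into a SMAC Scenario. A minimal sketch, assuming a previously constructed PortfolioScenario named pf_scenario and a verifier whose configuration space is being tuned (both names are hypothetical):

from smac import Scenario\n# pf_scenario and verifier are hypothetical, previously constructed objects\nscenario = Scenario(\n    verifier.config_space,\n    walltime_limit=600,\n    **pf_scenario.get_smac_scenario_kwargs(),  # instances, features, output dir\n)\n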
"},{"location":"api/#autoverify.portfolio.portfolio_runner.PortfolioRunner","title":"PortfolioRunner","text":"

Class to run a portfolio in parallel.

Source code in autoverify/portfolio/portfolio_runner.py
class PortfolioRunner:\n\"\"\"Class to run a portfolio in parallel.\"\"\"\ndef __init__(\nself,\nportfolio: Portfolio,\nvbs_mode: bool = False,\n*,\nn_cpu: int | None = None,\nn_gpu: int | None = None,\n):\n\"\"\"Initialize a new portfolio runner.\n        Arguments:\n            portfolio: The portfolio that will be run.\n            vbs_mode: If the PF will be run in VBS mode.\n            n_cpu: Override number of CPUs\n            n_gpu: Override number of GPUs.\n        \"\"\"\nself._portfolio = portfolio\nself._vbs_mode = vbs_mode\nself._n_cpu = n_cpu\nself._n_gpu = n_gpu\nif not self._vbs_mode:\nself._init_resources()\nself._is_cleaning = False\ndef _wrap_cleanup(*_):\nif self._is_cleaning:\nreturn\nself._is_cleaning = True\nself._cleanup()\nself._is_cleaning = False\nsignal.signal(signal.SIGINT, _wrap_cleanup)\nsignal.signal(signal.SIGTERM, _wrap_cleanup)\ndef _init_resources(self):\nself._allocation: dict[ConfiguredVerifier, tuple[int, int, int]] = {}\nif self._n_cpu:\ncpu_left = self._n_cpu\nelse:\ncpu_left = cpu_count()\nif self._n_gpu:\ngpu_left = self._n_gpu\nelse:\ngpu_left = nvidia_gpu_count()\nfor cv in self._portfolio:\nif cv.resources is None:\nraise ValueError(\n\"No resources for\"\nf\"{cv.verifier} :: {cv.configuration} found.\"\n)\n# CPU (taskset) and GPU (CUDA_VISIBLE_DEVICES) index start at 0\nn_cpu, n_gpu = cv.resources[0], cv.resources[1]\nif n_gpu > gpu_left:\nraise RuntimeError(\"No more GPUs left\")\nif n_cpu > cpu_left:\nraise RuntimeError(\"No more CPUs left\")\nif n_cpu <= 0:\nraise RuntimeError(\"CPUs should be > 0\")\n# Currently only support 1 GPU per verifier\nif n_gpu > 0:\ncurr_gpu = gpu_left - 1\ngpu_left -= 1\nelse:\ncurr_gpu = -1\ncpu_high = cpu_left\ncpu_low = cpu_left - n_cpu\ncpu_left = cpu_low\nself._allocation[cv] = (cpu_low, cpu_high - 1, curr_gpu)\ndef evaluate_vbs(\nself,\ninstances: list[VerificationInstance],\n*,\nout_csv: Path | None = None,\nvnncompat: bool = False,\nbenchmark: str | None = None,\n) -> _VbsResult:\n\"\"\"Evaluate the PF in vbs mode.\n        Arguments:\n            instances: Instances to evaluate.\n            out_csv: File where the results are written to.\n            vnncompat: Use some compat kwargs.\n            benchmark: Only if vnncompat, benchmark name.\n        \"\"\"\nresults: _CostDict = {}\nif vnncompat and benchmark is None:\nraise ValueError(\"Need a benchmark if vnncompat=True\")\nelif not vnncompat and benchmark is not None:\nraise ValueError(\"Only use benchmark if vnncompat=True\")\nfor cv in self._portfolio:\nassert cv.resources is not None\nfor instance in instances:\nverifier = _get_verifier(instance, cv, vnncompat, benchmark)\nlogger.info(f\"{cv.verifier} on {str(instance)}\")\nresult = verifier.verify_instance(instance)\nself._add_result(cv, instance, result, results)\nif out_csv:\nself._csv_log_result(\nout_csv,\nresult,\ninstance,\ncv.verifier,\ncv.configuration,\n)\nreturn self._vbs_from_cost_dict(results)\n@staticmethod\ndef _csv_log_result(\nout_csv: Path,\nresult: CompleteVerificationResult,\ninstance: VerificationInstance,\nverifier: str,\nconfiguration: Configuration,\n):\nif isinstance(result, Ok):\nres_d = result.unwrap()\nsuccess = \"OK\"\nelif isinstance(result, Err):\nres_d = result.unwrap_err()\nsuccess = \"ERR\"\ninst_d = {\n\"network\": instance.network,\n\"property\": instance.property,\n\"timeout\": instance.timeout,\n\"verifier\": verifier,\n\"config\": configuration,\n\"success\": success,\n}\nif not out_csv.exists():\ninit_verification_result_csv(out_csv)\nvdr = 
VerificationDataResult.from_verification_result(res_d, inst_d)\ncsv_append_verification_result(vdr, out_csv)\n@staticmethod\ndef _vbs_from_cost_dict(cost_dict: _CostDict) -> _VbsResult:\nvbs: _VbsResult = {}\nfor cv, instance_costs in cost_dict.items():\nfor instance, cost in instance_costs.items():\nstr_inst = str(instance)\nif str_inst not in vbs:\nvbs[str_inst] = (cost, cv.verifier)\ncontinue\nif cost < vbs[str_inst][0]:\nvbs[str_inst] = (cost, cv.verifier)\nreturn vbs\n@staticmethod\ndef _add_result(\ncv: ConfiguredVerifier,\ninstance: VerificationInstance,\nresult: CompleteVerificationResult,\nresults: _CostDict,\n):\ncost: float\nif isinstance(result, Ok):\ncost = result.unwrap().took\nelif isinstance(result, Err):\ncost = float(\"inf\")\nlogger.info(f\"Cost: {cost}\")\nif cv not in results:\nresults[cv] = {}\nresults[cv][instance] = cost\n# TODO: Arg types\ndef _get_verifiers(\nself,\ninstance,\nvnncompat,\nbenchmark,\nverifier_kwargs,\n) -> dict[ConfiguredVerifier, CompleteVerifier]:\nverifiers: dict[ConfiguredVerifier, CompleteVerifier] = {}\nfor cv in self._portfolio:\nv = _get_verifier(\ninstance,\ncv,\nvnncompat,\nbenchmark,\nverifier_kwargs,\nself._allocation,\n)\nassert isinstance(v, CompleteVerifier)\nverifiers[cv] = v\nreturn verifiers\ndef verify_instances(\nself,\ninstances: Iterable[VerificationInstance],\n*,\nout_csv: Path | None = None,\nvnncompat: bool = False,\nbenchmark: str | None = None,\nverifier_kwargs: dict[str, dict[str, Any]] | None = None,\nuses_simplified_network: Iterable[str] | None = None,\n) -> dict[VerificationInstance, VerificationDataResult]:\n\"\"\"Run the PF in parallel.\n        Arguments:\n            instances: Instances to evaluate.\n            out_csv: File where the results are written to.\n            vnncompat: Use some compat kwargs.\n            benchmark: Only if vnncompat, benchmark name.\n            verifier_kwargs: Kwargs passed to verifiers.\n            uses_simplified_network: Have some verifiers use simplified nets.\n        \"\"\"\nif self._vbs_mode:\nraise RuntimeError(\"Function not compatible with vbs_mode\")\nif out_csv:\nout_csv = out_csv.expanduser().resolve()\nresults: dict[VerificationInstance, VerificationDataResult] = {}\nwith concurrent.futures.ThreadPoolExecutor() as executor:\nfor instance in instances:\nlogger.info(f\"Running portfolio on {str(instance)}\")\nfutures: dict[\nFuture[CompleteVerificationResult], ConfiguredVerifier\n] = {}\nself._verifiers = self._get_verifiers(\ninstance,\nvnncompat,\nbenchmark,\nverifier_kwargs,\n)\nis_solved = False\nfor cv in self._portfolio:\nif (\nuses_simplified_network\nand cv.verifier in uses_simplified_network\n):\ntarget_instance = instance.as_simplified_network()\nelse:\ntarget_instance = instance\nfuture = executor.submit(\nself._verifiers[cv].verify_instance, target_instance\n)\nfutures[future] = cv\nfor future in concurrent.futures.as_completed(futures):\nfut_cv = futures[future]\nresult = future.result()\nif not is_solved:\ngot_solved = self._process_result(\nresult, results, fut_cv, instance, self._verifiers\n)\nif got_solved and not is_solved:\nis_solved = True\nif out_csv:\nself._csv_log_result(\nout_csv,\nresult,\ninstance,\nfut_cv.verifier,\nfut_cv.configuration,\n)\nreturn results\ndef _process_result(\nself,\nresult: CompleteVerificationResult,\nresults: dict[VerificationInstance, VerificationDataResult],\ncv: ConfiguredVerifier,\ninstance: VerificationInstance,\nverifiers: dict[ConfiguredVerifier, CompleteVerifier],\n) -> bool:\ninstance_data: dict[str, Any] = 
{\n\"network\": instance.network,\n\"property\": instance.property,\n\"timeout\": instance.timeout,\n\"verifier\": cv.verifier,\n\"config\": cv.configuration,\n}\nif isinstance(result, Ok):\nr = result.unwrap()\nif r.result == \"TIMEOUT\":\nlog_string = f\"{cv.verifier} timed out\"\nelse:\nself._cancel_running(cv, verifiers)\nlog_string = (\nf\"Verified by {cv.verifier} in {r.took:.2f} sec, \"\nf\"result = {r.result}\"\n)\ninstance_data[\"success\"] = \"OK\"\nresults[instance] = VerificationDataResult.from_verification_result(\nr, instance_data\n)\n# Signal that the instance was solved\nif r.result in [\"SAT\", \"UNSAT\"]:\nlogger.info(log_string)\nreturn True\nelif isinstance(result, Err):\nlog_string = f\"{cv.verifier} errored.\"\nr = result.unwrap_err()\ninstance_data[\"success\"] = \"ERR\"\nresults[instance] = VerificationDataResult.from_verification_result(\nr, instance_data\n)\nlogger.info(log_string)\n# Instance was not solved\nreturn False\ndef _cancel_running(\nself,\nfirst_cv: ConfiguredVerifier,\nverifiers: dict[ConfiguredVerifier, CompleteVerifier],\n):\nothers = set_iter_except(self._portfolio.get_set(), first_cv)\nfor other_cv in others:\nverifiers[other_cv].set_timeout_event()\ndef _cleanup(self):\n\"\"\"Kill all running verifiers processes.\"\"\"\nif not self._verifiers:\nsys.exit(0)\nfor verifier_inst in self._verifiers.values():\ntry:\nverifier_inst.set_timeout_event()\nfinally:\npass\nsys.exit(0)\n
"},{"location":"api/#autoverify.portfolio.portfolio_runner.PortfolioRunner.__init__","title":"__init__(portfolio, vbs_mode=False, *, n_cpu=None, n_gpu=None)","text":"

Initialize a new portfolio runner.

Parameters:

Name Type Description Default portfolio Portfolio

The portfolio that will be run.

required vbs_mode bool

If the PF will be run in VBS mode.

False n_cpu int | None

Override number of CPUs.

None n_gpu int | None

Override number of GPUs.

None Source code in autoverify/portfolio/portfolio_runner.py
def __init__(\nself,\nportfolio: Portfolio,\nvbs_mode: bool = False,\n*,\nn_cpu: int | None = None,\nn_gpu: int | None = None,\n):\n\"\"\"Initialize a new portfolio runner.\n    Arguments:\n        portfolio: The portfolio that will be run.\n        vbs_mode: If the PF will be run in VBS mode.\n        n_cpu: Override number of CPUs\n        n_gpu: Override number of GPUs.\n    \"\"\"\nself._portfolio = portfolio\nself._vbs_mode = vbs_mode\nself._n_cpu = n_cpu\nself._n_gpu = n_gpu\nif not self._vbs_mode:\nself._init_resources()\nself._is_cleaning = False\ndef _wrap_cleanup(*_):\nif self._is_cleaning:\nreturn\nself._is_cleaning = True\nself._cleanup()\nself._is_cleaning = False\nsignal.signal(signal.SIGINT, _wrap_cleanup)\nsignal.signal(signal.SIGTERM, _wrap_cleanup)\n
"},{"location":"api/#autoverify.portfolio.portfolio_runner.PortfolioRunner.evaluate_vbs","title":"evaluate_vbs(instances, *, out_csv=None, vnncompat=False, benchmark=None)","text":"

Evaluate the PF in VBS mode.

Parameters:

Name Type Description Default instances list[VerificationInstance]

Instances to evaluate.

required out_csv Path | None

File to which the results are written.

None vnncompat bool

Use some compat kwargs.

False benchmark str | None

Benchmark name; only used if vnncompat=True.

None Source code in autoverify/portfolio/portfolio_runner.py
def evaluate_vbs(\nself,\ninstances: list[VerificationInstance],\n*,\nout_csv: Path | None = None,\nvnncompat: bool = False,\nbenchmark: str | None = None,\n) -> _VbsResult:\n\"\"\"Evaluate the PF in vbs mode.\n    Arguments:\n        instances: Instances to evaluate.\n        out_csv: File where the results are written to.\n        vnncompat: Use some compat kwargs.\n        benchmark: Only if vnncompat, benchmark name.\n    \"\"\"\nresults: _CostDict = {}\nif vnncompat and benchmark is None:\nraise ValueError(\"Need a benchmark if vnncompat=True\")\nelif not vnncompat and benchmark is not None:\nraise ValueError(\"Only use benchmark if vnncompat=True\")\nfor cv in self._portfolio:\nassert cv.resources is not None\nfor instance in instances:\nverifier = _get_verifier(instance, cv, vnncompat, benchmark)\nlogger.info(f\"{cv.verifier} on {str(instance)}\")\nresult = verifier.verify_instance(instance)\nself._add_result(cv, instance, result, results)\nif out_csv:\nself._csv_log_result(\nout_csv,\nresult,\ninstance,\ncv.verifier,\ncv.configuration,\n)\nreturn self._vbs_from_cost_dict(results)\n
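
A minimal usage sketch of VBS evaluation, assuming a portfolio stored in my_portfolio.json and a list of VerificationInstance objects named my_instances (both hypothetical):

from pathlib import Path\nfrom autoverify.portfolio import Portfolio, PortfolioRunner\n# my_portfolio.json and my_instances are hypothetical\npf = Portfolio.from_json(Path(\"my_portfolio.json\"))\nrunner = PortfolioRunner(pf, vbs_mode=True)\nvbs = runner.evaluate_vbs(my_instances, out_csv=Path(\"vbs_results.csv\"))\nfor instance, (cost, verifier) in vbs.items():\n    print(f\"{instance}: fastest verifier {verifier} took {cost:.2f}s\")\n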
"},{"location":"api/#autoverify.portfolio.portfolio_runner.PortfolioRunner.verify_instances","title":"verify_instances(instances, *, out_csv=None, vnncompat=False, benchmark=None, verifier_kwargs=None, uses_simplified_network=None)","text":"

Run the PF in parallel.

Parameters:

Name Type Description Default instances Iterable[VerificationInstance]

Instances to evaluate.

required out_csv Path | None

File to which the results are written.

None vnncompat bool

Use some compat kwargs.

False benchmark str | None

Benchmark name; only used if vnncompat=True.

None verifier_kwargs dict[str, dict[str, Any]] | None

Kwargs passed to verifiers.

None uses_simplified_network Iterable[str] | None

Have some verifiers use simplified nets.

None Source code in autoverify/portfolio/portfolio_runner.py
def verify_instances(\nself,\ninstances: Iterable[VerificationInstance],\n*,\nout_csv: Path | None = None,\nvnncompat: bool = False,\nbenchmark: str | None = None,\nverifier_kwargs: dict[str, dict[str, Any]] | None = None,\nuses_simplified_network: Iterable[str] | None = None,\n) -> dict[VerificationInstance, VerificationDataResult]:\n\"\"\"Run the PF in parallel.\n    Arguments:\n        instances: Instances to evaluate.\n        out_csv: File where the results are written to.\n        vnncompat: Use some compat kwargs.\n        benchmark: Only if vnncompat, benchmark name.\n        verifier_kwargs: Kwargs passed to verifiers.\n        uses_simplified_network: Have some verifiers use simplified nets.\n    \"\"\"\nif self._vbs_mode:\nraise RuntimeError(\"Function not compatible with vbs_mode\")\nif out_csv:\nout_csv = out_csv.expanduser().resolve()\nresults: dict[VerificationInstance, VerificationDataResult] = {}\nwith concurrent.futures.ThreadPoolExecutor() as executor:\nfor instance in instances:\nlogger.info(f\"Running portfolio on {str(instance)}\")\nfutures: dict[\nFuture[CompleteVerificationResult], ConfiguredVerifier\n] = {}\nself._verifiers = self._get_verifiers(\ninstance,\nvnncompat,\nbenchmark,\nverifier_kwargs,\n)\nis_solved = False\nfor cv in self._portfolio:\nif (\nuses_simplified_network\nand cv.verifier in uses_simplified_network\n):\ntarget_instance = instance.as_simplified_network()\nelse:\ntarget_instance = instance\nfuture = executor.submit(\nself._verifiers[cv].verify_instance, target_instance\n)\nfutures[future] = cv\nfor future in concurrent.futures.as_completed(futures):\nfut_cv = futures[future]\nresult = future.result()\nif not is_solved:\ngot_solved = self._process_result(\nresult, results, fut_cv, instance, self._verifiers\n)\nif got_solved and not is_solved:\nis_solved = True\nif out_csv:\nself._csv_log_result(\nout_csv,\nresult,\ninstance,\nfut_cv.verifier,\nfut_cv.configuration,\n)\nreturn results\n
"},{"location":"api/#util","title":"Util","text":"

Utilities for reading and writing verification instances and results.

VerificationInstance.

Verifier VNNCOMP compatibility.

Return verifier instances that should be compatible with the given benchmark + instance.

"},{"location":"api/#autoverify.util.instances.VerificationDataResult","title":"VerificationDataResult dataclass","text":"

Dataclass holding the data of a single verification run on an instance.

Source code in autoverify/util/instances.py
@dataclass\nclass VerificationDataResult:\n\"\"\"_summary_.\"\"\"\nnetwork: str\nproperty: str\ntimeout: int | None\nverifier: str\nconfig: str\nsuccess: Literal[\"OK\", \"ERR\"]\nresult: VerificationResultString\ntook: float\ncounter_example: str | tuple[str, str] | None\nstderr: str | None\nstdout: str | None\ndef __post_init__(self):\n\"\"\"_summary_.\"\"\"\nif self.config == \"None\":\nself.config = \"default\"\ndef as_csv_row(self) -> list[str]:\n\"\"\"Convert data to a csv row writeable.\"\"\"\nif isinstance(self.counter_example, tuple):\nself.counter_example = \"\\n\".join(self.counter_example)\nreturn [\nself.network,\nself.property,\nstr(self.timeout),\nself.verifier,\nself.config,\nself.success,\nself.result,\nstr(self.took),\nself.counter_example or \"\",\nself.stderr or \"\",\nself.stdout or \"\",\n]\n@classmethod\ndef from_verification_result(\ncls,\nverif_res: CompleteVerificationData,\ninstance_data: dict[str, Any],  # TODO: Narrow Any\n):\n\"\"\"Create from a `CompleteVerificationData`.\"\"\"\n# TODO: Unpack dict\nreturn cls(\ninstance_data[\"network\"],\ninstance_data[\"property\"],\ninstance_data[\"timeout\"],\ninstance_data[\"verifier\"],\ninstance_data[\"config\"],\ninstance_data[\"success\"],\nverif_res.result,\nverif_res.took,\nverif_res.counter_example,\nverif_res.err,\nverif_res.stdout,\n)\n
"},{"location":"api/#autoverify.util.instances.VerificationDataResult.__post_init__","title":"__post_init__()","text":"

Replace a config value of \"None\" with \"default\".

Source code in autoverify/util/instances.py
def __post_init__(self):\n\"\"\"_summary_.\"\"\"\nif self.config == \"None\":\nself.config = \"default\"\n
"},{"location":"api/#autoverify.util.instances.VerificationDataResult.as_csv_row","title":"as_csv_row()","text":"

Convert the data to a writeable CSV row.

Source code in autoverify/util/instances.py
def as_csv_row(self) -> list[str]:\n\"\"\"Convert data to a csv row writeable.\"\"\"\nif isinstance(self.counter_example, tuple):\nself.counter_example = \"\\n\".join(self.counter_example)\nreturn [\nself.network,\nself.property,\nstr(self.timeout),\nself.verifier,\nself.config,\nself.success,\nself.result,\nstr(self.took),\nself.counter_example or \"\",\nself.stderr or \"\",\nself.stdout or \"\",\n]\n
"},{"location":"api/#autoverify.util.instances.VerificationDataResult.from_verification_result","title":"from_verification_result(verif_res, instance_data) classmethod","text":"

Create from a CompleteVerificationData.

Source code in autoverify/util/instances.py
@classmethod\ndef from_verification_result(\ncls,\nverif_res: CompleteVerificationData,\ninstance_data: dict[str, Any],  # TODO: Narrow Any\n):\n\"\"\"Create from a `CompleteVerificationData`.\"\"\"\n# TODO: Unpack dict\nreturn cls(\ninstance_data[\"network\"],\ninstance_data[\"property\"],\ninstance_data[\"timeout\"],\ninstance_data[\"verifier\"],\ninstance_data[\"config\"],\ninstance_data[\"success\"],\nverif_res.result,\nverif_res.took,\nverif_res.counter_example,\nverif_res.err,\nverif_res.stdout,\n)\n
"},{"location":"api/#autoverify.util.instances.csv_append_verification_result","title":"csv_append_verification_result(verification_result, csv_path)","text":"

Append a verification result as a row to a CSV file.

Source code in autoverify/util/instances.py
def csv_append_verification_result(\nverification_result: VerificationDataResult, csv_path: Path\n):\n\"\"\"_summary_.\"\"\"\nwith open(str(csv_path.expanduser()), \"a\") as csv_file:\nwriter = csv.writer(csv_file)\nwriter.writerow(verification_result.as_csv_row())\n
"},{"location":"api/#autoverify.util.instances.init_verification_result_csv","title":"init_verification_result_csv(csv_path)","text":"

Initialize a CSV file with a header row for verification results.

Source code in autoverify/util/instances.py
def init_verification_result_csv(csv_path: Path):\n\"\"\"_summary_.\"\"\"\nwith open(str(csv_path.expanduser()), \"w\") as csv_file:\nwriter = csv.writer(csv_file)\nwriter.writerow(get_dataclass_field_names(VerificationDataResult))\n
"},{"location":"api/#autoverify.util.instances.read_all_vnncomp_instances","title":"read_all_vnncomp_instances(vnncomp_path)","text":"

Reads all benchmarks, see the read_vnncomp_instances docstring.

Source code in autoverify/util/instances.py
def read_all_vnncomp_instances(\nvnncomp_path: Path,\n) -> dict[str, list[VerificationInstance]]:\n\"\"\"Reads all benchmarks, see the `read_vnncomp_instances` docstring.\"\"\"\ninstances: dict[str, list[VerificationInstance]] = {}\nfor path in vnncomp_path.iterdir():\nif not path.is_dir():\ncontinue\ninstances[path.name] = read_vnncomp_instances(path.name, vnncomp_path)\nreturn instances\n
"},{"location":"api/#autoverify.util.instances.read_verification_result_from_csv","title":"read_verification_result_from_csv(csv_path)","text":"

Read a verification results CSV into a list of VerificationDataResult objects.

Source code in autoverify/util/instances.py
def read_verification_result_from_csv(\ncsv_path: Path,\n) -> list[VerificationDataResult]:\n\"\"\"Reads a verification results csv to a list of its dataclass.\"\"\"\nresults_df = pd.read_csv(csv_path)\nverification_results: list[VerificationDataResult] = []\nfor _, row in results_df.iterrows():\nrow = row.to_list()\nverification_results.append(VerificationDataResult(*row))\nreturn verification_results\n
"},{"location":"api/#autoverify.util.instances.read_vnncomp_instances","title":"read_vnncomp_instances(benchmark, vnncomp_path, *, predicate=None, as_smac=False, resolve_paths=True, instance_file_name='instances.csv')","text":"

Read the instances of a VNNCOMP benchmark.

Reads the CSV file of a VNNCOMP benchmark, parsing the network, property and timeout values.

Parameters:

Name Type Description Default benchmark str

The name of the benchmark directory.

required vnncomp_path Path

The path to the VNNCOMP benchmark directory.

required predicate _InstancePredicate | Iterable[_InstancePredicate] | None

A function that, given a VerificationInstance returns either True or False. If False is returned, the VerificationInstance is dropped from the returned instances.

None as_smac bool

Return the instances as smac instance strings.

False

Returns:

Type Description list[VerificationInstance] | list[str]

list[VerificationInstance] | list[str]: A list of VerificationInstance or string objects that hold the network, property and timeout.

Source code in autoverify/util/instances.py
def read_vnncomp_instances(\nbenchmark: str,\nvnncomp_path: Path,\n*,\npredicate: _InstancePredicate | Iterable[_InstancePredicate] | None = None,\nas_smac: bool = False,\nresolve_paths: bool = True,\ninstance_file_name: str = \"instances.csv\",\n) -> list[VerificationInstance] | list[str]:\n\"\"\"Read the instances of a VNNCOMP benchmark.\n    Reads the CSV file of a VNNCOMP benchmark, parsing the network, property and\n    timeout values.\n    Args:\n        benchmark: The name of the benchmark directory.\n        vnncomp_path: The path to the VNNCOMP benchmark directory\n        predicate: A function that, given a `VerificationInstance` returns\n            either True or False. If False is returned, the\n            `VerificationInstance` is dropped from the returned instances.\n        as_smac: Return the instances as smac instance strings.\n    Returns:\n        list[VerificationInstance] | list[str]: A list of\n            `VerificationInstance` or string objects\n            that hold the network, property and timeout.\n    \"\"\"\nif not vnncomp_path.is_dir():\nraise ValueError(\"Could not find VNNCOMP directory\")\nbenchmark_dir = vnncomp_path / benchmark\nif not benchmark_dir.is_dir():\nraise ValueError(\nf\"{benchmark} is not a valid benchmark in {str(vnncomp_path)}\"\n)\ninstances = benchmark_dir / instance_file_name\nverification_instances = []\nif predicate and not isinstance(predicate, Iterable):\npredicate = [predicate]\nwith open(str(instances)) as csv_file:\nreader = csv.reader(csv_file)\nfor row in reader:\nnetwork, property, timeout = row\nnet_path = Path(str(benchmark_dir / network))\nprop_path = Path(str(benchmark_dir / property))\ninstance = VerificationInstance(\nnet_path if not resolve_paths else net_path.resolve(),\nprop_path if not resolve_paths else prop_path.resolve(),\nint(float(timeout)),  # FIXME: Timeouts can be floats\n)\nif predicate and not _passes_at_least_1(predicate, instance):\ncontinue\nif as_smac:\ninstance = instance.as_smac_instance()  # type: ignore\nverification_instances.append(instance)\nreturn verification_instances\n
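
For example, instances can be filtered at read time by passing a predicate. A short sketch, assuming the VNNCOMP 2022 benchmark layout used in the how-to guides:

from pathlib import Path\nfrom autoverify.util.instances import read_vnncomp_instances\n# Keep only instances with a timeout of at most 300 seconds\nshort_instances = read_vnncomp_instances(\n    \"mnist_fc\",\n    vnncomp_path=Path(\"../benchmark/vnncomp2022\"),\n    predicate=lambda inst: inst.timeout <= 300,\n)\n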
"},{"location":"api/#autoverify.util.instances.unique_networks","title":"unique_networks(instances)","text":"

Utility function to keep only unique networks.

Source code in autoverify/util/instances.py
def unique_networks(\ninstances: Iterable[VerificationInstance],\n) -> list[VerificationInstance]:\n\"\"\"Utility function to keep only unique networks.\"\"\"\nunique = []\nseen: set[str] = set()\nfor instance in instances:\nif instance.network.name not in seen:\nunique.append(instance)\nseen.add(instance.network.name)\nreturn unique\n
"},{"location":"api/#autoverify.util.instances.verification_instances_to_smac_instances","title":"verification_instances_to_smac_instances(instances)","text":"

Convert a list of VerificationInstance objects to SMAC instances.

See the as_smac_instance docstring of the VerificationInstance class for more details.

Parameters:

Name Type Description Default instances Sequence[VerificationInstance]

The list of VerificationInstance objects to convert.

required

Returns:

Type Description list[str]

list[str]: The SMAC instance strings.

Source code in autoverify/util/instances.py
def verification_instances_to_smac_instances(\ninstances: Sequence[VerificationInstance],\n) -> list[str]:\n\"\"\"Convert a list of `VerificationInstace` objects to SMAC instances.\n    See the `as_smac_instance` docstring of the `VerificationInstance` class for\n    more details.\n    Args:\n        instances: The list of `VerificationInstance` objects to convert.\n    Returns:\n        list[str]: The SMAC instance strings.\n    \"\"\"\nreturn [inst.as_smac_instance() for inst in instances]\n
"},{"location":"api/#autoverify.util.instances.write_verification_results_to_csv","title":"write_verification_results_to_csv(results, csv_path)","text":"

Write a verification results DataFrame to a CSV file.

Source code in autoverify/util/instances.py
def write_verification_results_to_csv(results: pd.DataFrame, csv_path: Path):\n\"\"\"Writes a verification results df to a csv.\"\"\"\nresults.to_csv(csv_path, index=False)\n
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance","title":"VerificationInstance dataclass","text":"

VerificationInstance class.

Attributes:

Name Type Description network Path

The Path to the network.

property Path

The Path to the property.

timeout int

Maximum wallclock time.

Source code in autoverify/util/verification_instance.py
@dataclass(frozen=True, eq=True)\nclass VerificationInstance:\n\"\"\"VerificationInstance class.\n    Attributes:\n        network: The `Path` to the network.\n        property: The `Path` to the property.\n        timeout: Maximum wallclock time.\n    \"\"\"\nnetwork: Path\nproperty: Path\ntimeout: int\n@classmethod\ndef from_str(cls, str_instance: str):\n\"\"\"Create from a comma seperated string.\"\"\"\nnetwork, property, timeout = str_instance.split(\",\")\nreturn cls(Path(network), Path(property), int(timeout))\ndef __hash__(self):\n\"\"\"Hash the VI.\"\"\"\nreturn hash(\n(\nself.network.expanduser().resolve(),\nself.property.expanduser().resolve(),\nself.timeout,\n)\n)\ndef __str__(self):\n\"\"\"Short string representation of the `VerificationInstance`.\"\"\"\nreturn f\"{self.network.name} :: {self.property.name} :: {self.timeout}\"\ndef as_smac_instance(self) -> str:\n\"\"\"Return the instance in a `f\"{network},{property},{timeout}\"` format.\n        A SMAC instance has to be passed as a single string to the\n        target_function, in which we split the instance string on the comma\n        again to obtain the network, property and timeout.\n        If no timeout is specified, the `DEFAULT_VERIFICATION_TIMEOUT_SEC`\n        global is used.\n        Returns:\n            str: The smac instance string\n        \"\"\"\ntimeout: int = self.timeout or DEFAULT_VERIFICATION_TIMEOUT_SEC\nreturn f\"{str(self.network)},{str(self.property)},{str(timeout)}\"\ndef as_row(self, resolve_paths: bool = True) -> list[str]:\n\"\"\"Returns the instance as a list of strings.\"\"\"\nnet = (\nstr(self.network.expanduser().resolve())\nif resolve_paths\nelse str(self.network)\n)\nprop = (\nstr(self.property.expanduser().resolve())\nif resolve_paths\nelse str(self.property)\n)\nreturn [net, prop, str(self.timeout)]\n# HACK: Assuming some names and layouts here...\ndef as_simplified_network(self) -> VerificationInstance:\n\"\"\"Changes the network path.\n        Assumes a \"onnx_simplified\" dir is present at the same level.\n        \"\"\"\nsimplified_nets_dir = self.network.parent.parent / \"onnx_simplified\"\nreturn VerificationInstance(\nsimplified_nets_dir / self.network.name, self.property, self.timeout\n)\n
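
A minimal construction sketch; the network and property files are hypothetical:

from pathlib import Path\nfrom autoverify.util.verification_instance import VerificationInstance\n# Hypothetical network and property files\ninstance = VerificationInstance(\n    Path(\"my_network.onnx\"), Path(\"my_property.vnnlib\"), timeout=60\n)\nprint(instance)  # my_network.onnx :: my_property.vnnlib :: 60\n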
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.__hash__","title":"__hash__()","text":"

Hash the VI.

Source code in autoverify/util/verification_instance.py
def __hash__(self):\n\"\"\"Hash the VI.\"\"\"\nreturn hash(\n(\nself.network.expanduser().resolve(),\nself.property.expanduser().resolve(),\nself.timeout,\n)\n)\n
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.__str__","title":"__str__()","text":"

Short string representation of the VerificationInstance.

Source code in autoverify/util/verification_instance.py
def __str__(self):\n\"\"\"Short string representation of the `VerificationInstance`.\"\"\"\nreturn f\"{self.network.name} :: {self.property.name} :: {self.timeout}\"\n
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.as_row","title":"as_row(resolve_paths=True)","text":"

Returns the instance as a list of strings.

Source code in autoverify/util/verification_instance.py
def as_row(self, resolve_paths: bool = True) -> list[str]:\n\"\"\"Returns the instance as a list of strings.\"\"\"\nnet = (\nstr(self.network.expanduser().resolve())\nif resolve_paths\nelse str(self.network)\n)\nprop = (\nstr(self.property.expanduser().resolve())\nif resolve_paths\nelse str(self.property)\n)\nreturn [net, prop, str(self.timeout)]\n
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.as_simplified_network","title":"as_simplified_network()","text":"

Changes the network path.

Assumes a \"onnx_simplified\" dir is present at the same level.

Source code in autoverify/util/verification_instance.py
def as_simplified_network(self) -> VerificationInstance:\n\"\"\"Changes the network path.\n    Assumes a \"onnx_simplified\" dir is present at the same level.\n    \"\"\"\nsimplified_nets_dir = self.network.parent.parent / \"onnx_simplified\"\nreturn VerificationInstance(\nsimplified_nets_dir / self.network.name, self.property, self.timeout\n)\n
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.as_smac_instance","title":"as_smac_instance()","text":"

Return the instance in a f\"{network},{property},{timeout}\" format.

A SMAC instance has to be passed as a single string to the target_function, in which we split the instance string on the comma again to obtain the network, property and timeout.

If no timeout is specified, the DEFAULT_VERIFICATION_TIMEOUT_SEC global is used.

Returns:

Name Type Description str str

The SMAC instance string.

Source code in autoverify/util/verification_instance.py
def as_smac_instance(self) -> str:\n\"\"\"Return the instance in a `f\"{network},{property},{timeout}\"` format.\n    A SMAC instance has to be passed as a single string to the\n    target_function, in which we split the instance string on the comma\n    again to obtain the network, property and timeout.\n    If no timeout is specified, the `DEFAULT_VERIFICATION_TIMEOUT_SEC`\n    global is used.\n    Returns:\n        str: The smac instance string\n    \"\"\"\ntimeout: int = self.timeout or DEFAULT_VERIFICATION_TIMEOUT_SEC\nreturn f\"{str(self.network)},{str(self.property)},{str(timeout)}\"\n
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.from_str","title":"from_str(str_instance) classmethod","text":"

Create from a comma-separated string.

Source code in autoverify/util/verification_instance.py
@classmethod\ndef from_str(cls, str_instance: str):\n\"\"\"Create from a comma seperated string.\"\"\"\nnetwork, property, timeout = str_instance.split(\",\")\nreturn cls(Path(network), Path(property), int(timeout))\n
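
For example, from_str and as_smac_instance round-trip the same comma-separated format (the paths are hypothetical):

from autoverify.util.verification_instance import VerificationInstance\ninst = VerificationInstance.from_str(\"my_network.onnx,my_property.vnnlib,60\")\nassert inst.as_smac_instance() == \"my_network.onnx,my_property.vnnlib,60\"\n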
"},{"location":"api/#autoverify.util.vnncomp.inst_bench_to_kwargs","title":"inst_bench_to_kwargs(benchmark, verifier, instance)","text":"

Get the kwargs for a benchmark.

Source code in autoverify/util/vnncomp.py
def inst_bench_to_kwargs(\nbenchmark: str,\nverifier: str,\ninstance: VerificationInstance,\n) -> dict[str, Any]:\n\"\"\"Get the kwargs for a benchmark.\"\"\"\nif verifier == \"nnenum\":\nreturn {\"use_auto_settings\": True}\nelif verifier == \"abcrown\":  # TODO: All benchmarks\nif benchmark == \"acasxu\":\nreturn {\"yaml_override\": {\"data__num_outputs\": 5}}\nelif benchmark.startswith(\"sri_resnet_\"):\nreturn {\n\"yaml_override\": {\n\"model__onnx_quirks\": \"{'Reshape': {'fix_batch_size': True}}\"  # noqa: E501\n}\n}\nreturn {}\nelif verifier == \"ovalbab\":\nreturn {}\nelif verifier == \"verinet\":\nif benchmark == \"acasxu\":\nreturn {\"transpose_matmul_weights\": True}\nelif benchmark == \"cifar2020\":\nif instance.network.name.find(\"convBigRELU\") >= 0:\nreturn {\"dnnv_simplify\": True}\nelif benchmark == \"cifar100_tinyimagenet_resnet\":\nreturn {\"dnnv_simplify\": True}\nelif benchmark == \"nn4sys\":\nif instance.network.name == \"lindex.onnx\":\nreturn {\"dnnv_simplify\": True}\nreturn {}\nraise ValueError(\"Invalid verifier\")\n
"},{"location":"api/#autoverify.util.vnncomp.inst_bench_to_verifier","title":"inst_bench_to_verifier(benchmark, instance, verifier, allocation=None)","text":"

Get an instantiated verifier.

Source code in autoverify/util/vnncomp.py
def inst_bench_to_verifier(\nbenchmark: str,\ninstance: VerificationInstance,\nverifier: str,\nallocation: tuple[int, int, int] | None = None,\n) -> CompleteVerifier:\n\"\"\"Get an instantiated verifier.\"\"\"\nverifier_inst = verifier_from_name(verifier)(\n**inst_bench_to_kwargs(benchmark, verifier, instance),\ncpu_gpu_allocation=allocation,\n)\nassert isinstance(verifier_inst, CompleteVerifier)\nreturn verifier_inst\n
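
A short sketch of verifying a single VNNCOMP instance with the benchmark-specific settings; it assumes the acasxu benchmark is available locally and nnenum is installed:

from pathlib import Path\nfrom autoverify.util.instances import read_vnncomp_instances\nfrom autoverify.util.vnncomp import inst_bench_to_verifier\ninstances = read_vnncomp_instances(\n    \"acasxu\", vnncomp_path=Path(\"../benchmark/vnncomp2022\")\n)\n# Instantiate nnenum with the kwargs suggested for this benchmark\nverifier = inst_bench_to_verifier(\"acasxu\", instances[0], \"nnenum\")\nresult = verifier.verify_instance(instances[0])\n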
"},{"location":"api/#autoverify.util.vnncomp.inst_bench_verifier_config","title":"inst_bench_verifier_config(benchmark, instance, verifier, configs_dir)","text":"

Return the verifier and the VNNCOMP config.

Source code in autoverify/util/vnncomp.py
def inst_bench_verifier_config(\nbenchmark: str,\ninstance: VerificationInstance,\nverifier: str,\nconfigs_dir: Path,\n) -> Configuration | Path | None:\n\"\"\"Return the verifier and the VNNCOMP config.\"\"\"\nif benchmark == \"test_props\":\nreturn None\nif verifier == \"nnenum\":\nreturn None\nelif verifier == \"abcrown\":\ncfg_file = _get_abcrown_config(benchmark, instance)\nreturn Path(configs_dir / \"abcrown\" / cfg_file)\nelif verifier == \"ovalbab\":\ncfg = _get_ovalbab_config(benchmark)\nif cfg is not None:\nreturn Path(configs_dir / \"ovalbab\" / cfg)\nreturn cfg\nelif verifier == \"verinet\":\nreturn _get_verinet_config(benchmark)\nraise ValueError(\"Invalid verifier\")\n
"},{"location":"how-to-guides/","title":"How-To-Guides","text":""},{"location":"how-to-guides/#verifying-properties","title":"Verifying Properties","text":""},{"location":"how-to-guides/#simple-example","title":"Simple Example","text":"

After installing one or more verifiers, here is how to use them to verify a property. Networks should be in the ONNX format, properties in the VNNLIB format.

from pathlib import Path\nfrom result import Err, Ok\nfrom autoverify.verifier import AbCrown\nif __name__ == \"__main__\":\nverifier = AbCrown()\nnetwork = Path(\"my_network.onnx\")\nprop = Path(\"my_property.vnnlib\")\nresult = verifier.verify_property(network, prop)\nif isinstance(result, Ok):\noutcome = result.unwrap().result\nprint(\"Verification finished, result:\", outcome)\nelif isinstance(result, Err):\nprint(\"Error during verification:\")\nprint(result.unwrap_err().stdout)\n
"},{"location":"how-to-guides/#running-vnncomp-benchmarks","title":"Running VNNCOMP Benchmarks","text":"

Auto-Verify supports reading benchmarks defined in VNNCOMP style, which are benchmarks with the following structure:

vnncomp2022\n\u2514\u2500\u2500 test_props\n    \u251c\u2500\u2500 instances.csv\n    \u251c\u2500\u2500 onnx\n    \u2502\u00a0\u00a0 \u251c\u2500\u2500 test_nano.onnx\n    \u2502\u00a0\u00a0 \u251c\u2500\u2500 test_sat.onnx\n    \u2502\u00a0\u00a0 \u2514\u2500\u2500 test_unsat.onnx\n    \u2514\u2500\u2500 vnnlib\n        \u251c\u2500\u2500 test_nano.vnnlib\n        \u2514\u2500\u2500 test_prop.vnnlib\n

Here, instances.csv is a CSV file with 3 columns: network, property, timeout. For example, the test_props directory contains the following 3 verification instances:

onnx/test_sat.onnx,vnnlib/test_prop.vnnlib,60\nonnx/test_unsat.onnx,vnnlib/test_prop.vnnlib,60\nonnx/test_nano.onnx,vnnlib/test_nano.vnnlib,60\n

VNNCOMP benchmarks can be found at the following links: 2022, 2023. Make sure to unzip all files inside the benchmark after downloading it.

Below is a code snippet that runs this benchmark. Note the Path pointing to the benchmark location.

from pathlib import Path\nfrom result import Err, Ok\nfrom autoverify.util.instances import read_vnncomp_instances\nfrom autoverify.verifier import Nnenum\ntest_props = read_vnncomp_instances(\n\"test_props\", vnncomp_path=Path(\"../benchmark/vnncomp2022\")\n)\nif __name__ == \"__main__\":\nverifier = Nnenum()\nfor instance in test_props:\nprint(instance)\nresult = verifier.verify_instance(instance)\nif isinstance(result, Ok):\noutcome = result.unwrap().result\nprint(\"Verification finished, result:\", outcome)\nelif isinstance(result, Err):\nprint(\"Error during verification:\")\nprint(result.unwrap_err().stdout)\n
"},{"location":"how-to-guides/#algorithm-configuration","title":"Algorithm Configuration","text":"

Each of the verification tools comes equipped with a ConfigurationSpace, which can be used to sample Configurations for that tool. For example:

from autoverify.verifier import Nnenum\nif __name__ == \"__main__\":\nverifier = Nnenum()\nconfig = verifier.config_space.sample_configuration()\nprint(config)\n
"},{"location":"how-to-guides/#smac-example","title":"SMAC Example","text":"

We can apply algorithm configuration techniques (or hyperparameter optimization) using SMAC. In the example below, we try to find a configuration for AB-CROWN on the first 10 instances of the mnist_fc benchmark from VNNCOMP2022.

import sys\nfrom pathlib import Path\nfrom ConfigSpace import Configuration\nfrom result import Err, Ok\nfrom smac import AlgorithmConfigurationFacade, Scenario\nfrom autoverify.util.instances import (\nread_vnncomp_instances,\nverification_instances_to_smac_instances,\n)\nfrom autoverify.util.smac import index_features\nfrom autoverify.util.verification_instance import VerificationInstance\nfrom autoverify.verifier import AbCrown\nmnist = read_vnncomp_instances(\"mnist_fc\", vnncomp_path=Path(\"../benchmark/vnncomp2022\"))[:10]\nverifier = AbCrown()\ndef target_function(config: Configuration, instance: str, seed: int):\nseed += 1  # Mute unused var warnings; (cant rename to _)\nverif_inst = VerificationInstance.from_str(instance)\nresult = verifier.verify_instance(verif_inst, config=config)\n# SMAC handles exception by setting cost to `inf`\nverification_result = result.unwrap_or_raise(Exception)\nreturn float(verification_result.took)\nif __name__ == \"__main__\":\ncfg_space = verifier.config_space\nname = verifier.name\ninstances = verification_instances_to_smac_instances(mnist)\nscenario = Scenario(\ncfg_space,\nname=\"ab_tune\",\ninstances=instances,\ninstance_features=index_features(instances),\nwalltime_limit=600,\noutput_directory=Path(\"ab_tune_out\"),\nn_trials=sys.maxsize,  # Using walltime limit instead\n)\nsmac = AlgorithmConfigurationFacade(\nscenario, target_function, overwrite=True\n)\ninc = smac.optimize()\nprint(inc)\n

For more examples on how to use SMAC, please refer to the SMAC documentation.

"},{"location":"how-to-guides/#parallel-portfolios","title":"Parallel Portfolios","text":"

Note

Custom verification tools are not yet supported for parallel portfolios.

"},{"location":"how-to-guides/#constructiong-a-portfolio","title":"Constructiong a Portfolio","text":"

Portfolios can be constructed as shown below. In this example, we construct a portfolio using the Hydra algorithm on the mnist_fc benchmark from VNNCOMP2022. We include four verification tools and give the procedure a walltime limit of 24 hours.

from pathlib import Path\nfrom autoverify.portfolio import Hydra, PortfolioScenario\nfrom autoverify.util.instances import read_vnncomp_instances\nbenchmark = read_vnncomp_instances(\"mnist_fc\", vnncomp_path=Path(\"../benchmark/vnncomp2022\"))\nif __name__ == \"__main__\":\npf_scenario = PortfolioScenario(\n[\"nnenum\", \"abcrown\", \"ovalbab\", \"verinet\"],\n[\n(\"nnenum\", 0, 0),\n(\"verinet\", 0, 1),\n(\"abcrown\", 0, 1),\n(\"ovalbab\", 0, 1),\n],\nbenchmark,\n4,\n(60 * 60 * 24) / 4,\nalpha=0.9,\noutput_dir=Path(\"PF_mnist_fc\"),\n)\nhydra = Hydra(pf_scenario)\npf = hydra.tune_portfolio()\npf.to_json(Path(\"mnist_fc_portfolio.json\"))\n
"},{"location":"how-to-guides/#running-a-portfolio","title":"Running a Portfolio","text":"

Portfolios can be read from a JSON file or specified directly in Python code. Below is an example of how to run a portfolio in parallel on some instances. Let's take the portfolio we created in the previous example and run it on the same benchmark.

from pathlib import Path\nfrom autoverify.portfolio import Portfolio, PortfolioRunner\nfrom autoverify.util.instances import read_vnncomp_instances\nbenchmark = read_vnncomp_instances(\"mnist_fc\", vnncomp_path=Path(\"../benchmark/vnncomp2022\"))\nif __name__ == \"__main__\":\nmnist_pf = Portfolio.from_json(Path(\"mnist_fc_portfolio.json\"))\npf_runner = PortfolioRunner(mnist_pf)\npf_runner.verify_instances(\nbenchmark,\nout_csv=Path(\"PF_mnist_fc_results.csv\"),\n)\n
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Auto-Verify","text":"

Auto-Verify is a tool that provides interfaces, parameter spaces and installation managers for different neural network verification tools.

"},{"location":"#getting-started","title":"Getting Started","text":"

Warning

Auto-Verify has only been tested for Linux and will not work on MacOS and Windows.

"},{"location":"#installing-auto-verify","title":"Installing Auto-Verify","text":"

First, install Miniconda. Miniconda is used to manage the environments of different verification tools, other environment managers will not work.

Warning

Anaconda can fail trying to install environments in some cases where Miniconda does not.

After Miniconda is installed, setup Auto-Verify by running the following commands:

> conda create -n auto-verify python=3.10\n> conda activate auto-verify\n> pip install auto-verify\n

To check if the installation was successful, run:

> auto-verify --version\n
"},{"location":"#installing-verification-tools","title":"Installing Verification Tools","text":"

Currently, Auto-Verify supports the following verifiers:

  • nnenum (Stanley Bak)
  • AB-Crown (Zhang et al)
  • VeriNet (VAS Group)
  • Oval-BaB (OVAL Research Group)

These verifiers can be installed as follows:

> auto-verify install nnenum\n> auto-verify install abcrown\n> auto-verify install verinet\n> auto-verify install ovalbab\n

To uninstall a verifier, run:

> auto-verify uninstall [verifier]\n
"},{"location":"api/","title":"API","text":""},{"location":"api/#verifier","title":"Verifier","text":"

Base verifier class.

Classes for data about verification.

"},{"location":"api/#autoverify.verifier.verifier.CompleteVerifier","title":"CompleteVerifier","text":"

Bases: Verifier

Abstract class for complete verifiers.

Source code in autoverify/verifier/verifier.py
class CompleteVerifier(Verifier):\n\"\"\"Abstract class for complete verifiers.\"\"\"\ndef verify_property(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Configuration | Path | None = None,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> CompleteVerificationResult:\n\"\"\"Verify the property on the network.\n        Runs the verifier and feeds the network/property instance as input.\n        Complete verification will result in one of the following three\n        possibilities: `SAT`, `UNSAT`, `TIMEOUT`.\n        Args:\n            network: The `Path` to the network in `.onnx` format.\n            property: The `Path` to the property in `.vnnlib` format.\n            config: The configuration of the verification tool to be used. If\n                `None` is passed, the default configuration of the verification\n                tool will be used.\n            timeout: The maximum number of seconds that can be spent on the\n                verification query.\n        Returns:\n            CompleteVerificationResult: A `Result` object containing information\n                about the verification attempt. TODO: Link docs or something\n        \"\"\"\nnetwork, property = network.resolve(), property.resolve()\nself._check_instance(network, property)\nif config is None:\nconfig = self.default_config\n# Tools use different configuration formats and methods, so we let\n# them do some initialization here\nconfig = self._init_config(network, property, config)\nrun_cmd, output_file = self._get_run_cmd(\nnetwork, property, config=config, timeout=timeout\n)\noutcome = self._run_verification(\nrun_cmd,\nresult_file=output_file,\ntimeout=timeout,\n)\n# Shutting down after timeout may take some time, so we set the took\n# value to the actual timeout\nif outcome.result == \"TIMEOUT\":\noutcome.took = timeout\n# TODO: What is the point of wrapping in Ok/Err here\nreturn Ok(outcome) if outcome.result != \"ERR\" else Err(outcome)\ndef verify_instance(\nself,\ninstance: VerificationInstance,\n*,\nconfig: Configuration | Path | None = None,\n) -> CompleteVerificationResult:\n\"\"\"See the `verify_property` docstring.\"\"\"\nreturn self.verify_property(\ninstance.network,\ninstance.property,\ntimeout=instance.timeout,\nconfig=config,\n)\ndef verify_batch(\nself,\ninstances: Iterable[VerificationInstance],\n*,\nconfig: Configuration | Path | None = None,\n) -> list[CompleteVerificationResult]:\n\"\"\"Verify a batch. 
Not yet implemented.\"\"\"\nfor instance in instances:\nself._check_instance(instance.network, instance.property)\nif config is None:\nconfig = self.default_config\nreturn self._verify_batch(\ninstances,\nconfig=config,\n)\n@abstractmethod\ndef _verify_batch(\nself,\ninstances: Iterable[VerificationInstance],\n*,\nconfig: Configuration | Path | None,\n) -> list[CompleteVerificationResult]:\nraise NotImplementedError\ndef _allocate_run_cmd(\nself,\nrun_cmd: str,\ncontexts: list[ContextManager[None]],\n) -> str:\n# TODO: GPU allocation\nassert self._cpu_gpu_allocation is not None\ntaskset_cmd = taskset_cpu_range(self._cpu_gpu_allocation[0:2])\nlines = []\ngpu_dev = self._cpu_gpu_allocation[2]\ngpus = nvidia_gpu_count()\nif gpu_dev > gpus - 1:\nraise ValueError(\nf\"Asked for GPU {gpu_dev} (0-indexed), \"\nf\"but only found {gpus} GPU(s)\"\n)\nif gpu_dev >= 0:\ncontexts.append(environment(CUDA_VISIBLE_DEVICES=str(gpu_dev)))\nfor line in run_cmd.splitlines():\nline = line.lstrip()\nif len(line) == 0 or line.isspace():\ncontinue\n# HACK: Why does taskset not work with `source` and `conda`?\nif line.startswith(\"source\") or line.startswith(\"conda\"):\nlines.append(line)\nelse:\nlines.append(taskset_cmd + \" \" + line)\nreturn \"\\n\".join(lines)\ndef set_timeout_event(self):\n\"\"\"Signal that the process has timed out.\"\"\"\ntry:\nself._timeout_event.set()  # type: ignore\nexcept AttributeError:\npass\ndef _run_verification(\nself,\nrun_cmd: str,\n*,\nresult_file: Path | None = None,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> CompleteVerificationData:\ncontexts = self.contexts or []\noutput_lines: list[str] = []\nresult: str = \"\"\nif self._cpu_gpu_allocation is not None:\nrun_cmd = self._allocate_run_cmd(run_cmd, contexts)\nwith ExitStack() as stack:\nfor context in contexts:\nstack.enter_context(context)\nprocess = subprocess.Popen(\nrun_cmd,\nexecutable=\"/bin/bash\",\nstdout=subprocess.PIPE,\nstderr=subprocess.STDOUT,\nshell=True,\nuniversal_newlines=True,\npreexec_fn=os.setsid,\n)\nbefore_t = time.time()\nself._timeout_event: threading.Event | None = threading.Event()\ndef _terminate(timeout_sec):\nassert self._timeout_event\non_time = self._timeout_event.wait(timeout_sec)\nif not on_time:\nglobal result\nresult = \"TIMEOUT\"  # type: ignore\nif pid_exists(process.pid):\nos.killpg(os.getpgid(process.pid), signal.SIGTERM)\nt = threading.Thread(target=_terminate, args=[timeout])\nt.start()\nassert process.stdout\nfor line in iter(process.stdout.readline, \"\"):\noutput_lines.append(line)\nprocess.stdout.close()\nreturn_code = process.wait()\ntook_t = time.time() - before_t\nself._timeout_event.set()\noutput_str = \"\".join(output_lines)\ncounter_example: str | None = None\nif result != \"TIMEOUT\":\nif return_code > 0:\nresult = \"ERR\"\nelse:\nresult, counter_example = self._parse_result(\noutput_str, result_file\n)\nself._timeout_event = None\nreturn CompleteVerificationData(\nresult,  # type: ignore\ntook_t,\ncounter_example,\n\"\",  # TODO: Remove err field; its piped it to stdout\noutput_str,\n)\nraise RuntimeError(\"Exception during handling of verification\")\n
"},{"location":"api/#autoverify.verifier.verifier.CompleteVerifier.set_timeout_event","title":"set_timeout_event()","text":"

Signal that the process has timed out.

Source code in autoverify/verifier/verifier.py
def set_timeout_event(self):\n\"\"\"Signal that the process has timed out.\"\"\"\ntry:\nself._timeout_event.set()  # type: ignore\nexcept AttributeError:\npass\n
"},{"location":"api/#autoverify.verifier.verifier.CompleteVerifier.verify_batch","title":"verify_batch(instances, *, config=None)","text":"

Verify a batch. Not yet implemented.

Source code in autoverify/verifier/verifier.py
def verify_batch(\nself,\ninstances: Iterable[VerificationInstance],\n*,\nconfig: Configuration | Path | None = None,\n) -> list[CompleteVerificationResult]:\n\"\"\"Verify a batch. Not yet implemented.\"\"\"\nfor instance in instances:\nself._check_instance(instance.network, instance.property)\nif config is None:\nconfig = self.default_config\nreturn self._verify_batch(\ninstances,\nconfig=config,\n)\n
"},{"location":"api/#autoverify.verifier.verifier.CompleteVerifier.verify_instance","title":"verify_instance(instance, *, config=None)","text":"

See the verify_property docstring.

Source code in autoverify/verifier/verifier.py
def verify_instance(\nself,\ninstance: VerificationInstance,\n*,\nconfig: Configuration | Path | None = None,\n) -> CompleteVerificationResult:\n\"\"\"See the `verify_property` docstring.\"\"\"\nreturn self.verify_property(\ninstance.network,\ninstance.property,\ntimeout=instance.timeout,\nconfig=config,\n)\n
"},{"location":"api/#autoverify.verifier.verifier.CompleteVerifier.verify_property","title":"verify_property(network, property, *, config=None, timeout=DEFAULT_VERIFICATION_TIMEOUT_SEC)","text":"

Verify the property on the network.

Runs the verifier and feeds the network/property instance as input. Complete verification will result in one of the following three possibilities: SAT, UNSAT, TIMEOUT.

Parameters:

Name Type Description Default network Path

The Path to the network in .onnx format.

required property Path

The Path to the property in .vnnlib format.

required config Configuration | Path | None

The configuration of the verification tool to be used. If None is passed, the default configuration of the verification tool will be used.

None timeout int

The maximum number of seconds that can be spent on the verification query.

DEFAULT_VERIFICATION_TIMEOUT_SEC

Returns:

Name Type Description CompleteVerificationResult CompleteVerificationResult

A Result object containing information about the verification attempt. TODO: Link docs or something

Source code in autoverify/verifier/verifier.py
def verify_property(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Configuration | Path | None = None,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> CompleteVerificationResult:\n\"\"\"Verify the property on the network.\n    Runs the verifier and feeds the network/property instance as input.\n    Complete verification will result in one of the following three\n    possibilities: `SAT`, `UNSAT`, `TIMEOUT`.\n    Args:\n        network: The `Path` to the network in `.onnx` format.\n        property: The `Path` to the property in `.vnnlib` format.\n        config: The configuration of the verification tool to be used. If\n            `None` is passed, the default configuration of the verification\n            tool will be used.\n        timeout: The maximum number of seconds that can be spent on the\n            verification query.\n    Returns:\n        CompleteVerificationResult: A `Result` object containing information\n            about the verification attempt. TODO: Link docs or something\n    \"\"\"\nnetwork, property = network.resolve(), property.resolve()\nself._check_instance(network, property)\nif config is None:\nconfig = self.default_config\n# Tools use different configuration formats and methods, so we let\n# them do some initialization here\nconfig = self._init_config(network, property, config)\nrun_cmd, output_file = self._get_run_cmd(\nnetwork, property, config=config, timeout=timeout\n)\noutcome = self._run_verification(\nrun_cmd,\nresult_file=output_file,\ntimeout=timeout,\n)\n# Shutting down after timeout may take some time, so we set the took\n# value to the actual timeout\nif outcome.result == \"TIMEOUT\":\noutcome.took = timeout\n# TODO: What is the point of wrapping in Ok/Err here\nreturn Ok(outcome) if outcome.result != \"ERR\" else Err(outcome)\n
"},{"location":"api/#autoverify.verifier.verifier.Verifier","title":"Verifier","text":"

Bases: ABC

Abstract class to represent a verifier tool.

Source code in autoverify/verifier/verifier.py
class Verifier(ABC):\n\"\"\"Abstract class to represent a verifier tool.\"\"\"\n# TODO: GPU Mode attribute\ndef __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\n):\n\"\"\"New instance. This is used with super calls.\"\"\"\nself._batch_size = batch_size\nself._cpu_gpu_allocation = cpu_gpu_allocation\ndef get_init_attributes(self) -> dict[str, Any]:\n\"\"\"Get attributes provided during initialization of the verifier.\"\"\"\nreturn {\n\"batch_size\": self._batch_size,\n}\n@property\n@abstractmethod\ndef name(self) -> str:\n\"\"\"Unique verifier name.\"\"\"\nraise NotImplementedError\n@property\n@abstractmethod\ndef config_space(self) -> ConfigurationSpace:\n\"\"\"Configuration space to sample from.\"\"\"\nraise NotImplementedError\n@property\n@abstractmethod\ndef contexts(self) -> list[ContextManager[None]] | None:\n\"\"\"Contexts to run the verification in.\"\"\"\nraise NotImplementedError\n@property\ndef tool_path(self) -> Path:\n\"\"\"The path where the verifier is installed.\"\"\"\ntool_path = VERIFIER_DIR / self.name / TOOL_DIR_NAME\nif not tool_path.exists():\nraise FileNotFoundError(\nf\"Could not find installation for tool {self.name}\"\n)\nreturn Path(tool_path)  # mypy complains tool_path is any\n@property\ndef conda_env_name(self) -> str:\n\"\"\"The conda environment name associated with the verifier.\"\"\"\nreturn get_verifier_conda_env_name(self.name)\n@property\ndef conda_lib_path(self) -> Path:\nreturn get_conda_env_lib_path(self.conda_env_name)\n@property\ndef default_config(self) -> Configuration:\n\"\"\"Return the default configuration.\"\"\"\nreturn self.config_space.get_default_configuration()\n@abstractmethod\ndef _get_run_cmd(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Any,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> tuple[str, Path | None]:\n\"\"\"Get the cli command to run the verification.\"\"\"\nraise NotImplementedError\n@abstractmethod\ndef _parse_result(\nself,\noutput: str,\nresult_file: Path | None,\n) -> tuple[VerificationResultString, str | None]:\n\"\"\"Parse the output to get the result.\"\"\"\nraise NotImplementedError\ndef _init_config(\nself,\nnetwork: Path,\nproperty: Path,\nconfig: Configuration | Path,\n) -> Any:\n\"\"\"Init the config, return type that is needed.\"\"\"\nreturn config\n# TODO: Overload like in ConfigSpace to distinguish between return types\ndef sample_configuration(\nself, *, size: int = 1\n) -> Configuration | list[Configuration]:\n\"\"\"Sample one or more configurations.\n        Args:\n            size: The number of configurations to sample.\n        Returns:\n            Configuration | list[Configuration]: The sampled configuration(s).\n        \"\"\"\nreturn self.config_space.sample_configuration(size=size)\n@staticmethod\ndef is_same_config(config1: Any, config2: Any) -> bool:\n\"\"\"Check if two configs are the same.\"\"\"\nraise NotImplementedError\n# TODO: Make this a function in util/\n@staticmethod\ndef _check_instance(network: Path, property: Path):\nif not check_file_extension(network, \".onnx\"):\nraise ValueError(\"Network should be in onnx format\")\nif not check_file_extension(property, \".vnnlib\"):\nraise ValueError(\"Property should be in vnnlib format\")\n
"},{"location":"api/#autoverify.verifier.verifier.Verifier.conda_env_name","title":"conda_env_name: str property","text":"

The conda environment name associated with the verifier.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.config_space","title":"config_space: ConfigurationSpace abstractmethod property","text":"

Configuration space to sample from.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.contexts","title":"contexts: list[ContextManager[None]] | None abstractmethod property","text":"

Contexts to run the verification in.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.default_config","title":"default_config: Configuration property","text":"

Return the default configuration.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.name","title":"name: str abstractmethod property","text":"

Unique verifier name.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.tool_path","title":"tool_path: Path property","text":"

The path where the verifier is installed.

"},{"location":"api/#autoverify.verifier.verifier.Verifier.__init__","title":"__init__(batch_size=512, cpu_gpu_allocation=None)","text":"

New instance. This is used with super calls.

Source code in autoverify/verifier/verifier.py
def __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\n):\n\"\"\"New instance. This is used with super calls.\"\"\"\nself._batch_size = batch_size\nself._cpu_gpu_allocation = cpu_gpu_allocation\n
"},{"location":"api/#autoverify.verifier.verifier.Verifier.get_init_attributes","title":"get_init_attributes()","text":"

Get attributes provided during initialization of the verifier.

Source code in autoverify/verifier/verifier.py
def get_init_attributes(self) -> dict[str, Any]:\n\"\"\"Get attributes provided during initialization of the verifier.\"\"\"\nreturn {\n\"batch_size\": self._batch_size,\n}\n
"},{"location":"api/#autoverify.verifier.verifier.Verifier.is_same_config","title":"is_same_config(config1, config2) staticmethod","text":"

Check if two configs are the same.

Source code in autoverify/verifier/verifier.py
@staticmethod\ndef is_same_config(config1: Any, config2: Any) -> bool:\n\"\"\"Check if two configs are the same.\"\"\"\nraise NotImplementedError\n
"},{"location":"api/#autoverify.verifier.verifier.Verifier.sample_configuration","title":"sample_configuration(*, size=1)","text":"

Sample one or more configurations.

Parameters:

size (int): The number of configurations to sample. Default: 1

Returns:

Configuration | list[Configuration]: The sampled configuration(s).

Source code in autoverify/verifier/verifier.py
def sample_configuration(\nself, *, size: int = 1\n) -> Configuration | list[Configuration]:\n\"\"\"Sample one or more configurations.\n    Args:\n        size: The number of configurations to sample.\n    Returns:\n        Configuration | list[Configuration]: The sampled configuration(s).\n    \"\"\"\nreturn self.config_space.sample_configuration(size=size)\n
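As a short sketch, sampling from a verifier's configuration space (reusing the hypothetical verifier instance from the earlier example):

single = verifier.sample_configuration()       # one Configuration
batch = verifier.sample_configuration(size=5)  # a list of 5 Configurations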
"},{"location":"api/#autoverify.verifier.verification_result.CompleteVerificationData","title":"CompleteVerificationData dataclass","text":"

Class holding data about a verification run.

Attributes:

result (VerificationResultString): Outcome (e.g. SAT, UNSAT...).

took (float): Wall-clock time spent, in seconds.

counter_example (str | None): Example that violates the property (if SAT).

err (str): stderr output.

stdout (str): stdout output.

Source code in autoverify/verifier/verification_result.py
@dataclass\nclass CompleteVerificationData:\n\"\"\"Class holding data about a verification run.\n    Attributes:\n        result: Outcome (e.g. SAT, UNSAT...)\n        took: Walltime spent\n        counter_example: Example that violates property (if SAT)\n        err: stderr\n        stdout: stdout\n    \"\"\"\nresult: VerificationResultString\ntook: float\ncounter_example: str | None = None\nerr: str = \"\"\nstdout: str = \"\"\n
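A sketch of reading these fields after a run, continuing the hypothetical result from the verify_property example above:

data = result.unwrap()  # CompleteVerificationData
print(data.result)      # "SAT", "UNSAT" or "TIMEOUT"
print(data.took)        # wall-clock time in seconds
if data.result == "SAT":
    print(data.counter_example)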
"},{"location":"api/#ab-crown","title":"AB-Crown","text":""},{"location":"api/#autoverify.verifier.complete.abcrown.AbCrown","title":"AbCrown","text":"

Bases: CompleteVerifier

AB-Crown.

Source code in autoverify/verifier/complete/abcrown/verifier.py
class AbCrown(CompleteVerifier):\n\"\"\"AB-Crown.\"\"\"\nname: str = \"abcrown\"\nconfig_space: ConfigurationSpace = AbCrownConfigspace\ndef __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\nyaml_override: dict[str, Any] | None = None,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"AB-Crown CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._yaml_override = yaml_override\n@property\ndef contexts(self) -> list[ContextManager[None]]:\n# TODO: Narrow the pkill_match_list patterns further. People may be\n# running scripts called 'abcrown.py'\n# Ideally just keep track of PIDs rather than pkill name matching\nreturn [\ncwd(self.tool_path / \"complete_verifier\"),\npkill_matches([\"python abcrown.py\"]),\nenvironment(\nLD_LIBRARY_PATH=str(\nfind_conda_lib(self.conda_env_name, \"libcudart.so.11.0\")\n)\n),\n]\ndef _parse_result(\nself,\noutput: str,\nresult_file: Path | None,\n) -> tuple[VerificationResultString, str | None]:\nif find_substring(\"Result: sat\", output):\nwith open(str(result_file), \"r\") as f:\ncounter_example = f.read()\nreturn \"SAT\", counter_example\nelif find_substring(\"Result: unsat\", output):\nreturn \"UNSAT\", None\nelif find_substring(\"Result: timeout\", output):\nreturn \"TIMEOUT\", None\nreturn \"TIMEOUT\", None\ndef _get_run_cmd(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Path,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> tuple[str, Path | None]:\nresult_file = Path(tmp_file(\".txt\").name)\nsource_cmd = get_conda_source_cmd(get_conda_path())\nrun_cmd = f\"\"\"\n{\" \".join(source_cmd)}\n        conda activate {self.conda_env_name}\n        python abcrown.py --config {str(config)} \\\n        --results_file {str(result_file)} \\\n        --timeout {str(timeout)}\n        \"\"\"\nreturn run_cmd, result_file\ndef _verify_batch(\nself,\ninstances: Iterable[Any],\n*,\nconfig: Configuration | Path | None,\n) -> list[CompleteVerificationResult]:\n# source_cmd = get_conda_source_cmd()\n# TODO:\nraise NotImplementedError(\"Batch verification not supported yet\")\ndef _init_config(\nself,\nnetwork: Path,\nproperty: Path,\nconfig: Configuration | Path,\n) -> Path:\nif isinstance(config, Configuration):\nyaml_config = AbcrownYamlConfig.from_config(\nconfig,\nnetwork,\nproperty,\nbatch_size=self._batch_size,\nyaml_override=self._yaml_override,\n)\nelif isinstance(config, Path):\nyaml_config = AbcrownYamlConfig.from_yaml(\nconfig,\nnetwork,\nproperty,\nbatch_size=self._batch_size,\nyaml_override=self._yaml_override,\n)\nelse:\nraise ValueError(\"Config should be a Configuration or Path\")\nreturn Path(yaml_config.get_yaml_file_path())\n
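A hypothetical instantiation sketch; the import location and the allocation semantics (first CPU core, last CPU core, GPU index) are assumptions. Note that AB-Crown raises an error if the GPU index of cpu_gpu_allocation is negative:

from autoverify.verifier import AbCrown  # assumed import location

# assumed semantics: CPU cores 0 through 3, GPU index 0
abcrown = AbCrown(batch_size=256, cpu_gpu_allocation=(0, 3, 0))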
"},{"location":"api/#autoverify.verifier.complete.abcrown.AbCrown.__init__","title":"__init__(batch_size=512, cpu_gpu_allocation=None, yaml_override=None)","text":"

New instance.

Source code in autoverify/verifier/complete/abcrown/verifier.py
def __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\nyaml_override: dict[str, Any] | None = None,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"AB-Crown CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._yaml_override = yaml_override\n
"},{"location":"api/#nnenum","title":"nnenum","text":""},{"location":"api/#autoverify.verifier.complete.nnenum.Nnenum","title":"Nnenum","text":"

Bases: CompleteVerifier

Nnenum.

Source code in autoverify/verifier/complete/nnenum/verifier.py
class Nnenum(CompleteVerifier):\n\"\"\"Nnenum.\"\"\"\nname: str = \"nnenum\"\nconfig_space: ConfigurationSpace = NnenumConfigspace\n# HACK: Should not need to instantiate a whole new instance just to\n# change `_use_auto_settings`.\ndef __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\nuse_auto_settings: bool = True,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] >= 0:\nraise ValueError(\"Nnenum does not use a GPU, please set it to -1.\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._use_auto_settings = use_auto_settings\n@property\ndef contexts(self) -> list[ContextManager[None]]:\nreturn [\ncwd(self.tool_path / \"src\"),\nenvironment(OPENBLAS_NUM_THREADS=\"1\", OMP_NUM_THREADS=\"1\"),\npkill_matches([\"python -m nnenum.nnenum\"]),\n]\ndef _parse_result(\nself, _: str, result_file: Path | None\n) -> tuple[VerificationResultString, str | None]:\nwith open(str(result_file), \"r\") as f:\nresult_txt = f.read()\nfirst_line = result_txt.split(\"\\n\", maxsplit=1)[0]\nif first_line == \"unsat\":\nreturn \"UNSAT\", None\nelif first_line == \"sat\":\ncounter_example = self._parse_counter_example(result_txt)\nreturn \"SAT\", counter_example\nelif first_line == \"timeout\":\nreturn \"TIMEOUT\", None\nreturn \"TIMEOUT\", None\ndef _parse_counter_example(self, result_txt: str) -> str:\nreturn result_txt.split(\"\\n\", maxsplit=1)[1]\ndef _get_run_cmd(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: dict[str, Any],\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> tuple[str, Path | None]:\nresult_file = Path(tmp_file(\".txt\").name)\nsource_cmd = get_conda_source_cmd(get_conda_path())\n# In nnenum, settings are normally passed as a one word string\n# over the CLI. 
This word then selects from some pre-defined settings.\n# We want some more control however, so we also make an option to pass\n# a stringified dict of exact settings.\n# The \"none\" value for settings_str is used as a flag that makes\n# nnenum use the dict of exact settings instead.\nsettings_str = \"none\"\nif self._use_auto_settings:\nsettings_str = \"auto\"  # \"auto\" is the default\nconfig = {}\nrun_cmd = f\"\"\"\n{\" \".join(source_cmd)}\n        conda activate {self.conda_env_name}\n        python -m nnenum.nnenum {str(network)} {str(property)} {str(timeout)} \\\n{str(result_file)} \\\n{str(cpu_count())} \\\n{settings_str} \\\n{shlex.quote(str(config))} \\\n        \"\"\"\nreturn run_cmd, result_file\ndef _verify_batch(\nself,\ninstances: Iterable[Any],\n*,\nconfig: Configuration | Path | None,\n) -> list[CompleteVerificationResult]:\n# source_cmd = get_conda_source_cmd()\n# TODO:\nraise NotImplementedError(\"Batch verification not supported yet\")\ndef _init_config(\nself,\nnetwork: Path,\nproperty: Path,\nconfig: Configuration | Path,\n) -> dict[str, Any]:\nif isinstance(config, Path):\nraise ValueError(\"Configuration files for nnenum not supported yet\")\nimport copy\ndict_config = copy.deepcopy(dict(config))\n# HACK: Cant safely evaluate `np.inf`, instead we pass it as a string\n# that is converted back to `np.inf` in the nnenum code.\nif dict_config[\"INF_OVERAPPROX_MIN_GEN_LIMIT\"] is True:\ndict_config[\"OVERAPPROX_MIN_GEN_LIMIT\"] = \"_inf\"\nif dict_config[\"INF_OVERAPPROX_LP_TIMEOUT\"] is True:\ndict_config[\"OVERAPPROX_LP_TIMEOUT\"] = \"_inf\"\ndel dict_config[\"INF_OVERAPPROX_LP_TIMEOUT\"]\ndel dict_config[\"INF_OVERAPPROX_MIN_GEN_LIMIT\"]\nreturn dict_config\n@staticmethod\ndef is_same_config(\nconfig1: Configuration | str, config2: Configuration | str\n) -> bool:\n\"\"\"Return if two configs are equivalent.\"\"\"\nif isinstance(config1, Configuration):\nconfig1 = str(config1[\"settings_mode\"])  # type: ignore\nif isinstance(config2, Configuration):\nconfig2 = str(config2[\"settings_mode\"])  # type: ignore\nreturn config1 == config2\n
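A hypothetical instantiation sketch (import location assumed); nnenum does not use a GPU, so the GPU index of cpu_gpu_allocation must be -1:

from autoverify.verifier import Nnenum  # assumed import location

# assumed semantics: CPU cores 0 through 7, no GPU
nnenum = Nnenum(use_auto_settings=True, cpu_gpu_allocation=(0, 7, -1))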
"},{"location":"api/#autoverify.verifier.complete.nnenum.Nnenum.__init__","title":"__init__(batch_size=512, cpu_gpu_allocation=None, use_auto_settings=True)","text":"

New instance.

Source code in autoverify/verifier/complete/nnenum/verifier.py
def __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\nuse_auto_settings: bool = True,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] >= 0:\nraise ValueError(\"Nnenum does not use a GPU, please set it to -1.\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._use_auto_settings = use_auto_settings\n
"},{"location":"api/#autoverify.verifier.complete.nnenum.Nnenum.is_same_config","title":"is_same_config(config1, config2) staticmethod","text":"

Return if two configs are equivalent.

Source code in autoverify/verifier/complete/nnenum/verifier.py
@staticmethod\ndef is_same_config(\nconfig1: Configuration | str, config2: Configuration | str\n) -> bool:\n\"\"\"Return if two configs are equivalent.\"\"\"\nif isinstance(config1, Configuration):\nconfig1 = str(config1[\"settings_mode\"])  # type: ignore\nif isinstance(config2, Configuration):\nconfig2 = str(config2[\"settings_mode\"])  # type: ignore\nreturn config1 == config2\n
"},{"location":"api/#oval-bab","title":"Oval-BaB","text":""},{"location":"api/#autoverify.verifier.complete.ovalbab.OvalBab","title":"OvalBab","text":"

Bases: CompleteVerifier

Oval-BaB.

Source code in autoverify/verifier/complete/ovalbab/verifier.py
class OvalBab(CompleteVerifier):\n\"\"\"Oval-BaB.\"\"\"\nname: str = \"ovalbab\"\nconfig_space: ConfigurationSpace = OvalBabConfigspace\ndef __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"Oval-BaB CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\n@property\ndef contexts(self) -> list[ContextManager[None]]:\nreturn [\ncwd(self.tool_path),\nenvironment(\nLD_LIBRARY_PATH=str(\nfind_conda_lib(self.conda_env_name, \"libcudart.so.11.0\")\n)\n),\npkill_matches([\"python tools/bab_tools/bab_from_vnnlib.py\"]),\n]\ndef _parse_result(\nself,\n_: str,\nresult_file: Path | None,\n) -> tuple[VerificationResultString, str | None]:\nwith open(str(result_file), \"r\") as f:\nresult_text = f.read()\nif find_substring(\"violated\", result_text):\n# TODO: Counterexample (not sure if its saved at all by ovalbab?)\nreturn \"SAT\", None\nelif find_substring(\"holds\", result_text):\nreturn \"UNSAT\", None\nreturn \"TIMEOUT\", None\ndef _get_run_cmd(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Path,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> tuple[str, Path | None]:\nresult_file = Path(tmp_file(\".txt\").name)\nsource_cmd = get_conda_source_cmd(get_conda_path())\nrun_cmd = f\"\"\"\n{\" \".join(source_cmd)}\n        conda activate {self.conda_env_name}\n        python tools/bab_tools/bab_from_vnnlib.py --mode run_instance \\\n        --onnx {str(network)} \\\n        --vnnlib {str(property)} \\\n        --result_file {str(result_file)} \\\n        --json {str(config)} \\\n        --instance_timeout {timeout}\n        \"\"\"\nreturn run_cmd, result_file\ndef _init_config(\nself,\nnetwork: Path,\nproperty: Path,\nconfig: Configuration | Path,\n) -> Path:\nif isinstance(config, Configuration):\njson_config = OvalbabJsonConfig.from_config(config)\nelif isinstance(config, Path):\njson_config = OvalbabJsonConfig.from_json(config)\nelse:\nraise ValueError(\"Config should be a Configuration, Path or None\")\nreturn Path(json_config.get_json_file_path())\ndef _verify_batch(\nself,\ninstances: Iterable[Any],\n*,\nconfig: Configuration | Path | None,\n) -> list[CompleteVerificationResult]:\n# source_cmd = get_conda_source_cmd()\n# TODO:\nraise NotImplementedError(\"Batch verification not supported yet\")\n
"},{"location":"api/#autoverify.verifier.complete.ovalbab.OvalBab.__init__","title":"__init__(batch_size=512, cpu_gpu_allocation=None)","text":"

New instance.

Source code in autoverify/verifier/complete/ovalbab/verifier.py
def __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"Oval-BaB CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\n
"},{"location":"api/#verinet","title":"VeriNet","text":""},{"location":"api/#autoverify.verifier.complete.verinet.Verinet","title":"Verinet","text":"

Bases: CompleteVerifier

VeriNet.

Source code in autoverify/verifier/complete/verinet/verifier.py
class Verinet(CompleteVerifier):\n\"\"\"VeriNet.\"\"\"\nname: str = \"verinet\"\nconfig_space: ConfigurationSpace = VerinetConfigspace\n# HACK: Quick hack to add some attributes to a VeriNet instance.\n# Ideally, these could be passed when calling `verify_property/instance`\n# or inside the Configuration.\ndef __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\ngpu_mode: bool = True,\ninput_shape: list[int] | None = None,\ndnnv_simplify: bool = False,\ntranspose_matmul_weights: bool = False,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"VeriNet CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._gpu_mode = gpu_mode\nself._input_shape = input_shape\nself._dnnv_simplify = dnnv_simplify\nself._transpose_matmul_weights = transpose_matmul_weights\n@property\ndef contexts(self) -> list[ContextManager[None]]:\nreturn [\ncwd(self.tool_path),\nenvironment(\nOPENBLAS_NUM_THREADS=\"1\",\nOMP_NUM_THREADS=\"1\",\nLD_LIBRARY_PATH=str(\nfind_conda_lib(self.conda_env_name, \"libcudart.so.11.0\")\n),\n),\npkill_matches(\n[\n\"multiprocessing.spawn\",\n\"multiprocessing.forkserver\",\n\"python cli.py\",  # TODO: Too broad\n]\n),\n]\ndef _parse_result(\nself,\noutput: str,\n_: Path | None,\n) -> tuple[VerificationResultString, str | None]:\nif find_substring(\"STATUS:  Status.Safe\", output):\nreturn \"UNSAT\", None\nelif find_substring(\"STATUS:  Status.Unsafe\", output):\nreturn \"SAT\", None\nelif find_substring(\"STATUS:  Status.Undecided\", output):\nreturn \"TIMEOUT\", None\nreturn \"TIMEOUT\", None\ndef _get_run_cmd(\nself,\nnetwork: Path,\nproperty: Path,\n*,\nconfig: Configuration,\ntimeout: int = DEFAULT_VERIFICATION_TIMEOUT_SEC,\n) -> tuple[str, Path | None]:\nsource_cmd = get_conda_source_cmd(get_conda_path())\ninput_shape = self._input_shape or get_input_shape(network)\n# params in order:\n# network, prop, timeout, config, input_shape, max_procs, gpu_mode,\n# dnnv_simplify\nrun_cmd = f\"\"\"\n{\" \".join(source_cmd)}\n        conda activate {self.conda_env_name}\n        python cli.py {str(network)} {str(property)} {timeout} \\\n{shlex.quote(str(dict(config)))} \\\n{shlex.quote(str(input_shape))} \\\n{-1} \\\n{self._gpu_mode} \\\n{self._dnnv_simplify} \\\n{self._transpose_matmul_weights} \\\n        \"\"\"\nreturn run_cmd, None\ndef _verify_batch(\nself,\ninstances: Iterable[Any],\n*,\nconfig: Configuration | Path | None,\n) -> list[CompleteVerificationResult]:\n# source_cmd = get_conda_source_cmd()\n# TODO:\nraise NotImplementedError(\"Batch verification not supported yet\")\n
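A hypothetical instantiation sketch; the import location and the MNIST-style input shape are assumptions:

from autoverify.verifier import Verinet  # assumed import location

verinet = Verinet(
    gpu_mode=True,
    input_shape=[1, 1, 28, 28],  # assumed; inferred from the network when omitted
    dnnv_simplify=False,
)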
"},{"location":"api/#autoverify.verifier.complete.verinet.Verinet.__init__","title":"__init__(batch_size=512, cpu_gpu_allocation=None, gpu_mode=True, input_shape=None, dnnv_simplify=False, transpose_matmul_weights=False)","text":"

New instance.

Source code in autoverify/verifier/complete/verinet/verifier.py
def __init__(\nself,\nbatch_size: int = 512,\ncpu_gpu_allocation: tuple[int, int, int] | None = None,\ngpu_mode: bool = True,\ninput_shape: list[int] | None = None,\ndnnv_simplify: bool = False,\ntranspose_matmul_weights: bool = False,\n):\n\"\"\"New instance.\"\"\"\nif cpu_gpu_allocation and cpu_gpu_allocation[2] < 0:\nraise ValueError(\"VeriNet CPU only mode not yet supported\")\nsuper().__init__(batch_size, cpu_gpu_allocation)\nself._gpu_mode = gpu_mode\nself._input_shape = input_shape\nself._dnnv_simplify = dnnv_simplify\nself._transpose_matmul_weights = transpose_matmul_weights\n
"},{"location":"api/#portfolio","title":"Portfolio","text":"

Parallel portfolio.

Class to run a parallel portfolio.

"},{"location":"api/#autoverify.portfolio.portfolio.ConfiguredVerifier","title":"ConfiguredVerifier dataclass","text":"

Class representing an internal configured verifier.

Attributes:

verifier (str): Name of the verifier.

configuration (Configuration): Its configuration.

resources (tuple[int, int] | None): Number of cores and GPUs.

Source code in autoverify/portfolio/portfolio.py
@dataclass(frozen=True, eq=True, repr=True)\nclass ConfiguredVerifier:\n\"\"\"Class representing an internal configured verifier.\n    Attributes:\n        verifier: Name of the verifier.\n        configuration: Its configuration.\n        resources: Number of cores and GPUs.\n    \"\"\"\nverifier: str\nconfiguration: Configuration\nresources: tuple[int, int] | None = None\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio","title":"Portfolio","text":"

Bases: MutableSet[ConfiguredVerifier]

Portfolio of ConfiguredVerifiers.

Source code in autoverify/portfolio/portfolio.py
class Portfolio(MutableSet[ConfiguredVerifier]):\n\"\"\"Portfolio of ConfiguredVerifiers.\"\"\"\ndef __init__(self, *cvs: ConfiguredVerifier):\n\"\"\"Initialize a new PF with the passed verifiers.\"\"\"\nself._pf_set: set[ConfiguredVerifier] = set(cvs)\nself._costs: dict[str, float] = {}\ndef __contains__(self, cv: object):\n\"\"\"Check if a CV is in the PF.\"\"\"\n# cant type annotate the func arg or mypy gets mad\nassert isinstance(cv, ConfiguredVerifier)\nreturn cv in self._pf_set\ndef __iter__(self):\n\"\"\"Iterate the contents of the PF.\"\"\"\nreturn iter(self._pf_set)\ndef __len__(self):\n\"\"\"Number of CVs in the PF.\"\"\"\nreturn len(self._pf_set)\ndef __str__(self):\n\"\"\"String representation of the PF.\"\"\"\nres = \"\"\nfor cv in self:\nres += str(cv) + \"\\n\"\nreturn res\ndef get_set(self):\n\"\"\"Get the underlying set.\"\"\"\nreturn self._pf_set\n@property\ndef configs(self) -> list[Configuration]:\n\"\"\"All the configurations in the PF.\"\"\"\nconfigs = []\nfor cv in self._pf_set:\nconfigs.append(cv.configuration)\nreturn configs\ndef get_cost(self, instance: str):\n\"\"\"Get the currently known cost of an instance.\"\"\"\nreturn self._costs[instance]\ndef get_costs(self, instances: Iterable[str]) -> dict[str, float]:\n\"\"\"Get costs of more than one instance.\"\"\"\ncosts: dict[str, float] = {}\nfor inst in instances:\nif inst in self._costs:\ncosts[inst] = self._costs[inst]\nreturn costs\ndef get_all_costs(self) -> dict[str, float]:\n\"\"\"All the recorded costs.\"\"\"\nreturn self._costs\ndef get_mean_cost(self) -> float:\n\"\"\"Get the mean cost.\"\"\"\nreturn float(np.mean(list(self._costs.values())))\ndef get_median_cost(self) -> float:\n\"\"\"Get the median cost.\"\"\"\nreturn float(np.median(list(self._costs.values())))\ndef get_total_cost(self) -> float:\n\"\"\"Get the total cost.\"\"\"\nreturn float(np.sum(list(self._costs.values())))\ndef update_costs(self, costs: Mapping[str, float]):\n\"\"\"Update the costs based on the new costs mapping.\"\"\"\nfor instance, cost in costs.items():\nif instance not in self._costs:\nself._costs[instance] = cost\ncontinue\nself._costs[instance] = min(self._costs[instance], cost)\ndef add(self, cv: ConfiguredVerifier):\n\"\"\"Add a CV to the PF, no duplicates allowed.\"\"\"\nif cv in self._pf_set:\nraise ValueError(f\"{cv} is already in the portfolio\")\nself._pf_set.add(cv)\ndef discard(self, cv: ConfiguredVerifier):\n\"\"\"Remove a CV from the PF.\"\"\"\nif cv not in self._pf_set:\nraise ValueError(f\"{cv} is not in the portfolio\")\nself._pf_set.discard(cv)\ndef reallocate_resources(\nself, strategy: ResourceStrategy = ResourceStrategy.Auto\n):\n\"\"\"Reallocate based on current contents and given strategy.\"\"\"\nif strategy != ResourceStrategy.Auto:\nraise NotImplementedError(\n\"Given `ResourceStrategy` is not supported yet.\"\n)\n# NOTE: Should put this alloc stuff in a function\nn_cores = cpu_count()\ncores_per = n_cores // len(self)\ncores_remainder = n_cores % len(self)\n# Iterate over a copy, since the set is mutated inside the loop\nfor cv in list(self):\nverifier = cv.verifier\nresources = cv.resources\nconfig = cv.configuration\nextra_core = 0\nif cores_remainder > 0:\nextra_core = 1\ncores_remainder -= 1\nnew_resources = (\n(cores_per + extra_core, resources[1]) if resources else None\n)\nself.discard(cv)\nself.add(ConfiguredVerifier(verifier, config, new_resources))\ndef to_json(self, out_json_path: Path):\n\"\"\"Write the portfolio in JSON format to the specified path.\"\"\"\njson_list: list[dict[str, Any]] = []\nfor cv in self._pf_set:\ncfg_dict = 
dict(cv.configuration)\nto_write = {\n\"verifier\": cv.verifier,\n\"configuration\": cfg_dict,\n\"resources\": cv.resources,\n}\njson_list.append(to_write)\nwith open(out_json_path, \"w\") as f:\njson.dump(json_list, f, indent=4, default=str)\n@classmethod\ndef from_json(\ncls,\njson_file: Path,\nconfig_space_map: Mapping[str, ConfigurationSpace] | None = None,\n) -> Portfolio:\n\"\"\"Instantiate a new Portfolio class from a JSON file.\"\"\"\nwith open(json_file.expanduser().resolve()) as f:\npf_json = json.load(f)\npf = Portfolio()\nfor cv in pf_json:\nif config_space_map is None:\ncfg_space = get_verifier_configspace(cv[\"verifier\"])\nelse:\ncfg_space = config_space_map[cv[\"verifier\"]]\ncv[\"configuration\"] = Configuration(cfg_space, cv[\"configuration\"])\nif cv[\"resources\"]:\ncv[\"resources\"] = tuple(cv[\"resources\"])\npf.add(ConfiguredVerifier(**cv))\nreturn pf\ndef str_compact(self) -> str:\n\"\"\"Get the portfolio in string form in a compact way.\"\"\"\ncvs: list[str] = []\nfor cv in self:\ncvs.append(\n\"\\t\".join(\n[\nstr(cv.verifier),\nstr(hash(cv.configuration)),\nstr(cv.resources),\n]\n)\n)\nreturn \"\\n\".join(cvs)\ndef dump_costs(self):\n\"\"\"Print the costs for each instance.\"\"\"\nfor instance, cost in self._costs.items():\nprint(instance, cost)\n
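A sketch of building a portfolio and tracking costs; nnenum_config and abcrown_config are assumed to be Configuration objects (e.g. sampled as shown earlier), and each resources tuple is (cores, GPUs):

pf = Portfolio(
    ConfiguredVerifier("nnenum", nnenum_config, resources=(8, 0)),
    ConfiguredVerifier("abcrown", abcrown_config, resources=(4, 1)),
)
pf.update_costs({"instance_0": 12.5, "instance_1": 60.0})
pf.update_costs({"instance_0": 8.0})  # keeps the minimum cost per instance
print(pf.get_mean_cost())             # (8.0 + 60.0) / 2 = 34.0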
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.configs","title":"configs: list[Configuration] property","text":"

All the configurations in the PF.

"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.__contains__","title":"__contains__(cv)","text":"

Check if a CV is in the PF.

Source code in autoverify/portfolio/portfolio.py
def __contains__(self, cv: object):\n\"\"\"Check if a CV is in the PF.\"\"\"\n# cant type annotate the func arg or mypy gets mad\nassert isinstance(cv, ConfiguredVerifier)\nreturn cv in self._pf_set\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.__init__","title":"__init__(*cvs)","text":"

Initialize a new PF with the passed verifiers.

Source code in autoverify/portfolio/portfolio.py
def __init__(self, *cvs: ConfiguredVerifier):\n\"\"\"Initialize a new PF with the passed verifiers.\"\"\"\nself._pf_set: set[ConfiguredVerifier] = set(cvs)\nself._costs: dict[str, float] = {}\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.__iter__","title":"__iter__()","text":"

Iterate the contents of the PF.

Source code in autoverify/portfolio/portfolio.py
def __iter__(self):\n\"\"\"Iterate the contents of the PF.\"\"\"\nreturn iter(self._pf_set)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.__len__","title":"__len__()","text":"

Number of CVs in the PF.

Source code in autoverify/portfolio/portfolio.py
def __len__(self):\n\"\"\"Number of CVs in the PF.\"\"\"\nreturn len(self._pf_set)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.__str__","title":"__str__()","text":"

String representation of the PF.

Source code in autoverify/portfolio/portfolio.py
def __str__(self):\n\"\"\"String representation of the PF.\"\"\"\nres = \"\"\nfor cv in self:\nres += str(cv) + \"\\n\"\nreturn res\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.add","title":"add(cv)","text":"

Add a CV to the PF, no duplicates allowed.

Source code in autoverify/portfolio/portfolio.py
def add(self, cv: ConfiguredVerifier):\n\"\"\"Add a CV to the PF, no duplicates allowed.\"\"\"\nif cv in self._pf_set:\nraise ValueError(f\"{cv} is already in the portfolio\")\nself._pf_set.add(cv)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.discard","title":"discard(cv)","text":"

Remove a CV from the PF.

Source code in autoverify/portfolio/portfolio.py
def discard(self, cv: ConfiguredVerifier):\n\"\"\"Remove a CV from the PF.\"\"\"\nif cv not in self._pf_set:\nraise ValueError(f\"{cv} is not in the portfolio\")\nself._pf_set.discard(cv)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.dump_costs","title":"dump_costs()","text":"

Print the costs for each instance.

Source code in autoverify/portfolio/portfolio.py
def dump_costs(self):\n\"\"\"Print the costs for each instance.\"\"\"\nfor instance, cost in self._costs.items():\nprint(instance, cost)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.from_json","title":"from_json(json_file, config_space_map=None) classmethod","text":"

Instantiate a new Portfolio class from a JSON file.

Source code in autoverify/portfolio/portfolio.py
@classmethod\ndef from_json(\ncls,\njson_file: Path,\nconfig_space_map: Mapping[str, ConfigurationSpace] | None = None,\n) -> Portfolio:\n\"\"\"Instantiate a new Portfolio class from a JSON file.\"\"\"\nwith open(json_file.expanduser().resolve()) as f:\npf_json = json.load(f)\npf = Portfolio()\nfor cv in pf_json:\nif config_space_map is None:\ncfg_space = get_verifier_configspace(cv[\"verifier\"])\nelse:\ncfg_space = config_space_map[cv[\"verifier\"]]\ncv[\"configuration\"] = Configuration(cfg_space, cv[\"configuration\"])\nif cv[\"resources\"]:\ncv[\"resources\"] = tuple(cv[\"resources\"])\npf.add(ConfiguredVerifier(**cv))\nreturn pf\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_all_costs","title":"get_all_costs()","text":"

All the recorded costs.

Source code in autoverify/portfolio/portfolio.py
def get_all_costs(self) -> dict[str, float]:\n\"\"\"All the recorded costs.\"\"\"\nreturn self._costs\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_cost","title":"get_cost(instance)","text":"

Get the currently known cost of an instance.

Source code in autoverify/portfolio/portfolio.py
def get_cost(self, instance: str):\n\"\"\"Get the currently known cost of an instance.\"\"\"\nreturn self._costs[instance]\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_costs","title":"get_costs(instances)","text":"

Get costs of more than one instance.

Source code in autoverify/portfolio/portfolio.py
def get_costs(self, instances: Iterable[str]) -> dict[str, float]:\n\"\"\"Get costs of more than one instance.\"\"\"\ncosts: dict[str, float] = {}\nfor inst in instances:\nif inst in self._costs:\ncosts[inst] = self._costs[inst]\nreturn costs\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_mean_cost","title":"get_mean_cost()","text":"

Get the mean cost.

Source code in autoverify/portfolio/portfolio.py
def get_mean_cost(self) -> float:\n\"\"\"Get the mean cost.\"\"\"\nreturn float(np.mean(list(self._costs.values())))\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_median_cost","title":"get_median_cost()","text":"

Get the median cost.

Source code in autoverify/portfolio/portfolio.py
def get_median_cost(self) -> float:\n\"\"\"Get the median cost.\"\"\"\nreturn float(np.median(list(self._costs.values())))\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_set","title":"get_set()","text":"

Get the underlying set.

Source code in autoverify/portfolio/portfolio.py
def get_set(self):\n\"\"\"Get the underlying set.\"\"\"\nreturn self._pf_set\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.get_total_cost","title":"get_total_cost()","text":"

Get the total cost.

Source code in autoverify/portfolio/portfolio.py
def get_total_cost(self) -> float:\n\"\"\"Get the total cost.\"\"\"\nreturn float(np.sum(list(self._costs.values())))\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.reallocate_resources","title":"reallocate_resources(strategy=ResourceStrategy.Auto)","text":"

Reallocate based on current contents and given strategy.

Source code in autoverify/portfolio/portfolio.py
def reallocate_resources(\nself, strategy: ResourceStrategy = ResourceStrategy.Auto\n):\n\"\"\"Reallocate based on current contents and given strategy.\"\"\"\nif strategy != ResourceStrategy.Auto:\nraise NotImplementedError(\n\"Given `ResourceStrategy` is not supported yet.\"\n)\n# NOTE: Should put this alloc stuff in a function\nn_cores = cpu_count()\ncores_per = n_cores // len(self)\ncores_remainder = n_cores % len(self)\n# Iterate over a copy, since the set is mutated inside the loop\nfor cv in list(self):\nverifier = cv.verifier\nresources = cv.resources\nconfig = cv.configuration\nextra_core = 0\nif cores_remainder > 0:\nextra_core = 1\ncores_remainder -= 1\nnew_resources = (\n(cores_per + extra_core, resources[1]) if resources else None\n)\nself.discard(cv)\nself.add(ConfiguredVerifier(verifier, config, new_resources))\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.str_compact","title":"str_compact()","text":"

Get the portfolio in string form in a compact way.

Source code in autoverify/portfolio/portfolio.py
def str_compact(self) -> str:\n\"\"\"Get the portfolio in string form in a compact way.\"\"\"\ncvs: list[str] = []\nfor cv in self:\ncvs.append(\n\"\\t\".join(\n[\nstr(cv.verifier),\nstr(hash(cv.configuration)),\nstr(cv.resources),\n]\n)\n)\nreturn \"\\n\".join(cvs)\n
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.to_json","title":"to_json(out_json_path)","text":"

Write the portfolio in JSON format to the specified path.

Source code in autoverify/portfolio/portfolio.py
def to_json(self, out_json_path: Path):\n\"\"\"Write the portfolio in JSON format to the specified path.\"\"\"\njson_list: list[dict[str, Any]] = []\nfor cv in self._pf_set:\ncfg_dict = dict(cv.configuration)\nto_write = {\n\"verifier\": cv.verifier,\n\"configuration\": cfg_dict,\n\"resources\": cv.resources,\n}\njson_list.append(to_write)\nwith open(out_json_path, \"w\") as f:\njson.dump(json_list, f, indent=4, default=str)\n
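A round-trip sketch, continuing the hypothetical pf from above; from_json rebuilds each Configuration from the verifier's configuration space:

pf.to_json(Path("portfolio.json"))
restored = Portfolio.from_json(Path("portfolio.json"))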
"},{"location":"api/#autoverify.portfolio.portfolio.Portfolio.update_costs","title":"update_costs(costs)","text":"

Update the costs based on the new costs mapping.

Source code in autoverify/portfolio/portfolio.py
def update_costs(self, costs: Mapping[str, float]):\n\"\"\"Update the costs based on the new costs mapping.\"\"\"\nfor instance, cost in costs.items():\nif instance not in self._costs:\nself._costs[instance] = cost\ncontinue\nself._costs[instance] = min(self._costs[instance], cost)\n
"},{"location":"api/#autoverify.portfolio.portfolio.PortfolioScenario","title":"PortfolioScenario dataclass","text":"

Scenario for constructing a parallel portfolio.

Attributes:

verifiers (Sequence[str]): The names of the verifiers to consider.

resources (list[tuple[str, int, int]]): How many cores and GPUs the verifiers need.

instances (Sequence[VerificationInstance]): The instances on which the PF is constructed.

length (int): The max length of the PF.

seconds_per_iter (float): Number of seconds for each Hydra iteration.

configs_per_iter (int): Number of configs each iteration.

alpha (float): Tune/Pick time split.

added_per_iter (int): Entries added to the PF per iter.

stop_early (bool): Stop the procedure if some early stop conditions are met.

resource_strategy (ResourceStrategy): Strategy to divide the resources.

output_dir (Path | None): Dir where logs are stored.

vnn_compat_mode (bool): Use vnn compatibility for some verifiers.

benchmark (str | None): VNNCOMP benchmark if vnn_compat_mode is True.

verifier_kwargs (dict[str, dict[str, Any]] | None): Kwargs passed to verifiers.

uses_simplified_network (Iterable[str] | None): Verifiers that should use the dnnv simplified nets.

Source code in autoverify/portfolio/portfolio.py
@dataclass\nclass PortfolioScenario:\n\"\"\"Scenario for constructing a parallel portfolio.\n    Attributes:\n        verifiers: The names of the verifiers to consider.\n        resources: How many cores and GPUs the verifiers need.\n        instances: The instances on which the PF is constructed.\n        length: The max length of the PF.\n        seconds_per_iter: Number of seconds for each Hydra iteration.\n        configs_per_iter: Number of configs each iteration.\n        alpha: Tune/Pick time split.\n        added_per_iter: Entries added to the PF per iter.\n        stop_early: Stop the procedure if some early stop conditions are met.\n        resource_strategy: Strategy to divide the resources.\n        output_dir: Dir where logs are stored.\n        vnn_compat_mode: Use vnn compatibility for some verifiers.\n        benchmark: VNNCOMP benchmark if vnn_compat_mode is `True`.\n        verifier_kwargs: Kwargs passed to verifiers.\n        uses_simplified_network: Verifiers that should use the dnnv simplified nets.\n    \"\"\"\nverifiers: Sequence[str]\nresources: list[tuple[str, int, int]]\ninstances: Sequence[VerificationInstance]\nlength: int  # TODO: Rename to max_length?\nseconds_per_iter: float\n# Optional\nconfigs_per_iter: int = 1\nalpha: float = 0.5  # tune/pick split\nadded_per_iter: int = 1\nstop_early: bool = True\nresource_strategy: ResourceStrategy = ResourceStrategy.Auto\noutput_dir: Path | None = None\nvnn_compat_mode: bool = False\nbenchmark: str | None = None\nverifier_kwargs: dict[str, dict[str, Any]] | None = None\nuses_simplified_network: Iterable[str] | None = None\ndef __post_init__(self):\n\"\"\"Validate the PF scenario.\"\"\"\nif self.added_per_iter > 1 or self.configs_per_iter > 1:\nraise ValueError(\n\"Adding more than 1 config per iter not supported yet.\"\n)\nif not 0 <= self.alpha <= 1:\nraise ValueError(f\"Alpha should be in [0.0, 1.0], got {self.alpha}\")\nif self.output_dir is None:\ncurrent_time = datetime.datetime.now()\nformatted_time = current_time.strftime(\"%Y-%m-%d %H:%M:%S\")\nself.output_dir = Path(f\"hydra_out/{formatted_time}\")\nself.tune_budget = self.alpha\nself.pick_budget = 1 - self.alpha\nif self.added_per_iter > self.length:\nraise ValueError(\"Entries added per iter should be <= length\")\nif self.vnn_compat_mode:\nif not self.benchmark:\nraise ValueError(\"Use a benchmark name if vnn_compat_mode=True\")\nif self.vnn_compat_mode and self.verifier_kwargs:\nraise ValueError(\n\"Cannot use vnn_compat_mode and \"\n\"verifier_kwargs at the same time.\"\n)\nself.n_iters = math.ceil(self.length / self.added_per_iter)\nself._verify_resources()\ndef _verify_resources(self):\n# Check for duplicates\nseen = set()\nfor r in self.resources:\nif r[0] in seen:\nraise ValueError(f\"Duplicate name '{r[0]}' in resources\")\nif r[0] not in self.verifiers:\nraise ValueError(f\"{r[0]} in resources but not in verifiers.\")\nseen.add(r[0])\nfor v in self.verifiers:\nif v not in seen:\nraise ValueError(\nf\"Verifier '{v}' in verifiers but not in resources.\"\n)\nif self.resource_strategy == ResourceStrategy.Auto:\nfor r in self.resources:\nif r[1] != 0:\nraise ValueError(\n\"CPU resources should be 0 when using `Auto`\"\n)\nelse:\nraise NotImplementedError(\nf\"ResourceStrategy {self.resource_strategy} \"\nf\"is not implemented yet.\"\n)\ndef get_smac_scenario_kwargs(self) -> dict[str, Any]:\n\"\"\"Return the SMAC scenario kwargs as a dict.\n        Returns:\n            dict[str, Any]: The SMAC scenario as a dict.\n        \"\"\"\nassert self.output_dir is not None  # 
This is set in `__post_init__`\nself.output_dir.mkdir(parents=True, exist_ok=True)\nreturn {\n\"instances\": verification_instances_to_smac_instances(\nself.instances\n),\n\"instance_features\": index_features(self.instances),\n\"output_directory\": self.output_dir,\n}\ndef get_smac_instances(self) -> list[str]:\n\"\"\"Get the instances of the scenario as SMAC instances.\n        Returns:\n            list[str]: The SMAC instances.\n        \"\"\"\nreturn verification_instances_to_smac_instances(self.instances)\n
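A hypothetical scenario sketch; instances is assumed to be a prepared Sequence[VerificationInstance]. With ResourceStrategy.Auto the CPU entry of every resources tuple must be 0:

scenario = PortfolioScenario(
    verifiers=["nnenum", "abcrown"],
    resources=[("nnenum", 0, 0), ("abcrown", 0, 1)],  # (name, CPUs, GPUs)
    instances=instances,
    length=4,
    seconds_per_iter=3600.0,
)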
"},{"location":"api/#autoverify.portfolio.portfolio.PortfolioScenario.__post_init__","title":"__post_init__()","text":"

Validate the PF scenario.

Source code in autoverify/portfolio/portfolio.py
def __post_init__(self):\n\"\"\"Validate the PF scenario.\"\"\"\nif self.added_per_iter > 1 or self.configs_per_iter > 1:\nraise ValueError(\n\"Adding more than 1 config per iter not supported yet.\"\n)\nif not 0 <= self.alpha <= 1:\nraise ValueError(f\"Alpha should be in [0.0, 1.0], got {self.alpha}\")\nif self.output_dir is None:\ncurrent_time = datetime.datetime.now()\nformatted_time = current_time.strftime(\"%Y-%m-%d %H:%M:%S\")\nself.output_dir = Path(f\"hydra_out/{formatted_time}\")\nself.tune_budget = self.alpha\nself.pick_budget = 1 - self.alpha\nif self.added_per_iter > self.length:\nraise ValueError(\"Entries added per iter should be <= length\")\nif self.vnn_compat_mode:\nif not self.benchmark:\nraise ValueError(\"Use a benchmark name if vnn_compat_mode=True\")\nif self.vnn_compat_mode and self.verifier_kwargs:\nraise ValueError(\n\"Cannot use vnn_compat_mode and \"\n\"verifier_kwargs at the same time.\"\n)\nself.n_iters = math.ceil(self.length / self.added_per_iter)\nself._verify_resources()\n
"},{"location":"api/#autoverify.portfolio.portfolio.PortfolioScenario.get_smac_instances","title":"get_smac_instances()","text":"

Get the instances of the scenario as SMAC instances.

Returns:

list[str]: The SMAC instances.

Source code in autoverify/portfolio/portfolio.py
def get_smac_instances(self) -> list[str]:\n\"\"\"Get the instances of the scenario as SMAC instances.\n    Returns:\n        list[str]: The SMAC instances.\n    \"\"\"\nreturn verification_instances_to_smac_instances(self.instances)\n
"},{"location":"api/#autoverify.portfolio.portfolio.PortfolioScenario.get_smac_scenario_kwargs","title":"get_smac_scenario_kwargs()","text":"

Return the SMAC scenario kwargs as a dict.

Returns:

dict[str, Any]: The SMAC scenario as a dict.

Source code in autoverify/portfolio/portfolio.py
def get_smac_scenario_kwargs(self) -> dict[str, Any]:\n\"\"\"Return the SMAC scenario kwargs as a dict.\n    Returns:\n        dict[str, Any]: The SMAC scenario as a dict.\n    \"\"\"\nassert self.output_dir is not None  # This is set in `__post_init__`\nself.output_dir.mkdir(parents=True, exist_ok=True)\nreturn {\n\"instances\": verification_instances_to_smac_instances(\nself.instances\n),\n\"instance_features\": index_features(self.instances),\n\"output_directory\": self.output_dir,\n}\n
"},{"location":"api/#autoverify.portfolio.portfolio_runner.PortfolioRunner","title":"PortfolioRunner","text":"

Class to run a portfolio in parallel.

Source code in autoverify/portfolio/portfolio_runner.py
class PortfolioRunner:\n\"\"\"Class to run a portfolio in parallel.\"\"\"\ndef __init__(\nself,\nportfolio: Portfolio,\nvbs_mode: bool = False,\n*,\nn_cpu: int | None = None,\nn_gpu: int | None = None,\n):\n\"\"\"Initialize a new portfolio runner.\n        Arguments:\n            portfolio: The portfolio that will be run.\n            vbs_mode: If the PF will be run in VBS mode.\n            n_cpu: Override number of CPUs\n            n_gpu: Override number of GPUs.\n        \"\"\"\nself._portfolio = portfolio\nself._vbs_mode = vbs_mode\nself._n_cpu = n_cpu\nself._n_gpu = n_gpu\nif not self._vbs_mode:\nself._init_resources()\nself._is_cleaning = False\ndef _wrap_cleanup(*_):\nif self._is_cleaning:\nreturn\nself._is_cleaning = True\nself._cleanup()\nself._is_cleaning = False\nsignal.signal(signal.SIGINT, _wrap_cleanup)\nsignal.signal(signal.SIGTERM, _wrap_cleanup)\ndef _init_resources(self):\nself._allocation: dict[ConfiguredVerifier, tuple[int, int, int]] = {}\nif self._n_cpu:\ncpu_left = self._n_cpu\nelse:\ncpu_left = cpu_count()\nif self._n_gpu:\ngpu_left = self._n_gpu\nelse:\ngpu_left = nvidia_gpu_count()\nfor cv in self._portfolio:\nif cv.resources is None:\nraise ValueError(\n\"No resources for\"\nf\"{cv.verifier} :: {cv.configuration} found.\"\n)\n# CPU (taskset) and GPU (CUDA_VISIBLE_DEVICES) index start at 0\nn_cpu, n_gpu = cv.resources[0], cv.resources[1]\nif n_gpu > gpu_left:\nraise RuntimeError(\"No more GPUs left\")\nif n_cpu > cpu_left:\nraise RuntimeError(\"No more CPUs left\")\nif n_cpu <= 0:\nraise RuntimeError(\"CPUs should be > 0\")\n# Currently only support 1 GPU per verifier\nif n_gpu > 0:\ncurr_gpu = gpu_left - 1\ngpu_left -= 1\nelse:\ncurr_gpu = -1\ncpu_high = cpu_left\ncpu_low = cpu_left - n_cpu\ncpu_left = cpu_low\nself._allocation[cv] = (cpu_low, cpu_high - 1, curr_gpu)\ndef evaluate_vbs(\nself,\ninstances: list[VerificationInstance],\n*,\nout_csv: Path | None = None,\nvnncompat: bool = False,\nbenchmark: str | None = None,\n) -> _VbsResult:\n\"\"\"Evaluate the PF in vbs mode.\n        Arguments:\n            instances: Instances to evaluate.\n            out_csv: File where the results are written to.\n            vnncompat: Use some compat kwargs.\n            benchmark: Only if vnncompat, benchmark name.\n        \"\"\"\nresults: _CostDict = {}\nif vnncompat and benchmark is None:\nraise ValueError(\"Need a benchmark if vnncompat=True\")\nelif not vnncompat and benchmark is not None:\nraise ValueError(\"Only use benchmark if vnncompat=True\")\nfor cv in self._portfolio:\nassert cv.resources is not None\nfor instance in instances:\nverifier = _get_verifier(instance, cv, vnncompat, benchmark)\nlogger.info(f\"{cv.verifier} on {str(instance)}\")\nresult = verifier.verify_instance(instance)\nself._add_result(cv, instance, result, results)\nif out_csv:\nself._csv_log_result(\nout_csv,\nresult,\ninstance,\ncv.verifier,\ncv.configuration,\n)\nreturn self._vbs_from_cost_dict(results)\n@staticmethod\ndef _csv_log_result(\nout_csv: Path,\nresult: CompleteVerificationResult,\ninstance: VerificationInstance,\nverifier: str,\nconfiguration: Configuration,\n):\nif isinstance(result, Ok):\nres_d = result.unwrap()\nsuccess = \"OK\"\nelif isinstance(result, Err):\nres_d = result.unwrap_err()\nsuccess = \"ERR\"\ninst_d = {\n\"network\": instance.network,\n\"property\": instance.property,\n\"timeout\": instance.timeout,\n\"verifier\": verifier,\n\"config\": configuration,\n\"success\": success,\n}\nif not out_csv.exists():\ninit_verification_result_csv(out_csv)\nvdr = 
VerificationDataResult.from_verification_result(res_d, inst_d)\ncsv_append_verification_result(vdr, out_csv)\n@staticmethod\ndef _vbs_from_cost_dict(cost_dict: _CostDict) -> _VbsResult:\nvbs: _VbsResult = {}\nfor cv, instance_costs in cost_dict.items():\nfor instance, cost in instance_costs.items():\nstr_inst = str(instance)\nif str_inst not in vbs:\nvbs[str_inst] = (cost, cv.verifier)\ncontinue\nif cost < vbs[str_inst][0]:\nvbs[str_inst] = (cost, cv.verifier)\nreturn vbs\n@staticmethod\ndef _add_result(\ncv: ConfiguredVerifier,\ninstance: VerificationInstance,\nresult: CompleteVerificationResult,\nresults: _CostDict,\n):\ncost: float\nif isinstance(result, Ok):\ncost = result.unwrap().took\nelif isinstance(result, Err):\ncost = float(\"inf\")\nlogger.info(f\"Cost: {cost}\")\nif cv not in results:\nresults[cv] = {}\nresults[cv][instance] = cost\n# TODO: Arg types\ndef _get_verifiers(\nself,\ninstance,\nvnncompat,\nbenchmark,\nverifier_kwargs,\n) -> dict[ConfiguredVerifier, CompleteVerifier]:\nverifiers: dict[ConfiguredVerifier, CompleteVerifier] = {}\nfor cv in self._portfolio:\nv = _get_verifier(\ninstance,\ncv,\nvnncompat,\nbenchmark,\nverifier_kwargs,\nself._allocation,\n)\nassert isinstance(v, CompleteVerifier)\nverifiers[cv] = v\nreturn verifiers\ndef verify_instances(\nself,\ninstances: Iterable[VerificationInstance],\n*,\nout_csv: Path | None = None,\nvnncompat: bool = False,\nbenchmark: str | None = None,\nverifier_kwargs: dict[str, dict[str, Any]] | None = None,\nuses_simplified_network: Iterable[str] | None = None,\n) -> dict[VerificationInstance, VerificationDataResult]:\n\"\"\"Run the PF in parallel.\n        Arguments:\n            instances: Instances to evaluate.\n            out_csv: File where the results are written to.\n            vnncompat: Use some compat kwargs.\n            benchmark: Only if vnncompat, benchmark name.\n            verifier_kwargs: Kwargs passed to verifiers.\n            uses_simplified_network: Have some verifiers use simplified nets.\n        \"\"\"\nif self._vbs_mode:\nraise RuntimeError(\"Function not compatible with vbs_mode\")\nif out_csv:\nout_csv = out_csv.expanduser().resolve()\nresults: dict[VerificationInstance, VerificationDataResult] = {}\nwith concurrent.futures.ThreadPoolExecutor() as executor:\nfor instance in instances:\nlogger.info(f\"Running portfolio on {str(instance)}\")\nfutures: dict[\nFuture[CompleteVerificationResult], ConfiguredVerifier\n] = {}\nself._verifiers = self._get_verifiers(\ninstance,\nvnncompat,\nbenchmark,\nverifier_kwargs,\n)\nis_solved = False\nfor cv in self._portfolio:\nif (\nuses_simplified_network\nand cv.verifier in uses_simplified_network\n):\ntarget_instance = instance.as_simplified_network()\nelse:\ntarget_instance = instance\nfuture = executor.submit(\nself._verifiers[cv].verify_instance, target_instance\n)\nfutures[future] = cv\nfor future in concurrent.futures.as_completed(futures):\nfut_cv = futures[future]\nresult = future.result()\nif not is_solved:\ngot_solved = self._process_result(\nresult, results, fut_cv, instance, self._verifiers\n)\nif got_solved and not is_solved:\nis_solved = True\nif out_csv:\nself._csv_log_result(\nout_csv,\nresult,\ninstance,\nfut_cv.verifier,\nfut_cv.configuration,\n)\nreturn results\ndef _process_result(\nself,\nresult: CompleteVerificationResult,\nresults: dict[VerificationInstance, VerificationDataResult],\ncv: ConfiguredVerifier,\ninstance: VerificationInstance,\nverifiers: dict[ConfiguredVerifier, CompleteVerifier],\n) -> bool:\ninstance_data: dict[str, Any] = 
{\n\"network\": instance.network,\n\"property\": instance.property,\n\"timeout\": instance.timeout,\n\"verifier\": cv.verifier,\n\"config\": cv.configuration,\n}\nif isinstance(result, Ok):\nr = result.unwrap()\nif r.result == \"TIMEOUT\":\nlog_string = f\"{cv.verifier} timed out\"\nelse:\nself._cancel_running(cv, verifiers)\nlog_string = (\nf\"Verified by {cv.verifier} in {r.took:.2f} sec, \"\nf\"result = {r.result}\"\n)\ninstance_data[\"success\"] = \"OK\"\nresults[instance] = VerificationDataResult.from_verification_result(\nr, instance_data\n)\n# Signal that the instance was solved\nif r.result in [\"SAT\", \"UNSAT\"]:\nlogger.info(log_string)\nreturn True\nelif isinstance(result, Err):\nlog_string = f\"{cv.verifier} errored.\"\nr = result.unwrap_err()\ninstance_data[\"success\"] = \"ERR\"\nresults[instance] = VerificationDataResult.from_verification_result(\nr, instance_data\n)\nlogger.info(log_string)\n# Instance was not solved\nreturn False\ndef _cancel_running(\nself,\nfirst_cv: ConfiguredVerifier,\nverifiers: dict[ConfiguredVerifier, CompleteVerifier],\n):\nothers = set_iter_except(self._portfolio.get_set(), first_cv)\nfor other_cv in others:\nverifiers[other_cv].set_timeout_event()\ndef _cleanup(self):\n\"\"\"Kill all running verifiers processes.\"\"\"\nif not self._verifiers:\nsys.exit(0)\nfor verifier_inst in self._verifiers.values():\ntry:\nverifier_inst.set_timeout_event()\nfinally:\npass\nsys.exit(0)\n
"},{"location":"api/#autoverify.portfolio.portfolio_runner.PortfolioRunner.__init__","title":"__init__(portfolio, vbs_mode=False, *, n_cpu=None, n_gpu=None)","text":"

Initialize a new portfolio runner.

Parameters:

portfolio (Portfolio): The portfolio that will be run. (required)

vbs_mode (bool): If the PF will be run in VBS mode. Default: False

n_cpu (int | None): Override the number of CPUs. Default: None

n_gpu (int | None): Override the number of GPUs. Default: None

Source code in autoverify/portfolio/portfolio_runner.py
def __init__(\nself,\nportfolio: Portfolio,\nvbs_mode: bool = False,\n*,\nn_cpu: int | None = None,\nn_gpu: int | None = None,\n):\n\"\"\"Initialize a new portfolio runner.\n    Arguments:\n        portfolio: The portfolio that will be run.\n        vbs_mode: If the PF will be run in VBS mode.\n        n_cpu: Override number of CPUs\n        n_gpu: Override number of GPUs.\n    \"\"\"\nself._portfolio = portfolio\nself._vbs_mode = vbs_mode\nself._n_cpu = n_cpu\nself._n_gpu = n_gpu\nif not self._vbs_mode:\nself._init_resources()\nself._is_cleaning = False\ndef _wrap_cleanup(*_):\nif self._is_cleaning:\nreturn\nself._is_cleaning = True\nself._cleanup()\nself._is_cleaning = False\nsignal.signal(signal.SIGINT, _wrap_cleanup)\nsignal.signal(signal.SIGTERM, _wrap_cleanup)\n
"},{"location":"api/#autoverify.portfolio.portfolio_runner.PortfolioRunner.evaluate_vbs","title":"evaluate_vbs(instances, *, out_csv=None, vnncompat=False, benchmark=None)","text":"

Evaluate the PF in vbs mode.

Parameters:

instances (list[VerificationInstance]): Instances to evaluate. (required)

out_csv (Path | None): File where the results are written to. Default: None

vnncompat (bool): Use some compat kwargs. Default: False

benchmark (str | None): Only if vnncompat, benchmark name. Default: None

Source code in autoverify/portfolio/portfolio_runner.py
def evaluate_vbs(\nself,\ninstances: list[VerificationInstance],\n*,\nout_csv: Path | None = None,\nvnncompat: bool = False,\nbenchmark: str | None = None,\n) -> _VbsResult:\n\"\"\"Evaluate the PF in vbs mode.\n    Arguments:\n        instances: Instances to evaluate.\n        out_csv: File where the results are written to.\n        vnncompat: Use some compat kwargs.\n        benchmark: Only if vnncompat, benchmark name.\n    \"\"\"\nresults: _CostDict = {}\nif vnncompat and benchmark is None:\nraise ValueError(\"Need a benchmark if vnncompat=True\")\nelif not vnncompat and benchmark is not None:\nraise ValueError(\"Only use benchmark if vnncompat=True\")\nfor cv in self._portfolio:\nassert cv.resources is not None\nfor instance in instances:\nverifier = _get_verifier(instance, cv, vnncompat, benchmark)\nlogger.info(f\"{cv.verifier} on {str(instance)}\")\nresult = verifier.verify_instance(instance)\nself._add_result(cv, instance, result, results)\nif out_csv:\nself._csv_log_result(\nout_csv,\nresult,\ninstance,\ncv.verifier,\ncv.configuration,\n)\nreturn self._vbs_from_cost_dict(results)\n
"},{"location":"api/#autoverify.portfolio.portfolio_runner.PortfolioRunner.verify_instances","title":"verify_instances(instances, *, out_csv=None, vnncompat=False, benchmark=None, verifier_kwargs=None, uses_simplified_network=None)","text":"

Run the PF in parallel.

Parameters:

instances (Iterable[VerificationInstance]): Instances to evaluate. (required)

out_csv (Path | None): File where the results are written to. Default: None

vnncompat (bool): Use some compat kwargs. Default: False

benchmark (str | None): Only if vnncompat, benchmark name. Default: None

verifier_kwargs (dict[str, dict[str, Any]] | None): Kwargs passed to verifiers. Default: None

uses_simplified_network (Iterable[str] | None): Have some verifiers use simplified nets. Default: None

Source code in autoverify/portfolio/portfolio_runner.py
def verify_instances(\nself,\ninstances: Iterable[VerificationInstance],\n*,\nout_csv: Path | None = None,\nvnncompat: bool = False,\nbenchmark: str | None = None,\nverifier_kwargs: dict[str, dict[str, Any]] | None = None,\nuses_simplified_network: Iterable[str] | None = None,\n) -> dict[VerificationInstance, VerificationDataResult]:\n\"\"\"Run the PF in parallel.\n    Arguments:\n        instances: Instances to evaluate.\n        out_csv: File where the results are written to.\n        vnncompat: Use some compat kwargs.\n        benchmark: Only if vnncompat, benchmark name.\n        verifier_kwargs: Kwargs passed to verifiers.\n        uses_simplified_network: Have some verifiers use simplified nets.\n    \"\"\"\nif self._vbs_mode:\nraise RuntimeError(\"Function not compatible with vbs_mode\")\nif out_csv:\nout_csv = out_csv.expanduser().resolve()\nresults: dict[VerificationInstance, VerificationDataResult] = {}\nwith concurrent.futures.ThreadPoolExecutor() as executor:\nfor instance in instances:\nlogger.info(f\"Running portfolio on {str(instance)}\")\nfutures: dict[\nFuture[CompleteVerificationResult], ConfiguredVerifier\n] = {}\nself._verifiers = self._get_verifiers(\ninstance,\nvnncompat,\nbenchmark,\nverifier_kwargs,\n)\nis_solved = False\nfor cv in self._portfolio:\nif (\nuses_simplified_network\nand cv.verifier in uses_simplified_network\n):\ntarget_instance = instance.as_simplified_network()\nelse:\ntarget_instance = instance\nfuture = executor.submit(\nself._verifiers[cv].verify_instance, target_instance\n)\nfutures[future] = cv\nfor future in concurrent.futures.as_completed(futures):\nfut_cv = futures[future]\nresult = future.result()\nif not is_solved:\ngot_solved = self._process_result(\nresult, results, fut_cv, instance, self._verifiers\n)\nif got_solved and not is_solved:\nis_solved = True\nif out_csv:\nself._csv_log_result(\nout_csv,\nresult,\ninstance,\nfut_cv.verifier,\nfut_cv.configuration,\n)\nreturn results\n
"},{"location":"api/#util","title":"Util","text":"

Utilities for verification instances and results.

VerificationInstance.

Verifier VNNCOMP compatibility.

Return verifier instances that should be compatible with the given benchmark and instance.

"},{"location":"api/#autoverify.util.instances.VerificationDataResult","title":"VerificationDataResult dataclass","text":"

Dataclass holding the data of a single verification run.

Source code in autoverify/util/instances.py
@dataclass
class VerificationDataResult:
    """Dataclass holding the data of a single verification run."""

    network: str
    property: str
    timeout: int | None
    verifier: str
    config: str
    success: Literal["OK", "ERR"]
    result: VerificationResultString
    took: float
    counter_example: str | tuple[str, str] | None
    stderr: str | None
    stdout: str | None

    def __post_init__(self):
        """Replace a "None" config string with "default"."""
        if self.config == "None":
            self.config = "default"

    def as_csv_row(self) -> list[str]:
        """Convert the data to a writeable CSV row."""
        if isinstance(self.counter_example, tuple):
            self.counter_example = "\n".join(self.counter_example)

        return [
            self.network,
            self.property,
            str(self.timeout),
            self.verifier,
            self.config,
            self.success,
            self.result,
            str(self.took),
            self.counter_example or "",
            self.stderr or "",
            self.stdout or "",
        ]

    @classmethod
    def from_verification_result(
        cls,
        verif_res: CompleteVerificationData,
        instance_data: dict[str, Any],  # TODO: Narrow Any
    ):
        """Create from a `CompleteVerificationData`."""
        # TODO: Unpack dict
        return cls(
            instance_data["network"],
            instance_data["property"],
            instance_data["timeout"],
            instance_data["verifier"],
            instance_data["config"],
            instance_data["success"],
            verif_res.result,
            verif_res.took,
            verif_res.counter_example,
            verif_res.err,
            verif_res.stdout,
        )
"},{"location":"api/#autoverify.util.instances.VerificationDataResult.__post_init__","title":"__post_init__()","text":"

Replace a "None" config string with "default".

Source code in autoverify/util/instances.py
def __post_init__(self):
    """Replace a "None" config string with "default"."""
    if self.config == "None":
        self.config = "default"
"},{"location":"api/#autoverify.util.instances.VerificationDataResult.as_csv_row","title":"as_csv_row()","text":"

Convert the data to a writeable CSV row.

Source code in autoverify/util/instances.py
def as_csv_row(self) -> list[str]:
    """Convert the data to a writeable CSV row."""
    if isinstance(self.counter_example, tuple):
        self.counter_example = "\n".join(self.counter_example)

    return [
        self.network,
        self.property,
        str(self.timeout),
        self.verifier,
        self.config,
        self.success,
        self.result,
        str(self.took),
        self.counter_example or "",
        self.stderr or "",
        self.stdout or "",
    ]
"},{"location":"api/#autoverify.util.instances.VerificationDataResult.from_verification_result","title":"from_verification_result(verif_res, instance_data) classmethod","text":"

Create from a CompleteVerificationData.

Source code in autoverify/util/instances.py
@classmethod
def from_verification_result(
    cls,
    verif_res: CompleteVerificationData,
    instance_data: dict[str, Any],  # TODO: Narrow Any
):
    """Create from a `CompleteVerificationData`."""
    # TODO: Unpack dict
    return cls(
        instance_data["network"],
        instance_data["property"],
        instance_data["timeout"],
        instance_data["verifier"],
        instance_data["config"],
        instance_data["success"],
        verif_res.result,
        verif_res.took,
        verif_res.counter_example,
        verif_res.err,
        verif_res.stdout,
    )
"},{"location":"api/#autoverify.util.instances.csv_append_verification_result","title":"csv_append_verification_result(verification_result, csv_path)","text":"

Append a verification result as a row to a CSV file.

Source code in autoverify/util/instances.py
def csv_append_verification_result(
    verification_result: VerificationDataResult, csv_path: Path
):
    """Append a verification result as a row to a CSV file."""
    with open(str(csv_path.expanduser()), "a") as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(verification_result.as_csv_row())
"},{"location":"api/#autoverify.util.instances.init_verification_result_csv","title":"init_verification_result_csv(csv_path)","text":"

Initialize a results CSV file with a header row.

Source code in autoverify/util/instances.py
def init_verification_result_csv(csv_path: Path):
    """Initialize a results CSV file with a header row."""
    with open(str(csv_path.expanduser()), "w") as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(get_dataclass_field_names(VerificationDataResult))
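Together with csv_append_verification_result above, this supports a simple append-only results log. A minimal sketch, where the output path and the hand-built result exist only for illustration:

from pathlib import Path

from autoverify.util.instances import (
    VerificationDataResult,
    csv_append_verification_result,
    init_verification_result_csv,
)

csv_path = Path("results.csv")  # hypothetical output file
init_verification_result_csv(csv_path)  # writes the header row

# A hand-built result for illustration; normally this comes from a verifier run
result = VerificationDataResult(
    "net.onnx", "prop.vnnlib", 60, "nnenum", "default",
    "OK", "UNSAT", 12.3, None, None, None,
)
csv_append_verification_result(result, csv_path)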
"},{"location":"api/#autoverify.util.instances.read_all_vnncomp_instances","title":"read_all_vnncomp_instances(vnncomp_path)","text":"

Reads all benchmarks, see the read_vnncomp_instances docstring.

Source code in autoverify/util/instances.py
def read_all_vnncomp_instances(
    vnncomp_path: Path,
) -> dict[str, list[VerificationInstance]]:
    """Reads all benchmarks, see the `read_vnncomp_instances` docstring."""
    instances: dict[str, list[VerificationInstance]] = {}

    for path in vnncomp_path.iterdir():
        if not path.is_dir():
            continue

        instances[path.name] = read_vnncomp_instances(path.name, vnncomp_path)

    return instances
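For example, assuming the VNNCOMP benchmarks live at a local path (hypothetical below), the instances of every benchmark can be collected in one call:

from pathlib import Path

from autoverify.util.instances import read_all_vnncomp_instances

# Hypothetical location of the VNNCOMP benchmarks
all_instances = read_all_vnncomp_instances(Path("../benchmark/vnncomp2022"))

for benchmark_name, instances in all_instances.items():
    print(benchmark_name, len(instances))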
"},{"location":"api/#autoverify.util.instances.read_verification_result_from_csv","title":"read_verification_result_from_csv(csv_path)","text":"

Reads a verification results CSV into a list of VerificationDataResult objects.

Source code in autoverify/util/instances.py
def read_verification_result_from_csv(
    csv_path: Path,
) -> list[VerificationDataResult]:
    """Reads a verification results CSV into a list of `VerificationDataResult`."""
    results_df = pd.read_csv(csv_path)
    verification_results: list[VerificationDataResult] = []

    for _, row in results_df.iterrows():
        row = row.to_list()
        verification_results.append(VerificationDataResult(*row))

    return verification_results
"},{"location":"api/#autoverify.util.instances.read_vnncomp_instances","title":"read_vnncomp_instances(benchmark, vnncomp_path, *, predicate=None, as_smac=False, resolve_paths=True, instance_file_name='instances.csv')","text":"

Read the instances of a VNNCOMP benchmark.

Reads the CSV file of a VNNCOMP benchmark, parsing the network, property and timeout values.

Parameters:

- benchmark (str, required): The name of the benchmark directory.
- vnncomp_path (Path, required): The path to the VNNCOMP benchmark directory.
- predicate (_InstancePredicate | Iterable[_InstancePredicate] | None, default None): A function that, given a VerificationInstance, returns either True or False. If False is returned, the VerificationInstance is dropped from the returned instances.
- as_smac (bool, default False): Return the instances as SMAC instance strings.
- resolve_paths (bool, default True): Resolve the network and property paths to absolute paths.
- instance_file_name (str, default "instances.csv"): Name of the CSV file listing the instances.

Returns:

- list[VerificationInstance] | list[str]: A list of VerificationInstance or string objects that hold the network, property and timeout.

Source code in autoverify/util/instances.py
def read_vnncomp_instances(
    benchmark: str,
    vnncomp_path: Path,
    *,
    predicate: _InstancePredicate | Iterable[_InstancePredicate] | None = None,
    as_smac: bool = False,
    resolve_paths: bool = True,
    instance_file_name: str = "instances.csv",
) -> list[VerificationInstance] | list[str]:
    """Read the instances of a VNNCOMP benchmark.

    Reads the CSV file of a VNNCOMP benchmark, parsing the network, property
    and timeout values.

    Args:
        benchmark: The name of the benchmark directory.
        vnncomp_path: The path to the VNNCOMP benchmark directory.
        predicate: A function that, given a `VerificationInstance`, returns
            either True or False. If False is returned, the
            `VerificationInstance` is dropped from the returned instances.
        as_smac: Return the instances as SMAC instance strings.
        resolve_paths: Resolve the network and property paths to absolute
            paths.
        instance_file_name: Name of the CSV file listing the instances.

    Returns:
        list[VerificationInstance] | list[str]: A list of
            `VerificationInstance` or string objects
            that hold the network, property and timeout.
    """
    if not vnncomp_path.is_dir():
        raise ValueError("Could not find VNNCOMP directory")

    benchmark_dir = vnncomp_path / benchmark

    if not benchmark_dir.is_dir():
        raise ValueError(
            f"{benchmark} is not a valid benchmark in {str(vnncomp_path)}"
        )

    instances = benchmark_dir / instance_file_name
    verification_instances = []

    if predicate and not isinstance(predicate, Iterable):
        predicate = [predicate]

    with open(str(instances)) as csv_file:
        reader = csv.reader(csv_file)

        for row in reader:
            network, property, timeout = row
            net_path = Path(str(benchmark_dir / network))
            prop_path = Path(str(benchmark_dir / property))

            instance = VerificationInstance(
                net_path if not resolve_paths else net_path.resolve(),
                prop_path if not resolve_paths else prop_path.resolve(),
                int(float(timeout)),  # FIXME: Timeouts can be floats
            )

            if predicate and not _passes_at_least_1(predicate, instance):
                continue

            if as_smac:
                instance = instance.as_smac_instance()  # type: ignore

            verification_instances.append(instance)

    return verification_instances
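As a sketch of the predicate argument, the following keeps only instances whose timeout is at most 300 seconds; the benchmark name and path are assumptions:

from pathlib import Path

from autoverify.util.instances import read_vnncomp_instances
from autoverify.util.verification_instance import VerificationInstance

def short_timeout(instance: VerificationInstance) -> bool:
    # Keep only instances that allow at most 300 seconds
    return instance.timeout <= 300

instances = read_vnncomp_instances(
    "mnist_fc",
    vnncomp_path=Path("../benchmark/vnncomp2022"),
    predicate=short_timeout,
)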
"},{"location":"api/#autoverify.util.instances.unique_networks","title":"unique_networks(instances)","text":"

Utility function to keep only unique networks.

Source code in autoverify/util/instances.py
def unique_networks(
    instances: Iterable[VerificationInstance],
) -> list[VerificationInstance]:
    """Utility function to keep only unique networks."""
    unique = []
    seen: set[str] = set()

    for instance in instances:
        if instance.network.name not in seen:
            unique.append(instance)
            seen.add(instance.network.name)

    return unique
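A small sketch with hypothetical file names, showing that a repeated network is dropped:

from pathlib import Path

from autoverify.util.instances import unique_networks
from autoverify.util.verification_instance import VerificationInstance

a = VerificationInstance(Path("net1.onnx"), Path("p1.vnnlib"), 60)
b = VerificationInstance(Path("net1.onnx"), Path("p2.vnnlib"), 60)

# Only the first instance of net1.onnx is kept
print(len(unique_networks([a, b])))  # 1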
"},{"location":"api/#autoverify.util.instances.verification_instances_to_smac_instances","title":"verification_instances_to_smac_instances(instances)","text":"

Convert a list of VerificationInstance objects to SMAC instances.

See the as_smac_instance docstring of the VerificationInstance class for more details.

Parameters:

- instances (Sequence[VerificationInstance], required): The list of VerificationInstance objects to convert.

Returns:

- list[str]: The SMAC instance strings.

Source code in autoverify/util/instances.py
def verification_instances_to_smac_instances(
    instances: Sequence[VerificationInstance],
) -> list[str]:
    """Convert a list of `VerificationInstance` objects to SMAC instances.

    See the `as_smac_instance` docstring of the `VerificationInstance` class
    for more details.

    Args:
        instances: The list of `VerificationInstance` objects to convert.

    Returns:
        list[str]: The SMAC instance strings.
    """
    return [inst.as_smac_instance() for inst in instances]
"},{"location":"api/#autoverify.util.instances.write_verification_results_to_csv","title":"write_verification_results_to_csv(results, csv_path)","text":"

Writes a verification results DataFrame to a CSV file.

Source code in autoverify/util/instances.py
def write_verification_results_to_csv(results: pd.DataFrame, csv_path: Path):
    """Writes a verification results DataFrame to a CSV file."""
    results.to_csv(csv_path, index=False)
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance","title":"VerificationInstance dataclass","text":"

VerificationInstance class.

Attributes:

- network (Path): The Path to the network.
- property (Path): The Path to the property.
- timeout (int): Maximum wallclock time.

Source code in autoverify/util/verification_instance.py
@dataclass(frozen=True, eq=True)
class VerificationInstance:
    """VerificationInstance class.

    Attributes:
        network: The `Path` to the network.
        property: The `Path` to the property.
        timeout: Maximum wallclock time.
    """

    network: Path
    property: Path
    timeout: int

    @classmethod
    def from_str(cls, str_instance: str):
        """Create from a comma-separated string."""
        network, property, timeout = str_instance.split(",")
        return cls(Path(network), Path(property), int(timeout))

    def __hash__(self):
        """Hash the VI."""
        return hash(
            (
                self.network.expanduser().resolve(),
                self.property.expanduser().resolve(),
                self.timeout,
            )
        )

    def __str__(self):
        """Short string representation of the `VerificationInstance`."""
        return f"{self.network.name} :: {self.property.name} :: {self.timeout}"

    def as_smac_instance(self) -> str:
        """Return the instance in a `f"{network},{property},{timeout}"` format.

        A SMAC instance has to be passed as a single string to the
        target_function, in which we split the instance string on the comma
        again to obtain the network, property and timeout.

        If no timeout is specified, the `DEFAULT_VERIFICATION_TIMEOUT_SEC`
        global is used.

        Returns:
            str: The smac instance string
        """
        timeout: int = self.timeout or DEFAULT_VERIFICATION_TIMEOUT_SEC
        return f"{str(self.network)},{str(self.property)},{str(timeout)}"

    def as_row(self, resolve_paths: bool = True) -> list[str]:
        """Returns the instance as a list of strings."""
        net = (
            str(self.network.expanduser().resolve())
            if resolve_paths
            else str(self.network)
        )
        prop = (
            str(self.property.expanduser().resolve())
            if resolve_paths
            else str(self.property)
        )

        return [net, prop, str(self.timeout)]

    # HACK: Assuming some names and layouts here...
    def as_simplified_network(self) -> VerificationInstance:
        """Changes the network path.

        Assumes an "onnx_simplified" dir is present at the same level.
        """
        simplified_nets_dir = self.network.parent.parent / "onnx_simplified"

        return VerificationInstance(
            simplified_nets_dir / self.network.name, self.property, self.timeout
        )
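A short sketch of constructing and inspecting an instance, using hypothetical file names:

from pathlib import Path

from autoverify.util.verification_instance import VerificationInstance

inst = VerificationInstance(
    Path("onnx/test_sat.onnx"), Path("vnnlib/test_prop.vnnlib"), 60
)

print(str(inst))                # test_sat.onnx :: test_prop.vnnlib :: 60
print(inst.as_smac_instance())  # onnx/test_sat.onnx,vnnlib/test_prop.vnnlib,60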
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.__hash__","title":"__hash__()","text":"

Hash the VerificationInstance.

Source code in autoverify/util/verification_instance.py
def __hash__(self):
    """Hash the VI."""
    return hash(
        (
            self.network.expanduser().resolve(),
            self.property.expanduser().resolve(),
            self.timeout,
        )
    )
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.__str__","title":"__str__()","text":"

Short string representation of the VerificationInstance.

Source code in autoverify/util/verification_instance.py
def __str__(self):
    """Short string representation of the `VerificationInstance`."""
    return f"{self.network.name} :: {self.property.name} :: {self.timeout}"
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.as_row","title":"as_row(resolve_paths=True)","text":"

Returns the instance as a list of strings.

Source code in autoverify/util/verification_instance.py
def as_row(self, resolve_paths: bool = True) -> list[str]:
    """Returns the instance as a list of strings."""
    net = (
        str(self.network.expanduser().resolve())
        if resolve_paths
        else str(self.network)
    )
    prop = (
        str(self.property.expanduser().resolve())
        if resolve_paths
        else str(self.property)
    )

    return [net, prop, str(self.timeout)]
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.as_simplified_network","title":"as_simplified_network()","text":"

Changes the network path.

Assumes a \"onnx_simplified\" dir is present at the same level.

Source code in autoverify/util/verification_instance.py
def as_simplified_network(self) -> VerificationInstance:
    """Changes the network path.

    Assumes an "onnx_simplified" dir is present at the same level.
    """
    simplified_nets_dir = self.network.parent.parent / "onnx_simplified"

    return VerificationInstance(
        simplified_nets_dir / self.network.name, self.property, self.timeout
    )
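For a network stored at benchmark/onnx/net.onnx, the simplified variant is therefore expected at benchmark/onnx_simplified/net.onnx. A sketch with hypothetical paths:

from pathlib import Path

from autoverify.util.verification_instance import VerificationInstance

# Hypothetical benchmark layout
inst = VerificationInstance(
    Path("benchmark/onnx/net.onnx"), Path("benchmark/vnnlib/prop.vnnlib"), 60
)
simplified = inst.as_simplified_network()

print(simplified.network)  # benchmark/onnx_simplified/net.onnx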
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.as_smac_instance","title":"as_smac_instance()","text":"

Return the instance in a f\"{network},{property},{timeout}\" format.

A SMAC instance has to be passed as a single string to the target_function, in which we split the instance string on the comma again to obtain the network, property and timeout.

If no timeout is specified, the DEFAULT_VERIFICATION_TIMEOUT_SEC global is used.

Returns:

- str: The smac instance string.

Source code in autoverify/util/verification_instance.py
def as_smac_instance(self) -> str:
    """Return the instance in a `f"{network},{property},{timeout}"` format.

    A SMAC instance has to be passed as a single string to the
    target_function, in which we split the instance string on the comma
    again to obtain the network, property and timeout.

    If no timeout is specified, the `DEFAULT_VERIFICATION_TIMEOUT_SEC`
    global is used.

    Returns:
        str: The smac instance string
    """
    timeout: int = self.timeout or DEFAULT_VERIFICATION_TIMEOUT_SEC
    return f"{str(self.network)},{str(self.property)},{str(timeout)}"
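This is the same format that from_str parses, so the two methods round-trip; a sketch with hypothetical paths:

from pathlib import Path

from autoverify.util.verification_instance import VerificationInstance

inst = VerificationInstance(Path("net.onnx"), Path("prop.vnnlib"), 60)
smac_str = inst.as_smac_instance()  # "net.onnx,prop.vnnlib,60"

assert VerificationInstance.from_str(smac_str) == inst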
"},{"location":"api/#autoverify.util.verification_instance.VerificationInstance.from_str","title":"from_str(str_instance) classmethod","text":"

Create from a comma-separated string.

Source code in autoverify/util/verification_instance.py
@classmethod
def from_str(cls, str_instance: str):
    """Create from a comma-separated string."""
    network, property, timeout = str_instance.split(",")
    return cls(Path(network), Path(property), int(timeout))
"},{"location":"api/#autoverify.util.vnncomp.inst_bench_to_kwargs","title":"inst_bench_to_kwargs(benchmark, verifier, instance)","text":"

Get the kwargs for a benchmark.

Source code in autoverify/util/vnncomp.py
def inst_bench_to_kwargs(
    benchmark: str,
    verifier: str,
    instance: VerificationInstance,
) -> dict[str, Any]:
    """Get the kwargs for a benchmark."""
    if verifier == "nnenum":
        return {"use_auto_settings": True}
    elif verifier == "abcrown":  # TODO: All benchmarks
        if benchmark == "acasxu":
            return {"yaml_override": {"data__num_outputs": 5}}
        elif benchmark.startswith("sri_resnet_"):
            return {
                "yaml_override": {
                    "model__onnx_quirks": "{'Reshape': {'fix_batch_size': True}}"  # noqa: E501
                }
            }
        return {}
    elif verifier == "ovalbab":
        return {}
    elif verifier == "verinet":
        if benchmark == "acasxu":
            return {"transpose_matmul_weights": True}
        elif benchmark == "cifar2020":
            if instance.network.name.find("convBigRELU") >= 0:
                return {"dnnv_simplify": True}
        elif benchmark == "cifar100_tinyimagenet_resnet":
            return {"dnnv_simplify": True}
        elif benchmark == "nn4sys":
            if instance.network.name == "lindex.onnx":
                return {"dnnv_simplify": True}
        return {}

    raise ValueError("Invalid verifier")
"},{"location":"api/#autoverify.util.vnncomp.inst_bench_to_verifier","title":"inst_bench_to_verifier(benchmark, instance, verifier, allocation=None)","text":"

Get an instantiated verifier.

Source code in autoverify/util/vnncomp.py
def inst_bench_to_verifier(
    benchmark: str,
    instance: VerificationInstance,
    verifier: str,
    allocation: tuple[int, int, int] | None = None,
) -> CompleteVerifier:
    """Get an instantiated verifier."""
    verifier_inst = verifier_from_name(verifier)(
        **inst_bench_to_kwargs(benchmark, verifier, instance),
        cpu_gpu_allocation=allocation,
    )
    assert isinstance(verifier_inst, CompleteVerifier)
    return verifier_inst
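A sketch of combining this helper with read_vnncomp_instances for a single benchmark; the benchmark location is an assumption:

from pathlib import Path

from autoverify.util.instances import read_vnncomp_instances
from autoverify.util.vnncomp import inst_bench_to_verifier

# Hypothetical location of the VNNCOMP benchmarks
instances = read_vnncomp_instances(
    "acasxu", vnncomp_path=Path("../benchmark/vnncomp2022")
)
instance = instances[0]

# Instantiate nnenum with the kwargs this benchmark needs
verifier = inst_bench_to_verifier("acasxu", instance, "nnenum")
result = verifier.verify_instance(instance)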
"},{"location":"api/#autoverify.util.vnncomp.inst_bench_verifier_config","title":"inst_bench_verifier_config(benchmark, instance, verifier, configs_dir)","text":"

Return the verifier and the VNNCOMP config.

Source code in autoverify/util/vnncomp.py
def inst_bench_verifier_config(
    benchmark: str,
    instance: VerificationInstance,
    verifier: str,
    configs_dir: Path,
) -> Configuration | Path | None:
    """Return the verifier and the VNNCOMP config."""
    if benchmark == "test_props":
        return None

    if verifier == "nnenum":
        return None
    elif verifier == "abcrown":
        cfg_file = _get_abcrown_config(benchmark, instance)
        return Path(configs_dir / "abcrown" / cfg_file)
    elif verifier == "ovalbab":
        cfg = _get_ovalbab_config(benchmark)
        if cfg is not None:
            return Path(configs_dir / "ovalbab" / cfg)
        return cfg
    elif verifier == "verinet":
        return _get_verinet_config(benchmark)

    raise ValueError("Invalid verifier")
"},{"location":"how-to-guides/","title":"How-To-Guides","text":""},{"location":"how-to-guides/#verifying-properties","title":"Verifying Properties","text":""},{"location":"how-to-guides/#simple-example","title":"Simple Example","text":"

After installing one or more verifiers, here is how to use them to verify a property. Networks should be in the ONNX format, properties in the VNNLIB format.

from pathlib import Path

from result import Err, Ok

from autoverify.verifier import AbCrown

if __name__ == "__main__":
    verifier = AbCrown()
    network = Path("my_network.onnx")
    prop = Path("my_property.vnnlib")

    result = verifier.verify_property(network, prop)

    if isinstance(result, Ok):
        outcome = result.unwrap().result
        print("Verification finished, result:", outcome)
    elif isinstance(result, Err):
        print("Error during verification:")
        print(result.unwrap_err().stdout)
"},{"location":"how-to-guides/#running-vnncomp-benchmarks","title":"Running VNNCOMP Benchmarks","text":"

Auto-Verify supports reading benchmarks defined in the VNNCOMP style, i.e. benchmarks with the following directory structure:

vnncomp2022
└── test_props
    ├── instances.csv
    ├── onnx
    │   ├── test_nano.onnx
    │   ├── test_sat.onnx
    │   └── test_unsat.onnx
    └── vnnlib
        ├── test_nano.vnnlib
        └── test_prop.vnnlib

Here, instances.csv is a CSV file with three columns: network, property, timeout. For example, the test_props directory contains the following three verification instances:

onnx/test_sat.onnx,vnnlib/test_prop.vnnlib,60
onnx/test_unsat.onnx,vnnlib/test_prop.vnnlib,60
onnx/test_nano.onnx,vnnlib/test_nano.vnnlib,60

VNNCOMP benchmarks can be found at the following links: 2022, 2023. Make sure to unzip all files inside the benchmark after downloading it.

Below is a code snippet that runs this benchmark. Note the Path pointing to the benchmark location.

from pathlib import Path

from result import Err, Ok

from autoverify.util.instances import read_vnncomp_instances
from autoverify.verifier import Nnenum

test_props = read_vnncomp_instances(
    "test_props", vnncomp_path=Path("../benchmark/vnncomp2022")
)

if __name__ == "__main__":
    verifier = Nnenum()

    for instance in test_props:
        print(instance)
        result = verifier.verify_instance(instance)

        if isinstance(result, Ok):
            outcome = result.unwrap().result
            print("Verification finished, result:", outcome)
        elif isinstance(result, Err):
            print("Error during verification:")
            print(result.unwrap_err().stdout)
"},{"location":"how-to-guides/#algorithm-configuration","title":"Algorithm Configuration","text":"

Each of the verification tools comes equipped with a ConfigurationSpace, which can be used to sample a Configuration for that verification tool. For example:

from autoverify.verifier import Nnenum

if __name__ == "__main__":
    verifier = Nnenum()
    config = verifier.config_space.sample_configuration()
    print(config)
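A sampled Configuration can then be passed to a verification run through the config argument; a sketch using hypothetical network and property files:

from pathlib import Path

from autoverify.verifier import Nnenum

if __name__ == "__main__":
    verifier = Nnenum()
    config = verifier.config_space.sample_configuration()

    # Hypothetical network and property files
    result = verifier.verify_property(
        Path("my_network.onnx"), Path("my_property.vnnlib"), config=config
    )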
"},{"location":"how-to-guides/#smac-example","title":"SMAC Example","text":"

We can apply algorithm configuration techniques (or hyperparameter optimization) using SMAC. In the example below, we try to find a configuration for AB-CROWN on the first 10 instances of the mnist_fc benchmark from VNNCOMP2022.

import sys
from pathlib import Path

from ConfigSpace import Configuration
from result import Err, Ok
from smac import AlgorithmConfigurationFacade, Scenario

from autoverify.util.instances import (
    read_vnncomp_instances,
    verification_instances_to_smac_instances,
)
from autoverify.util.smac import index_features
from autoverify.util.verification_instance import VerificationInstance
from autoverify.verifier import AbCrown

mnist = read_vnncomp_instances(
    "mnist_fc", vnncomp_path=Path("../benchmark/vnncomp2022")
)[:10]
verifier = AbCrown()

def target_function(config: Configuration, instance: str, seed: int):
    seed += 1  # Mute unused var warnings; (can't rename to _)
    verif_inst = VerificationInstance.from_str(instance)
    result = verifier.verify_instance(verif_inst, config=config)

    # SMAC handles exceptions by setting the cost to `inf`
    verification_result = result.unwrap_or_raise(Exception)

    return float(verification_result.took)

if __name__ == "__main__":
    cfg_space = verifier.config_space
    name = verifier.name
    instances = verification_instances_to_smac_instances(mnist)

    scenario = Scenario(
        cfg_space,
        name="ab_tune",
        instances=instances,
        instance_features=index_features(instances),
        walltime_limit=600,
        output_directory=Path("ab_tune_out"),
        n_trials=sys.maxsize,  # Using walltime limit instead
    )
    smac = AlgorithmConfigurationFacade(
        scenario, target_function, overwrite=True
    )
    inc = smac.optimize()
    print(inc)

For more examples on how to use SMAC, please refer to the SMAC documentation.

"},{"location":"how-to-guides/#parallel-portfolios","title":"Parallel Portfolios","text":"

Note

Custom verification tools are not yet supported for parallel portfolios.

"},{"location":"how-to-guides/#constructiong-a-portfolio","title":"Constructiong a Portfolio","text":"

Portfolios can be constructed as shown below. In this example, we try to construct a portfolio using the Hydra algorithm on the mnist_fc benchmark from VNNCOMP2022. We include the four verification tools that can currently be part of a portfolio and give the procedure a walltime limit of 24 hours.

from pathlib import Path

from autoverify.portfolio import Hydra, PortfolioScenario
from autoverify.util.instances import read_vnncomp_instances

benchmark = read_vnncomp_instances(
    "mnist_fc", vnncomp_path=Path("../benchmark/vnncomp2022")
)

if __name__ == "__main__":
    pf_scenario = PortfolioScenario(
        ["nnenum", "abcrown", "ovalbab", "verinet"],
        [
            ("nnenum", 0, 0),
            ("verinet", 0, 1),
            ("abcrown", 0, 1),
            ("ovalbab", 0, 1),
        ],
        benchmark,
        4,
        (60 * 60 * 24) / 4,
        alpha=0.9,
        output_dir=Path("PF_mnist_fc"),
    )

    hydra = Hydra(pf_scenario)
    pf = hydra.tune_portfolio()
    pf.to_json(Path("mnist_fc_portfolio.json"))
"},{"location":"how-to-guides/#running-a-portfolio","title":"Running a Portfolio","text":"

Portfolios can be read from a JSON file or built by specifying the verification tools in Python code. Below is an example of how to run a portfolio in parallel on some instances. Let's take the portfolio we created in the previous example and run it on the same benchmark.

from pathlib import Path

from autoverify.portfolio import Portfolio, PortfolioRunner
from autoverify.util.instances import read_vnncomp_instances

benchmark = read_vnncomp_instances(
    "mnist_fc", vnncomp_path=Path("../benchmark/vnncomp2022")
)

if __name__ == "__main__":
    mnist_pf = Portfolio.from_json(Path("mnist_fc_portfolio.json"))
    pf_runner = PortfolioRunner(mnist_pf)

    pf_runner.verify_instances(
        benchmark,
        out_csv=Path("PF_mnist_fc_results.csv"),
    )
"}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 706669ae478010b5eec7264e2e4cc52da2cdee6a..868de345f3fc41834b5f60e05fe1ee5a747a7203 100644 GIT binary patch delta 13 Ucmb=gXP58h;OK5Bn8;oM038hkO8@`> delta 13 Ucmb=gXP58h;D|{Io5)@P02=iJ