Adding Metrics from External Packages

Andrew Borgman edited this page Apr 6, 2023 · 2 revisions

Background

This evolving document is a field guide for adding new metrics that are computed in external packages to the suite of metrics and assessments available within riskmetric. The approach aims to strike a balance between two of the riskmetric development team's goals, which can at times be at odds with one another:

  • Allow for extensibility via a flexible S3-based design that enables additional metrics and assessments to be added without changing the core end-user workflow.
  • Keep the dependency footprint small

Additional history regarding the design decisions made in this approach can be found here and here.

This document currently covers implementing the following approach:

Option 2: All metrics created in riskmetric, but only run if Suggests dependency is installed

As a way of mitigating this management, perhaps we could only turn on these behaviors when those dependencies are available. However, this feels like an enormous amount of dependency management, which really should be handled by the package installation instead.

This guidance is likely to evolve, and this document should be kept up to date as the development team's thinking develops.

Option 2: All metrics created in riskmetric, but only run if Suggests dependency is installed

Hooks to all_assessments-based workflows

The goal of this approach is to "magically" make new metrics available to the end user based on the availability of dependency packages listed in Suggests. In short, assessments and metrics appear in all_assessments-based workflows when their dependency packages are available, and are omitted when a dependency is not present in the end user's local library.

This behavior is achieved through the use of specific attributes added to an assess_* function that requires dependencies listed in Suggests to function properly. These attributes are used by the logic in riskmetric::all_assessments to provide the "magic" behavior outlined above.
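As a rough illustration of this mechanism (a sketch only: the attribute name `suggested_dependency`, the filtering helper, and the `oysteR` dependency name below are assumptions for illustration, not riskmetric's actual implementation), a Suggests-backed assessment function could carry an attribute naming its required package, and a discovery step like `all_assessments` could keep only those assessments whose dependency is installed:

```r
# Hypothetical: a Suggests-backed assessment carries an attribute naming
# its required package (attribute and dependency names are illustrative)
assess_security <- function(x, ...) UseMethod("assess_security")
attr(assess_security, "suggested_dependency") <- "oysteR"

# Keep only assessments whose suggested dependency (if any) is installed,
# so missing Suggests packages silently drop their metrics from the workflow
available_assessments <- function(assessments) {
  Filter(function(f) {
    dep <- attr(f, "suggested_dependency")
    is.null(dep) || requireNamespace(dep, quietly = TRUE)
  }, assessments)
}
```

An assessment with no `suggested_dependency` attribute is always kept, matching the behavior of dependency-free metrics.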

Handling calls to assess_* functions when package from Suggests is not present

The behavior described above shields end users following the canonical riskmetric workflow (pkg_ref -> pkg_assess -> pkg_score) from missing-dependency errors, but it does not protect against issues that may arise when calling a Suggests-backed assessment function directly (e.g. assess_security).

To facilitate a consistent end-user experience, a utility function (validate_suggests_install) has been added to check for the presence of a dependency package and prompt the user to install it if necessary. It is expected that the S3 generics for Suggests-backed assess_* and pkg_ref_cache.* functions will call validate_suggests_install on the first line of their body, immediately followed by the UseMethod call for dispatch (see examples here and here).
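A minimal sketch of the prescribed shape follows; the helper body and the `oysteR` dependency name are illustrative assumptions, not riskmetric's exact code:

```r
# Simplified stand-in for riskmetric's validate_suggests_install:
# error with an actionable message when the Suggests package is absent
validate_suggests_install <- function(pkg) {
  if (!requireNamespace(pkg, quietly = TRUE)) {
    stop(sprintf(
      "Package '%s' is required for this assessment; install it with install.packages(\"%s\").",
      pkg, pkg
    ))
  }
  invisible(TRUE)
}

# Suggests-backed generic: validate the dependency first, then dispatch
assess_security <- function(x, ...) {
  validate_suggests_install("oysteR")
  UseMethod("assess_security")
}
```

Placing the check before UseMethod means every method of the generic is protected by a single guard, rather than each method repeating it.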

There is also a mechanism to throttle the number of times a user is prompted to install a package. A package-level option should be set following the pattern skip_{dependency_name}_install, ensuring a user is prompted to install the package at most once per session.

Implementation Consistency Checks

A unit test has been added to ensure additional Suggests-backed assessment functions and their associated metrics are added according to the design specification described above. It runs a series of checks confirming that package-level options are set correctly, that attributes are correctly set on the assess_* generics, and that validate_suggests_install is appropriately called in the assess_* and pkg_ref_cache.* generics. This test will need to be kept up to date with any changes to the design spec.
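A check in the spirit of that unit test might be sketched as follows; the `suggested_dependency` attribute name and the helper itself are assumptions, not the actual test code:

```r
# Hypothetical spec check: a Suggests-backed generic must carry the
# (assumed) suggested_dependency attribute and must mention
# validate_suggests_install in its body before dispatch
check_assessment_spec <- function(f) {
  dep <- attr(f, "suggested_dependency")
  calls_validate <- any(grepl(
    "validate_suggests_install",
    deparse(body(f)),
    fixed = TRUE
  ))
  !is.null(dep) && calls_validate
}
```

Inspecting `deparse(body(f))` is a blunt but simple way to confirm the guard call is present; a real test could instead walk the call tree for more precision.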