Releases: OasisLMF/ktools

Release v3.12.4

18 Sep 11:01

ktools Changelog - v3.12.4

  • #387 - Bump actions/download-artifact from 3 to 4.1.7 in /.github/workflows
  • #390 - Average Loss Convergence Table (ALCT) fix
  • #391 - Fix GH actions artifact

ktools Notes

Average Loss Convergence Table (ALCT) fix - (PR #390)

  • Changed the calculation of StandardError, which is used for the confidence intervals of the AAL estimate, to be equal to StandardErrorVuln from the ANOVA metrics
  • Reintroduced the ANOVA fields from the first version of the report
  • Set the lower and upper confidence thresholds to zero where StandardError = 0 (for SampleSize 1)

Release v3.12.3

30 May 11:55

ktools Changelog - v3.12.3

  • #382 - footprinttocsv produces no output for large event IDs
  • #383 - Event ID should not exceed MAX_INT when converting footprint file from csv to binary
  • #379 - Release 3.12.2

ktools Notes

Remove default range for event IDs in footprinttocsv - (PR #384)

With the -e flag, a user can specify a range of event IDs to output when converting footprint binary files to csv using footprinttocsv. Previously, if no range was entered, a default range with minimum and maximum limits of 1 and 999,999,999 was used. As event IDs greater than this maximum should be valid, the default range has been removed. Therefore, if the user does not request a range, all event IDs are now converted from binary to csv and sent to the output.
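
A minimal sketch of the behavioural change, using illustrative names rather than ktools source: previously a missing -e range fell back to the limits 1 and 999,999,999, whereas now a missing range means no filtering at all.

// Illustrative only: how event ID filtering behaves with and without a user-supplied range.
bool outputEventID(int event_id, bool rangeGiven, int eventFrom, int eventTo) {
    if (!rangeGiven) return true;   // no -e flag: every event ID is converted and output
    return event_id >= eventFrom && event_id <= eventTo;
}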

Introduce maximum Event ID and Areaperil ID validation checks on footprint files - (PR #385)

Validation checks have been introduced in the footprinttobin and validatefootprint components to ensure that the Event ID and Areaperil ID do not exceed the maximum integer value. To facilitate this, the method Validate::CheckIDDoesNotExceedMaxLimit() has been created, which allows similar checks on other IDs to be introduced in the future if required.
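
A sketch of the kind of check described, with an invented signature; the actual Validate::CheckIDDoesNotExceedMaxLimit() in ktools may differ:

#include <climits>
#include <cstdio>
#include <cstdlib>

// Illustrative: fail if an ID read as a 64-bit value does not fit in a 32-bit integer.
void CheckIDDoesNotExceedMaxLimit(long long id, const char *idName) {
    if (id > INT_MAX) {
        fprintf(stderr, "ERROR: %s %lld exceeds the maximum value %d\n", idName, id, INT_MAX);
        exit(EXIT_FAILURE);
    }
}

// e.g. CheckIDDoesNotExceedMaxLimit(event_id, "Event ID");
//      CheckIDDoesNotExceedMaxLimit(areaperil_id, "Areaperil ID");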

Release v3.12.2

29 Apr 11:02

ktools Changelog - v3.12.2

  • #375 - Update documentation for recent changes in validation tools
  • #377 - Allow number of affected risks (sample index = -4) to be processed by summarycalctocsv
  • #380 - Pin OSX builds to v13

ktools Notes

Documentation updated to reflect changes to model file validation checks - (PR #376)

Recent changes to the model file validation checks include the incorporation of these checks into the csv to binary conversion tools by default (see PR #370) and additional model file validation checks (see PR #373). The documentation has been updated to reflect these changes.

Allow number of affected risks to be processed by summarycalctocsv - (PR #378)

Sample index (sidx) -4 has been reassigned to represent the number of affected risks. Therefore, filters on sidx = -4 have been removed in summarycalctocsv, allowing the number of affected risks to be included in the output csv file.
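
A minimal sketch of the change, using a hypothetical constant name rather than summarycalctocsv source:

// Illustrative only: sidx = -4 now carries the number of affected risks and passes through to the csv.
const int NUMBER_OF_AFFECTED_RISKS_IDX = -4;   // hypothetical name

bool keepRow(int sidx) {
    // previously: if (sidx == NUMBER_OF_AFFECTED_RISKS_IDX) return false;
    return true;   // the filter has been removed, so these rows reach the output csv
}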

Pin OSX builds to v13 - (PR #380)

Build system workaround for #381

Release v3.12.1

26 Mar 08:42

ktools Changelog - v3.12.1

  • #372 - Additional model file validation checks

ktools Notes

Additional model file validation checks - (PR #373)

Two additional model file validation checks have been incorporated:

  1. In the vulnerability file, damage bin indices must be contiguous and the first bin must have an ID of 1. This affects validatevulnerability and vulnerabilitytobin. The check for duplicate damage bin indices has been removed as it is no longer required.
  2. In the damage bin dictionary file, a warning is produced should the interpolation value not lie at the bin centre. This affects validatedamagebin and damagebintobin. A sketch of both checks follows this list.
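
A sketch of both checks under illustrative names; the actual ktools implementation and its tolerances may differ:

#include <cmath>
#include <cstdio>

// Check 1 (vulnerability file): bin indices for each vulnerability ID must run 1, 2, 3, ...
bool CheckDamageBinIndicesContiguous(const int *binIndex, int n) {
    for (int i = 0; i < n; i++) {
        if (binIndex[i] != i + 1) {
            fprintf(stderr, "ERROR: Damage bin indices must be contiguous and start at 1\n");
            return false;
        }
    }
    return true;
}

// Check 2 (damage bin dictionary): only a warning if the interpolation value is off-centre.
void CheckInterpolationAtBinCentre(int binIndex, float binFrom, float binTo, float interpolation) {
    float centre = (binFrom + binTo) / 2.0f;
    if (std::fabs(interpolation - centre) > 1e-6f)
        fprintf(stderr, "WARNING: Interpolation value for bin %d does not lie at the bin centre\n",
                binIndex);
}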

Release v3.12.0

01 Feb 09:29

ktools Changelog - v3.12.0

  • #369 - Release 3.11.1
  • #356 - Incorporate model file validation checks when running csv to binary conversion tools

ktools Notes

Incorporate model file validation checks into csv to binary conversion tools - (PR #370)

Validation checks for damage bin dictionary, footprint and vulnerability model files have been incorporated into their respective csv to binary conversion tools, and are run by default. These checks can be suppressed with the -N command line flag. For example:

# Convert to binary file and suppress validation checks
$ damagebintobin -N < damage_bin_dict.csv > damage_bin_dict.bin
$ footprinttobin -N -i 121 -b footprint.bin -x footprint.idx < footprint.csv
$ vulnerabilitytobin -N -d 102 < vulnerability.csv > vulnerability.bin

# Convert to binary file and execute validation checks
$ damagebintobin < damage_bin_dict.csv > damage_bin_dict.bin
$ footprinttobin -i 121 -b footprint.bin -x footprint.idx < footprint.csv
$ vulnerabilitytobin -d 102 < vulnerability.csv > vulnerability.bin

# Execute validation checks
$ validatedamagebin < damage_bin_dict.csv
$ validatefootprint < footprint.csv
$ validatevulnerability < vulnerability.csv

Additionally, the -S command line flag has been introduced to validatevulnerability and vulnerabilitytobin to suppress warnings when not all intensity bins are present for each vulnerability ID. Suppressing these warnings is recommended for multiple-peril models:

$ validatevulnerability -S < vulnerability.csv
WARNING: Vulnerability ID 1: Intensity bin 4 missing. All intensity bins must be present for each vulnerability ID in single peril models.
WARNING: Vulnerability ID 2: Intensity bins 1 and 2 missing. All intensity bins must be present for each vulnerability ID in single peril models.
INFO: All Vulnerability file validation checks have passed. Please take note of any warnings.

Release v3.11.1

11 Dec 12:11

ktools Changelog - v3.11.1

  • #366 - pltcalc -nan in standard deviation when sample size = 1

Release v3.11.0

08 Nov 15:56

ktools Changelog - v3.11.0

  • #353 - Add runtime user supplied secondary factor option to placalc
  • #342 - aalcalc Performance Improvements
  • #304 - CALT estimated standard error in AAL overstates observed sampling error
  • #359 - CSV to BIN conversion tool for aggregate weights and vulnerability definitions.

Release v3.10.1

06 Oct 09:48

ktools Changelog - v3.10.1

  • #353 - Add runtime user supplied secondary factor option to placalc
  • #342 - aalcalc Performance Improvements

ktools Notes

Add runtime user-supplied relative, secondary factor option to placalc - (PR #354)

An optional relative flat factor can be specified by the user and applied to all loss factors with the command line argument -f. For example, to apply a relative secondary factor of 0.8, the following can be entered:

$ placalc -f 0.8 < gulcalc_output.bin > placalc_output.bin

The relative secondary factor must lie within the range [0, 1] and is applied to the deviation of the factor from 1. For example:

event_id   factor from model   relative factor from user   applied factor
1          1.10                0.8                         1.08
2          1.20                0.8                         1.16
3          1.00                0.8                         1.00
4          0.90                0.8                         0.92
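
The arithmetic behind the table above, as a one-line sketch (illustrative, not placalc source): the user's factor scales the model factor's deviation from 1.

// applied factor = 1 + relative factor * (model factor - 1)
float applyRelativeFactor(float modelFactor, float relativeFactor) {
    return 1.0f + relativeFactor * (modelFactor - 1.0f);
}

// e.g. applyRelativeFactor(1.10f, 0.8f) -> 1.08
//      applyRelativeFactor(0.90f, 0.8f) -> 0.92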

Add runtime user-supplied absolute, uniform factor option to placalc

Alternatively, an absolute, uniform post loss amplification/reduction factor can be applied to all losses by the user with the command line argument -F. For example, to specify a uniform factor of 0.8 across all losses, the following can be entered:

$ placalc -F 0.8 < gulcalc_output.bin > placalc_output.bin

If specified, the loss factors from the model (those in lossfactors.bin) are ignored. This factor must be positive and is applied uniformly across all losses. For example:

event_id   factor from model   uniform factor from user   applied factor
1          1.10                0.8                        0.8
2          1.20                0.8                        0.8
3          1.00                0.8                        0.8
4          0.90                0.8                        0.8

The absolute, uniform factor is incompatible with the relative, secondary factor given above. Therefore, if both are given by the user, a warning is issued and the relative, secondary factor is ignored:

$ placalc -f 0.8 -F 0.8 < gulcalc_output.bin > placalc_output
WARNING: Relative secondary and absolute factors are incompatible
INFO: Ignoring relative secondary factor

Add tests for Post Loss Amplification (PLA) components

Acceptance tests for placalc, amplificationstobin, amplificationstocsv, lossfactorstobin and lossfactorstocsv have been included.

New component aalcalcmeanonly - (PR #357)

A new component aalcalcmeanonly calculates the overall average period loss. Unlike aalcalc, it does not calculate the standard deviation from the average. Therefore, it has a quicker execution time and uses less memory.
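
A rough sketch of why a mean-only pass is cheaper, using illustrative types rather than the component's actual internals: only a running total of period losses is needed, with no per-period accumulators to hold and square for a standard deviation.

// Illustrative only: the overall average period loss from a running sum.
struct MeanOnlyAAL {
    double sumPeriodLoss = 0.0;
    long long numPeriods = 0;

    void addPeriodLoss(double loss) { sumPeriodLoss += loss; ++numPeriods; }
    double averageAnnualLoss() const {
        return numPeriods > 0 ? sumPeriodLoss / numPeriods : 0.0;
    }
};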

Release v3.10.0

13 Jul 11:59

ktools Changelog - v3.10.0

  • #351 - Introduce components for Post Loss Amplification

ktools Notes

Introduce components for Post Loss Amplification - (PR #351)

Major events can give rise to inflated costs as a result of the shortage of labour, materials, and other factors. Conversely, in some cases the costs incurred may be lower as the main expenses may be shared amongst the sites that are hit in the same area. To account for this, the ground up losses from gulpy/gulcalc are multiplied by post loss amplification factors by a new component placalc.

Five components are introduced:

  • amplificationstobin
  • amplificationstocsv
  • lossfactorstobin
  • lossfactorstocsv
  • placalc

The file static/lossfactors.bin maps event ID-amplification ID pairs to post loss amplification factors, and is supplied by the model provider. The components lossfactorstobin and lossfactorstocsv convert this file between csv and binary formats. The binary format for this file is defined as follows:

  • the first 4 bytes are reserved for future use
  • event ID (4-byte integer)
  • number of amplification IDs associated with the aforementioned event ID (4-byte integer)
  • amplification ID (4-byte integer)
  • loss factor (4-byte float)

The amplification ID and loss factor fields are repeated for every amplification ID associated with the event ID, after which the next event ID follows.
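
A sketch of reading this layout, with our own variable names (not ktools identifiers), assuming the field order described above:

#include <cstdio>

int main() {
    FILE *fin = fopen("static/lossfactors.bin", "rb");
    if (fin == nullptr) return 1;

    int reserved;
    fread(&reserved, sizeof(reserved), 1, fin);   // first 4 bytes: reserved for future use

    int event_id, count;
    while (fread(&event_id, sizeof(event_id), 1, fin) == 1) {
        fread(&count, sizeof(count), 1, fin);     // number of amplification IDs for this event
        for (int i = 0; i < count; i++) {
            int amplification_id;
            float factor;
            fread(&amplification_id, sizeof(amplification_id), 1, fin);
            fread(&factor, sizeof(factor), 1, fin);
            printf("%d,%d,%f\n", event_id, amplification_id, factor);
        }
    }
    fclose(fin);
    return 0;
}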

The file input/amplifications.bin maps item IDs to amplification IDs. Keys with amplification IDs are generated by the OasisLMF (MDK) key server according to the strategy given by the model provider. These are used to generate the amplifications file. The components amplificationstobin and amplificationstocsv convert this file between csv and binary formats. The binary format for this file is defined as follows:

  • the first 4 bytes are reserved for future use.
  • item ID (4-byte integer)
  • amplification ID (4-byte integer)
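
The same idea for the simpler amplifications layout, again with our own names and the field order described above:

#include <cstdio>

int main() {
    FILE *fin = fopen("input/amplifications.bin", "rb");
    if (fin == nullptr) return 1;

    int reserved;
    fread(&reserved, sizeof(reserved), 1, fin);   // first 4 bytes: reserved for future use

    int item_id, amplification_id;
    while (fread(&item_id, sizeof(item_id), 1, fin) == 1) {
        fread(&amplification_id, sizeof(amplification_id), 1, fin);
        printf("%d,%d\n", item_id, amplification_id);
    }
    fclose(fin);
    return 0;
}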

The component placalc uses the files static/lossfactors.bin and input/amplifications.bin to assign loss factors to event ID-item ID pairs from gulpy/gulcalc. Losses are then multiplied by their relevant factors. The output format is identical to that of gulpy/gulcalc: event ID; item ID; sample ID (sidx); and loss.
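
A condensed sketch of that join, with illustrative container types and names (placalc's actual data structures differ):

#include <map>
#include <utility>

// item_id -> amplification_id, from input/amplifications.bin
std::map<int, int> itemToAmplification;
// (event_id, amplification_id) -> loss factor, from static/lossfactors.bin
std::map<std::pair<int, int>, float> lossFactors;

float amplifyLoss(int event_id, int item_id, float loss) {
    auto item = itemToAmplification.find(item_id);
    if (item == itemToAmplification.end()) return loss;        // no amplification ID: pass through
    auto factor = lossFactors.find({event_id, item->second});
    return factor == lossFactors.end() ? loss : loss * factor->second;
}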

Release v3.9.8

08 Jun 09:56

ktools Changelog - v3.9.8

  • #344 - Incorrect Values from Wheatsheaf/Per Sample Mean with Period Weights in leccalc/ordleccalc

ktools Notes

Fix Per Sample Mean (Wheatsheaf Mean) with Period Weights Output from leccalc/ordleccalc - (PR #349)

When a period weights file was supplied by the user, the Per Sample Mean (i.e. Wheatsheaf Mean) from leccalc and ordleccalc was incorrect. After sorting the loss vector in descending order, the vector was then reorganised by period number, nullifying the sorting. This would only yield correct results in the very rare case where loss values decrease with increasing period number.

As the return periods are determined by the period weights, calculating the mean losses would require the data to be traversed twice: once to determine the return periods, and a second time to fill them. However, if the return periods are known in advance, i.e. when the user supplies a return periods file, the first iteration is unnecessary.
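
A sketch of why the return periods depend on the weights (illustrative names only, not leccalc source): the return period attached to each sorted loss is the reciprocal of the cumulative weight of all losses at least as large, so it is only known once the weighted data has been traversed.

#include <algorithm>
#include <vector>

struct WeightedLoss { double loss; double periodWeight; };

// Return period of each loss in one sample, losses sorted descending (illustrative only).
std::vector<double> returnPeriods(std::vector<WeightedLoss> losses) {
    std::sort(losses.begin(), losses.end(),
              [](const WeightedLoss &a, const WeightedLoss &b) { return a.loss > b.loss; });
    std::vector<double> rp;
    double cumulativeWeight = 0.0;                 // exceedance probability so far
    for (const WeightedLoss &wl : losses) {
        cumulativeWeight += wl.periodWeight;
        rp.push_back(1.0 / cumulativeWeight);      // return period = 1 / exceedance probability
    }
    return rp;
}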

As the per sample mean with period weights does not appear to be a very useful metric, this option is only supported when a return periods file is present. Should a return periods file be missing, the following message will be written to the log file:

WARNING: Return periods file must be present if you wish to use non-uniform period weights for Wheatsheaf mean/per sample mean output.
INFO: Wheatsheaf mean/per sample mean output will not be produced.

As outlined above, the per sample mean (i.e. Wheatsheaf mean) will not be calculated or written out.

The Tail Value at Risk (TVaR) is only meaningful if the losses for all possible return periods are generated. Therefore, it has been dropped from the ordleccalc output with the following message written to the log file:

INFO: Tail Value at Risk for Wheatsheaf mean/per sample mean is not supported if you wish to use non-uniform period weights.

The decision to drop support can be revisited should the aforementioned metrics be deemed useful by the community.