
Releases: BerkeleyLab/fiats

Initial Release: Concurrent Inference Capability

22 Nov 02:08
a1089ce

This release provides an inference_engine_t type that encapsulates the state and behavior of a dense neural network with

  1. State:
    a. Weights and biases gathered into contiguous, multidimensional arrays, 🏋️
    b. Hidden layers with a uniform number of neurons. 🧠
  2. Behavior:
    a. A pure infer type-bound procedure that propagates input through the above architecture to produce output. ❄️
    b. An elemental interface for activation functions with one currently implemented: a step function. 🪜
    c. Runtime selection of inference method via the Strategy Pattern:
    • concurrent execution of dot_product intrinsic function invocations or
    • matmul intrinsic function invocations.
    d. Runtime selection of activation functions: currently only a step function is implemented to support the unit tests.
  3. Unit tests that
    a. Read and write neural networks to files, 📁
    b. Construct a network that implements an exclusive-or (XOR) gate in the first hidden layer, followed by a second layer whose weights form the identity matrix so that the second layer serves a pass-through role. 🚧
  4. Examples
    a. Concurrent inference on multiple independent neural networks encapsulated in inference_engine_t objects. 🤔
    b. Construction and writing of a neural network to a file starting from user-defined weights and biases (useful for unit testing).
    c. Reading a neural network from a file and querying it for basic properties.
    d. Reading and writing from a NetCDF file (for future incorporation into the Inference-Engine library for purposes of reading validation data sets). 🥅
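The inference step described in item 2 amounts to a dense-layer forward pass: a `matmul` (or per-neuron `dot_product`) of the weights with the inputs, a bias addition, and an elemental step activation. A minimal, self-contained Fortran sketch of that computation (hypothetical weight and bias values, not the library's actual code):

```fortran
program forward_pass_sketch
  implicit none
  real :: weights(2,2), biases(2), inputs(2), outputs(2)
  ! Hypothetical parameters for illustration only
  weights = reshape([1., -1., -1., 1.], [2,2])
  biases  = [-0.5, -0.5]
  inputs  = [1., 0.]
  ! One layer: matmul, bias addition, elemental step activation
  outputs = step(matmul(weights, inputs) + biases)
  print *, outputs
contains
  elemental function step(x) result(y)
    real, intent(in) :: x
    real :: y
    y = merge(1., 0., x > 0.)
  end function
end program
```

Because `step` is declared `elemental`, it applies pointwise to the whole result of the `matmul` expression, matching the elemental activation-function interface named in item 2b.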

Because the infer procedure is pure, it can be called inside a do concurrent construct, which facilitates concurrent inference using multiple, independent neural networks.
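A minimal sketch of that pattern in standard Fortran, with a stand-in `infer` function rather than the library's `inference_engine_t` interface:

```fortran
program concurrent_inference_sketch
  implicit none
  integer :: i
  real :: inputs(3), outputs(3)
  inputs = [0.25, 0.5, 0.75]
  ! Because infer is pure, the iterations have no hidden side effects,
  ! so the compiler is free to execute them in any order or in parallel.
  do concurrent (i = 1:3)
    outputs(i) = infer(inputs(i))
  end do
  print *, outputs
contains
  pure function infer(x) result(y)
    real, intent(in) :: x
    real :: y
    y = max(0., 2.*x - 1.) ! stand-in for a network forward pass
  end function
end program
```

The same structure applies when each iteration invokes the type-bound `infer` on a distinct `inference_engine_t` object.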

Full Changelog: https://github.com/BerkeleyLab/inference-engine/commits/0.1.0