Releases · BerkeleyLab/fiats
Initial Release: Concurrent Inference Capability
This release provides an `inference_engine_t` type that encapsulates the state and behavior of a dense neural network with
- State:
  a. Weights and biases gathered into contiguous, multidimensional arrays. 🏋️
  b. Hidden layers with a uniform number of neurons. 🧠
- Behavior:
  a. A `pure` `infer` type-bound procedure that propagates input through the above architecture to produce output. ❄️
  b. An `elemental` interface for activation functions, with one currently implemented: a step function (see the sketch after this list). 🪜
  c. Runtime selection of the inference method via the Strategy Pattern: concurrent execution of `dot_product` intrinsic function invocations or of `matmul` intrinsic function invocations.
  d. Runtime selection of activation functions; currently only a step function is implemented, to support the unit tests.
- Unit tests that
  a. Read and write neural networks to files, 📁
  b. Construct a network that implements an exclusive-or (XOR) gate in the first hidden layer, followed by a second layer with weights described by the identity matrix so that the second layer serves a pass-through role. 🚧
- Examples that demonstrate
  a. Concurrent inference on multiple, independent neural networks encapsulated in `inference_engine_t` objects. 🤔
  b. Construction and writing of a neural network to a file, starting from user-defined weights and biases (useful for unit testing).
  c. Reading a neural network from a file and querying it for basic properties.
  d. Reading from and writing to a NetCDF file (for future incorporation into the Inference-Engine library, for the purpose of reading validation data sets). 🥅
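
As a concrete illustration of the `elemental` activation interface noted under Behavior, here is a minimal sketch of a step function in Fortran; the module name `step_activation_m` and the exact declaration are assumptions for illustration, not the library's actual code:

```fortran
module step_activation_m
  implicit none
contains
  elemental function step(x) result(y)
    !! Heaviside step activation: 1 for non-negative x, 0 otherwise.
    real, intent(in) :: x
    real :: y
    y = merge(1., 0., x >= 0.)
  end function
end module

program demo
  use step_activation_m, only : step
  implicit none
  print *, step([-1.5, 0., 2.])  ! elemental call applies element-wise: 0. 1. 1.
end program
```

Declaring the activation `elemental` lets a single scalar definition apply element-wise to an entire layer's pre-activation array, which composes naturally with the array-oriented `dot_product` and `matmul` inference strategies.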
Because the `infer` procedure is `pure`, it can be called inside a `do concurrent` construct, which facilitates concurrent inference using multiple, independent neural networks, as in the sketch below.
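
The following self-contained sketch shows the pattern; the `toy_engine_t` type and its `infer` signature are hypothetical stand-ins invented for illustration and differ from the library's actual interface:

```fortran
module toy_engine_m
  !! Hypothetical stand-in for the library's inference_engine_t, written only
  !! to demonstrate the do concurrent pattern; the real type differs.
  implicit none

  type toy_engine_t
    real, allocatable :: weights(:,:), biases(:)
  contains
    procedure :: infer
  end type

contains

  pure function infer(self, input) result(output)
    !! A single dense layer with a step activation, for brevity.
    class(toy_engine_t), intent(in) :: self
    real, intent(in) :: input(:)
    real, allocatable :: output(:)
    output = merge(1., 0., matmul(self%weights, input) + self%biases >= 0.)
  end function

end module

program concurrent_inference
  use toy_engine_m, only : toy_engine_t
  implicit none
  type(toy_engine_t) :: engines(3)
  real :: inputs(2,3), outputs(2,3)
  integer :: i

  do i = 1, 3  ! give each independent network its own weights and inputs
    engines(i)%weights = reshape([1., 0., 0., 1.], [2, 2])  ! identity weights
    engines(i)%biases = [0., 0.]
    inputs(:,i) = real([i, -i])
  end do

  do concurrent (i = 1:3)  ! legal because infer is pure
    outputs(:,i) = engines(i)%infer(inputs(:,i))
  end do

  print *, outputs
end program
```

Because each iteration of the `do concurrent` loop touches only its own network and its own output column, the compiler is free to execute the iterations in any order or in parallel.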
Full Changelog: https://github.com/BerkeleyLab/inference-engine/commits/0.1.0