
GoMLX, an Accelerated ML and Math Framework


📖 About GoMLX


GoMLX is a fast and easy-to-use set of Machine Learning and generic math libraries and tools. It can be seen as a PyTorch/Jax/TensorFlow for Go.

It can be used to train, fine-tune, modify and combine machine learning models. It provides all the tools to make that work easy: from a complete set of differentiable operators, all the way to UI tools to plot metrics while training in a notebook.

Its main backend engine, based on OpenXLA/PJRT, uses just-in-time compilation to CPU and GPU (optionally TPUs as well). It is the same engine that powers Google's Jax and TensorFlow, and in many cases it delivers the same speed.

Tip

🎓 Quick Start:

It was developed to be a full-featured ML platform for Go, and to make it easy to experiment with ML ideas -- see Long-Term Goals below.

It strives to be simple to read and reason about, leading the user to a correct and transparent mental model of what is going on (no surprises) -- aligned with the Go philosophy -- even at the cost of being more verbose at times.

It is also very flexible and easy to extend, which makes it a good fit for trying non-conventional ideas: use it to experiment with new optimizers, complex regularizers, funky multi-tasking, etc.

Documentation is kept up-to-date (if it is not well documented, it is as if the code were not there), and error messages are useful (always with a stack-trace) and aim to make issues easy to solve.

GoMLX is still evolving, and should be considered experimental. There are occasional changes, but mostly it has just been adding new functionality.

πŸ—ΊοΈ Overview

GoMLX has many important components of an ML framework in place, from the bottom to the top of the stack. But it is still only a slice of what a major ML library/framework should provide (like TensorFlow, Jax or PyTorch).

Examples:

🚀 NEW 🚀:

  • Converting ONNX models to GoMLX with onnx-gomlx: both as an alternative for onnxruntime (leveraging XLA), but also to further fine-tune models. See also go-huggingface to easily download ONNX model files from HuggingFace.
  • Support static linking of PJRT: slower to build the Go program, but deploying it doesn't require installing a PJRT plugin on the target machine. Use go build --tags=pjrt_cpu_static or include import _ "github.com/gomlx/gomlx/backends/xla/cpu/static".
  • Experimental 🚧🛠 support for MacOS (both Arm64 and the older x86_64) for CPU: only with static linking so far.

Highlights:

  • Pre-Trained models to use: InceptionV3 (image model) -- more to come.
  • Docker "gomlx_jupyterlab" with integrated JupyterLab and GoNB (a Go kernel for Jupyter notebooks)
  • Just-In-Time (JIT) compilation using OpenXLA for CPUs and GPUs -- hopefully soon TPUs.
  • Autograd: automatic differentiation -- only gradients for now, no jacobian.
  • Context: automatic variable management for ML models.
  • ML layers library with some of the most popular machine learning "layers": FFN layers,
    activation functions, layer and batch normalization, convolutions, pooling, dropout, Multi-Head-Attention (for transformer layers), LSTM, KAN (B-Splines, GR-KAN/KAT networks, Discrete-KAN, PiecewiseLinear KAN), PiecewiseLinear (for calibration and normalization), various regularizations, FFT (reverse/differentiable), learnable rational functions (both for activations and GR-KAN/KAT networks) etc.
  • Training library, with some pretty-printing. Including plots for Jupyter notebook, using GoNB, a Go Kernel.
    • Also, various debugging tools: collecting values for particular nodes for plotting, simply logging the value of nodes during training, stack-trace of the code where nodes are created.
  • SGD and Adam (AdamW and Adamax) optimizers.
  • Various losses and metrics.
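As a concept illustration only -- this is not GoMLX's implementation (GoMLX computes gradients over its computation graph, reverse-mode) -- the core idea of automatic differentiation can be sketched in a few lines of plain Go using dual numbers (forward mode):

```go
package main

import "fmt"

// Dual carries a value and its derivative w.r.t. one chosen input (forward-mode AD).
type Dual struct{ Val, Deriv float64 }

// Mul applies the product rule alongside the multiplication.
func Mul(a, b Dual) Dual { return Dual{a.Val * b.Val, a.Deriv*b.Val + a.Val*b.Deriv} }

// Add: derivatives add linearly.
func Add(a, b Dual) Dual { return Dual{a.Val + b.Val, a.Deriv + b.Deriv} }

// f computes f(x) = x*x + x together with f'(x).
func f(x Dual) Dual { return Add(Mul(x, x), x) }

func main() {
	// Evaluate at x=3, seeding dx/dx = 1: f(3)=12, f'(3)=2*3+1=7.
	y := f(Dual{Val: 3, Deriv: 1})
	fmt.Println(y.Val, y.Deriv) // 12 7
}
```

GoMLX's autograd does the equivalent bookkeeping automatically over whole graphs of tensor operations, rather than per scalar.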

👥 Support

  • Q&A and discussions
  • Issues
  • Random brainstorming on projects: just start a Q&A and I'm happy to meet on Discord or over a video call.

πŸ› οΈ + βš™οΈ Installation

TL;DR: Two simple options:

(1) Use the Docker;

(2) Use pre-built binaries (C/C++ libraries) for Linux or MacOS (Darwin). See the commands below, or for more details see the
gopjrt installation instructions.

Linux/amd64, run (see source):

curl -sSf https://raw.githubusercontent.com/gomlx/gopjrt/main/cmd/install_linux_amd64.sh | bash

In addition, for Linux+CUDA (NVidia GPU) support, run (see source):

curl -sSf https://raw.githubusercontent.com/gomlx/gopjrt/main/cmd/install_cuda.sh | bash

Depending on what data formats you use, you may want to install hdf5-tools programs (sudo apt install hdf5-tools in Linux).

Darwin/arm64 and Darwin/amd64

Note

Currently, Darwin (MacOS) 🚧🛠 only works with the statically linked PJRT CPU plugin 🛠🚧, so that is the default (see XLA issue #19152 and XLA's discord channels). Experimental.

For Arm64 (M1, M2, ... CPUs), run (see source)

curl -sSf https://raw.githubusercontent.com/gomlx/gopjrt/main/cmd/install_darwin_arm64.sh | bash

For Amd64 (x86_64), run (see source)

curl -sSf https://raw.githubusercontent.com/gomlx/gopjrt/main/cmd/install_darwin_amd64.sh | bash

The easiest way to start playing with it is to pull the docker image that includes GoMLX + JupyterLab + GoNB (a Go kernel for Jupyter) and Nvidia's CUDA runtime (for optional GPU support) pre-installed -- it is ~5GB to download.

From a directory you want to make visible in Jupyter, do:

For GPU support add the flag --gpus all to the docker run command below.

docker pull janpfeifer/gomlx_jupyterlab:latest
docker run -it --rm -p 8888:8888 -v "${PWD}":/home/jupyter/work janpfeifer/gomlx_jupyterlab:latest

It will display a URL starting with 127.0.0.1:8888 in the terminal (including the secret token needed), which you can open in your browser.

You can open and interact with the tutorial from there; it is included in the docker image under the directory Projects/gomlx/examples/tutorial.

More details on the docker here.

🧭 Tutorial

See the tutorial here. It covers a bit of everything.

After that look at the demos in the examples/ directory.

The library itself is well documented (please open issues if something is missing), and the code is not too hard to read. Godoc is available on pkg.go.dev.

Finally, feel free to ask questions: time allowing (when not at work) I'm always happy to help -- I created groups.google.com/g/gomlx-discuss, or use the GitHub discussions page.

Inference

Inference, or serving a model, is currently done using the same Go code that created the model, along with the checkpoint containing the trained weights and hyperparameters. In other words, it uses the same tools used for training.

For a simple example of how to do this and export a model inference as a library, see .../examples/cifar/classifer, and its use in the last cells of the Cifar-10 demo.

In the future we plan to also export models to ONNX or StableHLO and one could use tools that serve those.

🎯 Long-term Goals

  1. Building and training models in Go -- as opposed to Python (or some other language) -- with focus on:
    • Being simple to read and reason about, leading the user to a correct and transparent mental model of what is going on. Even if that means being more verbose when writing.
    • Clean, separable APIs: individual APIs should be self-contained and decoupled where possible.
    • Composability: any component should be replaceable, so it can be customized and experimented with. That sometimes means more coding (there is no one magic train object that does everything), but it makes clear what is happening, and it's easy to replace parts with third-party versions or something custom.
    • Up-to-date documentation: if the documentation is not there or if it's badly written, it's as if the code was not there either.
    • Clear and actionable error reporting
  2. To be a productive research and educational platform to experiment with new ML ideas and learn.
    • Support mirrored training on multiple devices and various forms of distributed training (model and/or data parallelism), in particular to support large language models and similarly large model training.
  3. To be a robust and reliable platform for production. Some sub-goals:
    • Support modern accelerator hardware like TPUs and GPUs.
    • Multiple backends beyond XLA, e.g.: llamacpp, WebNN (with Wasm), a pure Go version, etc.
    • Import pre-trained models from Hugging Face Hub and allow fine-tuning -- ONNX versions already working for many models in onnx-gomlx.
    • Compile models to binary as in C-libraries and/or WebAssembly, to be linked and consumed (inference) anywhere (any language).

FAQ

  • What environment variables are used by GoMLX?
    • GOMLX_BACKEND: defines the backend engine to use (if using backend.New()). The value is formatted as "<backend_name>:<backend_config>". Examples for XLA, the default engine: GOMLX_BACKEND="xla:cpu" (for CPU), GOMLX_BACKEND="xla:cuda" (for Nvidia CUDA) or GOMLX_BACKEND="xla:/path/to/my/pjrt_plugin.so" for some custom PJRT plugin.
    • PJRT_PLUGIN_LIBRARY_PATH: the underlying XLA backend uses this variable as an extra directory to search for plugin locations. It searches the system library paths ($LD_LIBRARY_PATH, /etc/ld.so.conf), the default /usr/local/lib/gomlx/pjrt, and $PJRT_PLUGIN_LIBRARY_PATH if set.
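To make the GOMLX_BACKEND format concrete, here is a stdlib-only sketch of how a "<backend_name>:<backend_config>" value can be split at the first colon; parseBackend is a hypothetical helper for illustration, not part of GoMLX's API:

```go
package main

import (
	"fmt"
	"strings"
)

// parseBackend splits a GOMLX_BACKEND-style value "<backend_name>:<backend_config>"
// at the first ":", so a config may itself contain colons or slashes (e.g. a plugin path).
// Hypothetical helper for illustration only; not part of GoMLX's API.
func parseBackend(v string) (name, config string) {
	name, config, _ = strings.Cut(v, ":")
	return name, config
}

func main() {
	for _, v := range []string{"xla:cpu", "xla:cuda", "xla:/path/to/my/pjrt_plugin.so"} {
		name, config := parseBackend(v)
		fmt.Printf("backend=%s config=%s\n", name, config)
	}
}
```

Splitting only at the first colon is what allows the config part to be a filesystem path to a custom PJRT plugin, as in the last example above.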

🤝 Collaborating

The project looks forward to contributions from anyone interested. Many parts are not yet set in stone, so there is plenty of room for improvements and re-designs for those interested, ideally with good experience in Go, Machine Learning and APIs in general. See the TODO file for inspiration.

No governance guidelines have been established yet.

🚀 Advanced Topics

βš–οΈ License

Copyright 2024 Jan Pfeifer

GoMLX is distributed under the terms of the Apache License Version 2.0. Unless it is explicitly stated otherwise, any contribution intentionally submitted for inclusion in this project shall be licensed under Apache License Version 2.0 without any additional terms or conditions.
