
OpenNMT-tf

OpenNMT-tf is a general-purpose sequence learning toolkit using TensorFlow. While neural machine translation is the main target task, it has been designed to more generally support:

  • sequence to sequence mapping
  • sequence tagging
  • sequence classification
  • language modeling

The project is production-oriented and comes with backward compatibility guarantees.

Key features

OpenNMT-tf focuses on modularity to support advanced modeling and training capabilities:

  • arbitrarily complex encoder architectures
    e.g. mixing RNNs, CNNs, self-attention, etc. in parallel or in sequence.
  • hybrid encoder-decoder models
    e.g. self-attention encoder and RNN decoder or vice versa.
  • neural source-target alignment
    train with guided alignment to constrain attention vectors and output alignments as part of the translation API.
  • multi-source training
    e.g. source text and Moses translation as inputs for machine translation.
  • multiple input formats
    text with support for mixed word/character embeddings, or real-valued vectors serialized in TFRecord files.
  • on-the-fly tokenization
    apply advanced tokenization dynamically during training and detokenize the predictions during inference or evaluation.
  • domain adaptation
    specialize a model to a new domain in a few training steps by updating the word vocabularies in checkpoints.
  • automatic evaluation
    support for saving evaluation predictions and running external evaluators (e.g. BLEU).
  • mixed precision training
    take advantage of the latest NVIDIA optimizations to train models with half-precision floating points.

and all of the above can be used simultaneously to train novel and complex architectures. See the predefined models to discover how they are defined and the API documentation to customize them.
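For instance, here is a minimal sketch of a custom model definition combining a self-attention encoder with an RNN decoder, using class names from the OpenNMT-tf 1.x API; the file name and embedding sizes are illustrative:

# model.py
import opennmt as onmt

def model():
  # Hybrid architecture: self-attention encoder, attentional RNN decoder.
  return onmt.models.SequenceToSequence(
      source_inputter=onmt.inputters.WordEmbedder(
          vocabulary_file_key="source_words_vocabulary",
          embedding_size=512),
      target_inputter=onmt.inputters.WordEmbedder(
          vocabulary_file_key="target_words_vocabulary",
          embedding_size=512),
      encoder=onmt.encoders.SelfAttentionEncoder(num_layers=6),
      decoder=onmt.decoders.AttentionalRNNDecoder(num_layers=4, num_units=512))

Such a definition can be passed to the command line entry point with --model model.py in place of a predefined --model_type.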

OpenNMT-tf is also compatible with some of the best TensorFlow features:

  • replicated and distributed training
  • monitoring with TensorBoard
  • inference with TensorFlow Serving

Usage

OpenNMT-tf requires:

  • Python >= 2.7
  • TensorFlow >= 1.4, < 2.0

We recommend installing it with pip:

pip install OpenNMT-tf

See the documentation for more information.

Command line

OpenNMT-tf comes with several command line utilities to prepare data, train, and evaluate models.
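For example, vocabularies can be built ahead of training with the onmt-build-vocab utility; the paths below are placeholders:

onmt-build-vocab --size 50000 --save_vocab src-vocab.txt src-train.txt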

For all tasks involving model execution, OpenNMT-tf uses a single entry point: onmt-main. A typical OpenNMT-tf run consists of three elements:

  • the run type: train_and_eval, train, eval, infer, export, or score
  • the model type
  • the parameters described in a YAML file

that are passed to the main script:

onmt-main <run_type> --model_type <model> --auto_config --config <config_file.yml>
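The YAML file lists the data files and any hyperparameter overrides. Here is a minimal sketch for a sequence-to-sequence run, assuming the 1.x configuration keys and illustrative file paths:

model_dir: run/

data:
  train_features_file: src-train.txt
  train_labels_file: tgt-train.txt
  eval_features_file: src-val.txt
  eval_labels_file: tgt-val.txt
  source_words_vocabulary: src-vocab.txt
  target_words_vocabulary: tgt-vocab.txt

With such a file, a Transformer training run could then be launched as:

onmt-main train_and_eval --model_type Transformer --auto_config --config my_config.yml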

For more information and examples on how to use OpenNMT-tf, please visit our documentation.

Library

OpenNMT-tf also exposes well-defined and stable APIs. Here is an example using the library to encode a sequence with a self-attentional encoder:

import tensorflow as tf
import opennmt as onmt

tf.enable_eager_execution()

# Build a random batch of input sequences.
inputs = tf.random.uniform([3, 6, 256])
sequence_length = tf.constant([4, 6, 5], dtype=tf.int32)

# Encode with a self-attentional encoder.
encoder = onmt.encoders.SelfAttentionEncoder(num_layers=6)
outputs, _, _ = encoder.encode(
    inputs,
    sequence_length=sequence_length,
    mode=tf.estimator.ModeKeys.TRAIN)

print(outputs)

For more advanced examples, several online resources use OpenNMT-tf as a library:

  • The directory examples/library contains additional examples that use OpenNMT-tf as a library
  • OpenNMT Hackathon 2018 features a tutorial to implement unsupervised NMT using OpenNMT-tf
  • nmt-wizard-docker uses the high-level onmt.Runner API to wrap OpenNMT-tf with a custom interface for training, translating, and serving

For a complete overview of the APIs, see the package documentation.

Compatibility with {Lua,Py}Torch implementations

OpenNMT-tf has been designed from scratch, and compatibility with the {Lua,Py}Torch implementations in terms of usage, design, and features is not a priority. Please submit a feature request for any missing feature or behavior that you found useful in the {Lua,Py}Torch implementations.

Acknowledgments

The implementation is inspired by the following:

Additional resources