
v2.0.0: the first release

@jmschrei released this 06 Jan 14:05

Ledidi was originally developed in the final year of my Ph.D., when I was already partially out the door, and so it only made it into a Workshop on Computational Biology paper at ICML. Since then, several groups have used Ledidi in their work with promising results. I have now returned to Ledidi, providing a PyTorch implementation (the original implementation was in TensorFlow) with improved functionality, better default settings, and investigations confirming that it makes realistic edits.

This release corresponds to the eventual submission of an updated manuscript and includes several important changes over the initial release announcement.

  • Uses PyTorch rather than TensorFlow as the backend
  • Better and more robust defaults have been found that significantly speed up the process, e.g., setting the learning rate to 1.0 rather than 0.01.
  • Early stopping has been added: optimization terminates once the total loss has not decreased for a user-specified number of iterations
  • Allows custom input and output losses
  • The mixture weight lambda is now applied to the input loss rather than the output loss, so setting it to zero places no penalty on the number of edits made
  • The objective is now explicitly to learn a continuous weight matrix over the sequence, rather than the more abstract objective used before (see the sketch after this list)
  • Edits can now be blocked at specified positions by providing a mask
  • In-painting is now supported by not calculating the input loss over positions that are unfilled in the original sequence (see the second sketch below).
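
To make the list above concrete, here is a minimal PyTorch sketch of the general approach it describes: a learned continuous weight matrix, the mixture weight lambda applied to the input loss, early stopping on the total loss, and a mask that blocks edits at certain positions. This is an illustration only, not Ledidi's actual API; all names (`design_edits`, `predictor`, `target`), the softmax relaxation, and the L1-style input loss are assumptions made for exposition.

```python
import torch

def design_edits(predictor, X, target, lam=1.0, lr=1.0, patience=100,
                 max_iters=10000, mask=None):
    """Illustrative sketch: learn a continuous weight matrix over a one-hot
    sequence X of shape (batch, alphabet, length) so the predictor's output
    approaches `target`, while the input loss penalizes the number of edits.
    `mask`, if given, is True where edits are allowed."""
    W = torch.nn.Parameter(torch.zeros_like(X))   # continuous weight matrix
    optimizer = torch.optim.Adam([W], lr=lr)      # lr=1.0 per the new defaults

    best_loss, stall = float("inf"), 0
    for _ in range(max_iters):
        optimizer.zero_grad()

        logits = X + W
        if mask is not None:                      # block edits at masked positions
            logits = torch.where(mask, logits, X)

        X_hat = torch.softmax(logits, dim=1)      # relaxed (continuous) sequence

        output_loss = ((predictor(X_hat) - target) ** 2).mean()
        input_loss = (X_hat - X).abs().sum()      # proxy for the number of edits
        loss = output_loss + lam * input_loss     # lambda weights the input loss

        loss.backward()
        optimizer.step()

        # early stopping: quit once the total loss stops decreasing for `patience` steps
        if loss.item() < best_loss:
            best_loss, stall = loss.item(), 0
        else:
            stall += 1
            if stall > patience:
                break

    with torch.no_grad():
        final = X + W
        if mask is not None:
            final = torch.where(mask, final, X)
        # discretize back to one-hot by taking the argmax at each position
        idx = final.argmax(dim=1, keepdim=True)
        return torch.zeros_like(X).scatter_(1, idx, 1.0)
```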
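
The in-painting behavior could, in the same spirit, be expressed as an input loss that skips positions which are unfilled (all-zero columns) in the original one-hot sequence, so those positions can be filled in freely. The function below is a hypothetical sketch of that idea, not code from the library.

```python
import torch

def inpainting_input_loss(X_hat, X):
    """Count edits only at positions that were filled in the original X."""
    filled = (X.sum(dim=1, keepdim=True) > 0).float()   # 1 where X had a character
    return ((X_hat - X).abs() * filled).sum()
```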