This repository is a collection of useful Jupyter notebooks, code snippets and example JSON files illustrating the use of REINVENT 3.2.
At the moment, the following notebooks are supported:

* **Complete_Use-Case_DRD2_Demo**: a full-fledged use case using public data on DRD2, including the use of predictive models and a discussion of general considerations
* **Create_Model_Demo**: explains how to initialize a new model (prior / agent) for REINVENT, which can then be trained in a transfer learning setup
* **Data_Preparation**: tutorial on how to prepare (clean, filter and standardize) data from a source such as ChEMBL to be used for training
* **Model_Building_Demo**: shows how to train a predictive (QSAR) model to be used with REINVENT, based on the public DRD2 dataset (classification problem)
* **Reinforcement_Learning_Demo**: example reinforcement learning run with a selection of scoring function components, iteratively generating novel compounds with ever higher scores
* **Reinforcement_Learning_Demo_Selectivity**: illustrates the use of the relatively complicated `selectivity_component` to optimize potency against a target while simultaneously pushing for low potency against one or more off-targets
* **Reinforcement_Learning_Demo_Tanimoto**: very simple (only one, easy-to-understand component) reinforcement learning example
* **Reinforcement_Learning_Exploitation_Demo**: illustrates the exploitation scenario, where one is after solutions from an already well-defined subspace of chemical space
* **Reinforcement_Learning_Exploration_Demo**: illustrates the exploration scenario, where the aim is to generate a varied set of solutions to a less stringently defined problem
* **Reinforcement_Learning_Demo_DockStream**: illustrates the use of DockStream in REINVENT, allowing the generative model to gradually optimize the docking score of proposed compounds. For more information on DockStream, please see the DockStream repository and the corresponding DockStreamCommunity repository, which holds tutorial notebooks on DockStream as a standalone molecular docking tool
* **Reinforcement_Learning_Demo_Icolos**: illustrates the use of Icolos in REINVENT using a docking scenario
* **Sampling_Demo**: once an agent has been trained and is producing interesting results, it can be used to generate more compounds without changing it further; this is facilitated by the `sampling mode`
* **Score_Transformations**: many components produce scores on an arbitrary scale, but REINVENT needs to receive them normalized to a number between 0 and 1 (with values close to 1 meaning "good"); score transformations implement this normalization and can be used as shown in this tutorial
* **Scoring_Demo**: in case a set of existing compounds (for example prior to starting a project) should be scored with a scoring function definition, the `scoring mode` can be used
* **Transfer_Learning_Demo**: illustrates the `transfer learning` mode, which is usually used to "pre-train" an agent before `reinforcement learning` when no adequate naive prior is available, or to further focus an already existing agent
* **Transfer_Learning_Demo_Teachers_Forcing**: same as **Transfer_Learning_Demo** above, with an explanation of `teacher's forcing`
* **Lib-INVENT_RL1_QSAR**: Lib-INVENT example reinforcement learning run using a QSAR model
* **Lib-INVENT_RL2_QSAR_RF**: Lib-INVENT example reinforcement learning run using a random forest (RF) QSAR model
* **Lib-INVENT_RL3_ROCS_RF**: Lib-INVENT example reinforcement learning run using OpenEye's ROCS 3D similarity (requires an OpenEye license)
* **Link-INVENT_RL**: Link-INVENT example reinforcement learning run
* **Automated_Curriculum_Learning_demo**: illustrates the automated curriculum learning running mode. The example demonstrates how to set up a curriculum that guides the REINVENT agent to sample a target molecular scaffold; this is a complex objective, as the target scaffold is not present in the training set of the prior model
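As a minimal illustration of the idea behind the Tanimoto-based scoring component, the sketch below computes Tanimoto similarity on fingerprints represented as sets of "on" bits (|A ∩ B| / |A ∪ B|). This is a dependency-free toy, not REINVENT's implementation; the notebooks use proper molecular fingerprints (e.g. from RDKit).

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity of two fingerprints given as sets of 'on' bits:
    |A intersect B| / |A union B|. Result lies in [0, 1]; 1 means identical."""
    if not fp_a and not fp_b:
        return 1.0  # convention: two empty fingerprints count as identical
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Toy 'on-bit' sets standing in for hashed fingerprints of two molecules:
ref = {1, 4, 7, 9}
cand = {1, 4, 8}
similarity = tanimoto(ref, cand)  # 2 shared bits / 5 total bits = 0.4
```

A component like this rewards generated compounds for resembling a reference molecule, which is why a single-component run is easy to reason about.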
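The normalization described under **Score_Transformations** can be sketched with a sigmoid that squashes an arbitrary score range into (0, 1). The function name, the `k` steepness parameter and the example score range are illustrative assumptions, not REINVENT's actual API.

```python
import math

def sigmoid_transform(score: float, low: float, high: float, k: float = 0.25) -> float:
    """Map a raw component score onto (0, 1): values near `high` approach 1
    ("good"), values near `low` approach 0. Illustrative only."""
    midpoint = (low + high) / 2.0
    return 1.0 / (1.0 + math.exp(-k * (score - midpoint)))

# A hypothetical component score on an arbitrary 0-10 scale:
raw_scores = [1.0, 5.0, 9.0]
normalized = [sigmoid_transform(s, low=0.0, high=10.0) for s in raw_scores]
```

The tutorial itself covers the transformation shapes actually shipped with REINVENT (e.g. sigmoid, reverse sigmoid, step); the point here is only that every component's output ends up on a common 0-to-1 scale before scores are combined.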
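All of these notebooks drive REINVENT through JSON configuration files. As a rough sketch of the shape such a file takes, here is a hypothetical sampling-mode configuration assembled in Python; treat every key, value and path as an assumption to be checked against the tutorial notebooks, not as the authoritative schema.

```python
import json

# Hypothetical sampling-mode configuration; key names and paths are
# illustrative assumptions, not the verified REINVENT 3.2 schema.
config = {
    "version": 3,                # assumed configuration-format version
    "run_type": "sampling",      # the run mode this configuration selects
    "parameters": {
        "model_path": "models/agent.ckpt",        # trained agent (placeholder path)
        "output_smiles_path": "out/sampled.smi",  # where sampled SMILES are written
        "num_smiles": 1024,                       # how many compounds to sample
    },
}

config_json = json.dumps(config, indent=4)
```

In the tutorials, a dictionary like this is written to disk and passed to REINVENT on the command line; the notebook for each mode shows the exact fields that mode expects.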