diff --git a/HISTORY.md b/HISTORY.md new file mode 100644 index 00000000..d7e1e2c7 --- /dev/null +++ b/HISTORY.md @@ -0,0 +1,9 @@ +# Breaking updates and feature summaries across releases + +## JumpProcesses unreleased (master branch) +- Support for "bounded" `VariableRateJump`s that can be used with the `Coevolve` + aggregator for faster simulation of jump processes with time-dependent rates. + In particular, if all `VariableRateJump`s in a pure-jump system are bounded one + can use `Coevolve` with `SSAStepper` for better performance. See the + documentation, particularly the first and second tutorials, for details on + defining and using bounded `VariableRateJump`s. \ No newline at end of file diff --git a/docs/make.jl b/docs/make.jl index 71aad1bc..0502a960 100644 --- a/docs/make.jl +++ b/docs/make.jl @@ -1,7 +1,9 @@ using Documenter, JumpProcesses -cp("./docs/Manifest.toml", "./docs/src/assets/Manifest.toml", force = true) -cp("./docs/Project.toml", "./docs/src/assets/Project.toml", force = true) +docpath = Base.source_dir() +assetpath = joinpath(docpath, "src", "assets") +cp(joinpath(docpath, "Manifest.toml"), joinpath(assetpath, "Manifest.toml"), force = true) +cp(joinpath(docpath, "Project.toml"), joinpath(assetpath, "Project.toml"), force = true) include("pages.jl") diff --git a/docs/src/api.md b/docs/src/api.md index 7bebbfe8..1a62677c 100644 --- a/docs/src/api.md +++ b/docs/src/api.md @@ -15,13 +15,16 @@ reset_aggregated_jumps! ConstantRateJump MassActionJump VariableRateJump +RegularJump JumpSet ``` ## Aggregators Aggregators are the underlying algorithms used for sampling -[`MassActionJump`](@ref)s and [`ConstantRateJump`](@ref)s. +[`ConstantRateJump`](@ref)s, [`MassActionJump`](@ref)s, and +[`VariableRateJump`](@ref)s. ```@docs +Coevolve Direct DirectCR FRM @@ -36,4 +39,4 @@ SortingDirect ```@docs ExtendedJumpArray SSAIntegrator -``` \ No newline at end of file +``` diff --git a/docs/src/faq.md b/docs/src/faq.md index ba569e51..368e2d9c 100644 --- a/docs/src/faq.md +++ b/docs/src/faq.md @@ -1,16 +1,20 @@ # FAQ ## My simulation is really slow and/or using a lot of memory, what can I do? -To reduce memory use, use `save_positions=(false,false)` in the `JumpProblem` -constructor as described [earlier](@ref save_positions_docs) to turn off saving -the system state before and after every jump. Combined with use of `saveat` in -the call to `solve` this can dramatically reduce memory usage. +Exact methods simulate every jump, and by default save the state before and +after each jump. To reduce memory use, use `save_positions = (false, false)` in +the `JumpProblem` constructor as described [earlier](@ref save_positions_docs) +to turn off saving the system state before and after every jump. Combined with +use of `saveat` in the call to `solve`, to specify the specific times at which +to save the state, this can dramatically reduce memory usage. While `Direct` is often fastest for systems with 10 or less `ConstantRateJump`s -or `MassActionJump`s, if your system has many jumps or one jump occurs most -frequently, other stochastic simulation algorithms may be faster. See [Constant -Rate Jump Aggregators](@ref) and the subsequent sections there for guidance on -choosing different SSAs (called aggregators in JumpProcesses). +and/or `MassActionJump`s, if your system has many jumps or one jump occurs most +frequently, other stochastic simulation algorithms may be faster. 
See [Jump +Aggregators for Exact Simulation](@ref) and the subsequent sections there for +guidance on choosing different SSAs (called aggregators in JumpProcesses). For +systems with bounded `VariableRateJump`s using `Coevolve` with `SSAStepper` +instead of an ODE/SDE time stepper can give a significant performance boost. ## When running many consecutive simulations, for example within an `EnsembleProblem` or loop, how can I update `JumpProblem`s? @@ -22,8 +26,9 @@ internal aggregators for each new parameter value or initial condition. ## How can I define collections of many different jumps and pass them to `JumpProblem`? We can use `JumpSet`s to collect jumps together, and then pass them into -`JumpProblem`s directly. For example, using the `MassActionJump` and -`ConstantRateJump` defined earlier we can write +`JumpProblem`s directly. For example, using a `MassActionJump` and +`ConstantRateJump` defined in the [second tutorial](@ref ssa_tutorial), we can +write ```julia jset = JumpSet(mass_act_jump, birth_jump) @@ -42,8 +47,8 @@ vj1 = VariableRateJump(rate3, affect3!) vj2 = VariableRateJump(rate4, affect4!) vjtuple = (vj1, vj2) -jset = JumpSet(; constant_jumps=cjvec, variable_jumps=vjtuple, - massaction_jumps=mass_act_jump) +jset = JumpSet(; constant_jumps = cjvec, variable_jumps = vjtuple, + massaction_jumps = mass_act_jump) ``` ## How can I set the random number generator used in the jump process sampling algorithms (SSAs)? @@ -66,16 +71,19 @@ default. On versions below 1.7 it uses `Xoroshiro128Star`. ## What are these aggregators and aggregations in JumpProcesses? JumpProcesses provides a variety of methods for sampling the time the next -`ConstantRateJump` or `MassActionJump` occurs, and which jump type happens at -that time. These methods are examples of stochastic simulation algorithms -(SSAs), also known as Gillespie methods, Doob's method, or Kinetic Monte Carlo -methods. In the JumpProcesses terminology we call such methods "aggregators", and -the cache structures that hold their basic data "aggregations". See [Constant -Rate Jump Aggregators](@ref) for a list of the available SSA aggregators. +`ConstantRateJump`, `MassActionJump`, or `VariableRateJump` occurs, and which +jump type happens at that time. These methods are examples of stochastic +simulation algorithms (SSAs), also known as Gillespie methods, Doob's method, or +Kinetic Monte Carlo methods. These are all names for jump (or point) processes +simulation methods used across the biology, chemistry, engineering, mathematics, +and physics literature. In the JumpProcesses terminology we call such methods +"aggregators", and the cache structures that hold their basic data +"aggregations". See [Jump Aggregators for Exact Simulation](@ref) for a list of +the available SSA aggregators. ## How should jumps be ordered in dependency graphs? Internally, JumpProcesses SSAs (aggregators) order all `MassActionJump`s first, -then all `ConstantRateJumps`. i.e. in the example +then all `ConstantRateJumps` and/or `VariableRateJumps`. i.e. in the example ```julia using JumpProcesses @@ -99,15 +107,15 @@ The four jumps would be ordered by the first jump in `maj`, the second jump in `maj`, `cj1`, and finally `cj2`. Any user-generated dependency graphs should then follow this ordering when assigning an integer id to each jump. -See also [Constant Rate Jump Aggregators Requiring Dependency Graphs](@ref) for +See also [Jump Aggregators Requiring Dependency Graphs](@ref) for more on dependency graphs needed for the various SSAs. 
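To make this ordering concrete, here is a minimal, purely illustrative sketch; the dependency pattern below is hypothetical and is not derived from the example above. With the four jumps indexed as described, a manually supplied jump-jump dependency graph is a `Vector{Vector{Int}}` whose `i`th entry lists the jumps to update when jump `i` fires.

```julia
# Hypothetical dependency graph for the four jumps ordered as described above:
# 1 = first jump in maj, 2 = second jump in maj, 3 = cj1, 4 = cj2. Entry i lists
# every jump (including i itself) whose rate must be recalculated when jump i occurs.
dep_graph = [[1, 2, 3],
             [2, 4],
             [1, 3],
             [3, 4]]

# Assuming `dprob` is a DiscreteProblem for this system, the graph is passed via
# the `dep_graph` keyword to an aggregator that requires it:
jprob = JumpProblem(dprob, DirectCR(), maj, cj1, cj2; dep_graph)
```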
-## How do I use callbacks with `ConstantRateJump` or `MassActionJump` systems? +## How do I use callbacks with jump simulations? -Callbacks can be used with `ConstantRateJump`s and `MassActionJump`s. When -solving a pure jump system with `SSAStepper`, only discrete callbacks can be -used (otherwise a different time stepper is needed). When using an ODE or SDE -time stepper any callback should work. +Callbacks can be used with `ConstantRateJump`s, `MassActionJump`s, and +`VariableRateJump`s. When solving a pure jump system with `SSAStepper`, only +discrete callbacks can be used (otherwise a different time stepper is needed). +When using an ODE or SDE time stepper any callback should work. *Note, when modifying `u` or `p` within a callback, you must call [`reset_aggregated_jumps!`](@ref) after making updates.* This ensures that the diff --git a/docs/src/index.md b/docs/src/index.md index aa881a26..fbbbdf97 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -1,9 +1,10 @@ # JumpProcesses.jl: Stochastic Simulation Algorithms for Jump Processes, Jump-ODEs, and Jump-Diffusions JumpProcesses.jl, formerly DiffEqJump.jl, provides methods for simulating jump -processes, known as stochastic simulation algorithms (SSAs), Doob's method, -Gillespie methods, or Kinetic Monte Carlo methods across different fields of -science. It also enables the incorporation of jump processes into hybrid -jump-ODE and jump-SDE models, including jump diffusions. +(or point) processes. Across different fields of science such methods are also +known as stochastic simulation algorithms (SSAs), Doob's method, Gillespie +methods, or Kinetic Monte Carlo methods . It also enables the incorporation of +jump processes into hybrid jump-ODE and jump-SDE models, including jump +diffusions. JumpProcesses is a component package in the [SciML](https://sciml.ai/) ecosystem, and one of the core solver libraries included in @@ -78,20 +79,21 @@ versioninfo() # hide ``` ```@example using Pkg # hide -Pkg.status(;mode = PKGMODE_MANIFEST) # hide +Pkg.status(; mode = PKGMODE_MANIFEST) # hide ``` ```@raw html ``` ```@raw html -You can also download the +You can also download the manifest file and the @@ -99,10 +101,11 @@ link = "https://github.com/SciML/"*name*".jl/tree/gh-pages/v"*version*"/assets/M ``` ```@eval using TOML -version = TOML.parse(read("../../Project.toml",String))["version"] -name = TOML.parse(read("../../Project.toml",String))["name"] -link = "https://github.com/SciML/"*name*".jl/tree/gh-pages/v"*version*"/assets/Project.toml" +projtoml = joinpath("..", "..", "Project.toml") +version = TOML.parse(read(projtoml, String))["version"] +name = TOML.parse(read(projtoml, String))["name"] +link = "https://github.com/SciML/" * name * ".jl/tree/gh-pages/v" * version * "/assets/Project.toml" ``` ```@raw html ">project file. -``` \ No newline at end of file +``` diff --git a/docs/src/jump_solve.md b/docs/src/jump_solve.md index eedf163e..9c6f0461 100644 --- a/docs/src/jump_solve.md +++ b/docs/src/jump_solve.md @@ -6,19 +6,37 @@ solve(prob::JumpProblem,alg;kwargs) ## Recommended Methods -A `JumpProblem(prob,aggregator,jumps...)` comes in two forms. The first major -form is if it does not have a `RegularJump`. In this case, it can be solved with -any integrator on `prob`. However, in the case of a pure `JumpProblem` (a -`JumpProblem` over a `DiscreteProblem`), there are special algorithms -available. 
The `SSAStepper()` is an efficient streamlined algorithm for running -the `aggregator` version of the SSA for pure `ConstantRateJump` and/or -`MassActionJump` problems. However, it is not compatible with event handling. If -events are necessary, then `FunctionMap` does well. - -If there is a `RegularJump`, then specific methods must be used. The current -recommended method is `TauLeaping` if you need adaptivity, events, etc. If you -just need the most barebones fixed time step leaping method, then `SimpleTauLeaping` -can have performance benefits. +`JumpProblem`s can be solved with two classes of methods, exact and inexact. +Exact algorithms currently sample realizations of the jump processes in +chronological order, executing individual jumps sequentially at randomly sampled +times. In contrast, inexact (τ-leaping) methods are time-step based, executing +multiple occurrences of jumps during each time-step. These methods can be much +faster as they only simulate the total number of jumps over each leap interval, +and thus do not need to simulate the realization of every single jump. Jumps for +use with exact simulation methods can be defined as `ConstantRateJump`s, +`MassActionJump`s, and/or `VariableRateJump`s. Jumps for use with inexact +τ-leaping methods should be defined as `RegularJump`s. + +There are special algorithms available for efficiently simulating an exact, pure +`JumpProblem` (i.e. a `JumpProblem` over a `DiscreteProblem`). `SSAStepper()` +is an efficient streamlined integrator for time stepping such problems from +individual jump to jump. This integrator is named after Stochastic Simulation +Algorithms (SSAs), a commonly used name in chemistry and biology applications +for the class of exact jump process simulation algorithms. In turn, we denote by +"aggregators" the algorithms that `SSAStepper` calls to calculate the next jump +time and to execute a jump (i.e. change the system state appropriately). All +JumpProcesses aggregators can be used with `ConstantRateJump`s and +`MassActionJump`s, with a subset of aggregators also working with bounded + `VariableRateJump`s (see [the first tutorial](@ref poisson_proc_tutorial) for +the definition of bounded `VariableRateJump`s). Although `SSAStepper()` is +usually faster, it only supports discrete events (`DiscreteCallback`s); for pure +jump problems requiring continuous events (`ContinuousCallback`s), the less +performant `FunctionMap` time-stepper can be used. + +If there is a `RegularJump`, then inexact τ-leaping methods must be used. The +current recommended method is `TauLeaping` if one needs adaptivity, events, etc. +If one only needs the most barebones fixed time-step leaping method, then +`SimpleTauLeaping` can have performance benefits. ## Special Methods for Pure Jump Problems @@ -28,9 +46,10 @@ algorithms are optimized for pure jump problems. ### JumpProcesses.jl -- `SSAStepper`: a stepping algorithm for pure `ConstantRateJump` and/or - `MassActionJump` `JumpProblem`s. Supports handling of `DiscreteCallback` - and saving controls like `saveat`. +- `SSAStepper`: a stepping integrator for `JumpProblem`s defined over + `DiscreteProblem`s involving `ConstantRateJump`s, `MassActionJump`s, and/or + bounded `VariableRateJump`s. Supports handling of `DiscreteCallback`s and + saving controls like `saveat`.
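As a brief sketch of what this looks like in practice (the toy decay process below is invented here for illustration and is not taken from the package docs):

```julia
using JumpProcesses

# Toy pure-jump problem: a single decay process X -> 0 with rate p[1]*X.
rate(u, p, t) = p[1] * u[1]
decay_affect!(integrator) = (integrator.u[1] -= 1; nothing)
decay = ConstantRateJump(rate, decay_affect!)

dprob = DiscreteProblem([100], (0.0, 10.0), (0.5,))
jprob = JumpProblem(dprob, Direct(), decay)

# SSAStepper supports DiscreteCallbacks and saving controls such as `saveat`.
# We assume `DiscreteCallback` and `terminate!` are available via the usual
# SciML callback/integrator interface.
cond(u, t, integrator) = u[1] == 50   # stop once half the population has decayed
stop!(integrator) = terminate!(integrator)
cb = DiscreteCallback(cond, stop!)

sol = solve(jprob, SSAStepper(); callback = cb, saveat = 1.0)
```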
## RegularJump Compatible Methods diff --git a/docs/src/jump_types.md b/docs/src/jump_types.md index 6cc8a200..1590e47e 100644 --- a/docs/src/jump_types.md +++ b/docs/src/jump_types.md @@ -1,97 +1,129 @@ # [Jump Problems](@id jump_problem_type) -### Mathematical Specification of an problem with jumps +## Mathematical Specification of a problem with jumps -Jumps are defined as a Poisson process which changes states at some `rate`. When -there are multiple possible jumps, the process is a compound Poisson process. On its -own, a jump equation is a continuous-time Markov Chain where the time to the -next jump is exponentially distributed as calculated by the rate. This type of -process, known in biology as "Gillespie discrete stochastic simulations" and -modeled by the Chemical Master Equation (CME), is the same thing as adding jumps -to a `DiscreteProblem`. However, any differential equation can be extended by jumps -as well. For example, we have an ODE with jumps, denoted by +Jumps (or point) processes are stochastic processes with discrete state changes +driven by a `rate` function. The homogeneous Poisson process is the canonical +point process with a constant rate of change. Processes involving multiple jumps +are known as compound jump (or point) processes. + +A compound Poisson process is a continuous-time Markov Chain where the time to +the next jump is exponentially distributed as determined by the rate. Simulation +algorithms for these types of processes are known in biology and chemistry as +Gillespie methods or Stochastic Simulation Algorithms (SSA), with the time +evolution that the probability these processes are in a given state at a given +time satisfying the Chemical Master Equation (CME). In the statistics literature, +the composition of Poisson processes is described by the superposition theorem. + +Any differential equation can be extended by jumps. For example, we have an ODE +with jumps, denoted by ```math \frac{du}{dt} = f(u,p,t) + \sum_{i}c_i(u,p,t)p_i(t) ``` -where ``p_i`` is a Poisson counter of rate ``\lambda_i(u,p,t)``. -Extending a stochastic differential equation to have jumps is commonly known as a Jump +where ``p_i`` is a Poisson counter of rate ``\lambda_i(u,p,t)``. Extending +a stochastic differential equation to have jumps is commonly known as a Jump Diffusion, and is denoted by ```math -du = f(u,p,t)dt + \sum_{j}g_j(u,t)dW_j(t) + \sum_{i}c_i(u,p,t)dp_i(t) +du(t) = f(u,p,t)dt + \sum_{j}g_j(u,t)dW_j(t) + \sum_{i}c_i(u,p,t)dp_i(t) ``` -## Types of Jumps: Regular, Variable, Constant Rate and Mass Action - -A `RegularJump` is a set of jumps that do not make structural changes to the -underlying equation. These kinds of jumps only change values of the dependent -variable (`u`) and thus can be treated in an inexact manner. Other jumps, such -as those which change the size of `u`, require exact handling which is also -known as time-adaptive jumping. These can only be specified as a -`ConstantRateJump`, `MassActionJump`, or a `VariableRateJump`. - -We denote a jump as variable rate if its rate function is dependent on values -which may change between constant rate jumps. For example, if there are multiple -jumps whose rates only change when one of them occur, than that set of jumps is -a constant rate jump. If a jump's rate depends on the differential equation, -time, or by some value which changes outside of any constant rate jump, then it -is denoted as variable. 
- -A `MassActionJump` is a specialized representation for a collection of constant -rate jumps that can each be interpreted as a standard mass action reaction. For -systems comprised of many mass action reactions, using the `MassActionJump` type -will offer improved performance. Note, only one `MassActionJump` should be -defined per `JumpProblem`; it is then responsible for handling all mass action +## Types of Jumps: Constant Rate, Mass Action, Variable Rate and Regular + +Exact jump process simulation algorithms tend to describe the realization of +each jump event chronologically. Individual jumps are usually associated with +changes to the state variable `u`, which in turn changes the `rate`s at which +jump events occur. These jumps can be specified as a [`ConstantRateJump`](@ref), +[`MassActionJump`](@ref), or a [`VariableRateJump`](@ref). + +Each individual type of jump that can occur is represented through (implicitly +or explicitly) specifying two pieces of information; a `rate` function (i.e. +intensity or propensity) for the jump and an `affect!` function for the jump. +The former gives the probability per time a particular jump can occur given the +current state of the system, and hence determines the time at which jumps can +happen. The later specifies the instantaneous change in the state of the system +when the jump occurs. + +A specific jump type is a [`VariableRateJump`](@ref) if its rate function is +dependent on values which may change between the occurrence of any two jump +events of the process. Examples include jumps where the rate is an explicit +function of time, or depends on a state variable that is modified via continuous +dynamics such as an ODE or SDE. Such "general" `VariableRateJump`s can be +expensive to simulate because it is necessary to take into account the (possibly +continuous) changes in the rate function when calculating the next jump time. + +*Bounded* [`VariableRateJump`](@ref)s represent a special subset of +`VariableRateJump`s where one can specify functions that calculate a time window +over which the rate is bounded by a constant (presuming the state `u` is +unchanged due to another `ConstantRateJump`, `MassActionJump` or bounded +`VariableRateJump`). They can be simulated more efficiently using +rejection-sampling based approaches that leverage this upper bound. + +[`ConstantRateJump`](@ref)s are more restricted in that they assume the rate +functions are constant at all times between two consecutive jumps of the system. +That is, any states or parameters that a rate function depends on must not +change between the times at which two consecutive jumps occur. + +A [`MassActionJump`](@ref)s is a specialized representation for a collection of +`ConstantRateJump` jumps that can each be interpreted as a standard mass action +reaction. For systems comprised of many mass action reactions, using the +`MassActionJump` type will offer improved performance compared to using +multiple `ConstantRateJump`s. Note, only one `MassActionJump` should be defined +per [`JumpProblem`](@ref); it is then responsible for handling all mass action reaction type jumps. For systems with both mass action jumps and non-mass action jumps, one can create one `MassActionJump` to handle the mass action jumps, and -create a number of `ConstantRateJumps` to handle the non-mass action jumps. - -`RegularJump`s are optimized for regular jumping algorithms like tau-leaping and -hybrid algorithms. `ConstantRateJump`s and `MassActionJump`s are optimized for -SSA algorithms. 
`ConstantRateJump`s, `MassActionJump`s and `VariableRateJump`s -can be added to standard DiffEq algorithms since they are simply callbacks, -while `RegularJump`s require special algorithms. - -#### Defining a Regular Jump - -The constructor for a `RegularJump` is: - -```julia -RegularJump(rate,c,numjumps;mark_dist = nothing) -``` - -- `rate(out,u,p,t)` is the function which computes the rate for every regular - jump process -- `c(du,u,p,t,counts,mark)` is calculates the update given `counts` number of - jumps for each jump process in the interval. -- `numjumps` is the number of jump processes, i.e. the number of `rate` equations - and the number of `counts` -- `mark_dist` is the distribution for the mark. +create a number of `ConstantRateJump`s or `VariableRateJump`s to handle the +non-mass action jumps. + +Since exact methods simulate each individual jump, they may become +computationally expensive to simulate processes over timescales that involve +*many* jump occurrences. As an alternative, inexact τ-leaping methods take +discrete steps through time, over which they simultaneously execute many jumps. +These methods can be much faster as they do not need to simulate the realization +of every individual jump event. τ-leaping methods trade accuracy for speed, and +are best used when a set of jumps do not make significant changes to the +processes' state and/or rates over the course of one time-step (i.e. during a +leap interval). A single [`RegularJump`](@ref) is used to encode jumps for +τ-leaping algorithms. While τ-leaping methods can be proven to converge in the +limit that the time-step approaches zero, their accuracy can be highly dependent +on the chosen time-step. As a rule of thumb, if changes to the state variable +`u` during a time-step (i.e. leap interval) are "minimal" compared to size of +the system, an τ-leaping method can often provide reasonable solution +approximations. + +Currently, `ConstantRateJump`s, `MassActionJump`s, and `VariableRateJump`s can +be coupled to standard SciML ODE/SDE solvers since they are internally handled +via callbacks. For `ConstantRateJump`s, `MassActionJump`s, and bounded +`VariableRateJump` the determination of the next jump time and type is handled +by a user-selected *aggregator* algorithm. `RegularJump`s currently require +their own special time integrators. #### Defining a Constant Rate Jump -The constructor for a `ConstantRateJump` is: +The constructor for a [`ConstantRateJump`](@ref) is: ```julia -ConstantRateJump(rate,affect!) +ConstantRateJump(rate, affect!) ``` -- `rate(u,p,t)` is a function which calculates the rate given the time and the state. -- `affect!(integrator)` is the effect on the equation, using the integrator interface. - +- `rate(u, p, t)` is a function which calculates the rate given the current + state `u`, parameters `p`, and time `t`. +- `affect!(integrator)` is the effect on the equation using the integrator + interface. It encodes how the state should change due to *one* occurrence of + the jump. #### Defining a Mass Action Jump -The constructor for a `MassActionJump` is: +The constructor for a [`MassActionJump`](@ref) is: ```julia MassActionJump(reactant_stoich, net_stoich; scale_rates = true, param_idxs=nothing) ``` - `reactant_stoich` is a vector whose `k`th entry is the reactant stoichiometry of the `k`th reaction. The reactant stoichiometry for an individual reaction - is assumed to be represented as a vector of `Pair`s, mapping species id to - stoichiometric coefficient. 
+ is assumed to be represented as a vector of `Pair`s, mapping species integer + id to stoichiometric coefficient. - `net_stoich` is assumed to have the same type as `reactant_stoich`; a vector whose `k`th entry is the net stoichiometry of the `k`th reaction. The net stoichiometry for an individual reaction is again represented as a vector @@ -99,7 +131,7 @@ MassActionJump(reactant_stoich, net_stoich; scale_rates = true, param_idxs=nothi reaction occurs. - `scale_rates` is an optional parameter that specifies whether the rate constants correspond to stochastic rate constants in the sense used by - Gillespie, and hence need to be rescaled. *The default, `scale_rates=true`, + Gillespie, and hence need to be rescaled. *The default, `scale_rates = true`, corresponds to rescaling the passed in rate constants.* See below. - `param_idxs` is a vector of the indices within the parameter vector, `p`, that correspond to the rate constant for each jump. @@ -113,12 +145,13 @@ MassActionJump(reactant_stoich, net_stoich; scale_rates = true, param_idxs=nothi ``3A \overset{k}{\rightarrow} B`` the rate function would be `k*A*(A-1)*(A-2)/3!`. To *avoid* having the reaction rates rescaled (by `1/2` and `1/6` for these two examples), one can pass the `MassActionJump` - constructor the optional named parameter `scale_rates=false`, i.e. use + constructor the optional named parameter `scale_rates = false`, i.e. use ```julia MassActionJump(reactant_stoich, net_stoich; scale_rates = false, param_idxs) ``` - Zero order reactions can be passed as `reactant_stoich`s in one of two ways. - Consider the ``\varnothing \overset{k}{\rightarrow} A`` reaction with rate `k=1`: + Consider the ``\varnothing \overset{k}{\rightarrow} A`` reaction with rate + `k=1`: ```julia p = [1.] reactant_stoich = [[0 => 1]] @@ -142,107 +175,221 @@ MassActionJump(reactant_stoich, net_stoich; scale_rates = true, param_idxs=nothi reactant_stoich = [[3 => 1, 1 => 2, 4 => 2], [3 => 2, 2 => 2]] ``` - #### Defining a Variable Rate Jump -The constructor for a `VariableRateJump` is: +The constructor for a [`VariableRateJump`](@ref) is: ```julia -VariableRateJump(rate,affect!; - idxs = nothing, - rootfind=true, - save_positions=(true,true), - interp_points=10, - abstol=1e-12,reltol=0) +VariableRateJump(rate, affect!; + lrate = nothing, urate = nothing, rateinterval = nothing, + idxs = nothing, rootfind = true, save_positions = (true,true), + interp_points = 10, abstol = 1e-12, reltol = 0) ``` -Note that this is the same as defining a `ContinuousCallback`, except that instead -of the `condition` function, you provide a `rate(u,p,t)` function for the `rate` at -a given time and state. +- `rate(u, p, t)` is a function which calculates the rate given the current + state `u`, parameters `p`, and time `t`. +- `affect!(integrator)` is the effect on the equation using the integrator + interface. It encodes how the state should change due to *one* occurrence of + the jump. + +To define a bounded `VariableRateJump`, which can be simulated more efficiently +with bounded `VariableRateJump` supporting aggregators such as `Coevolve`, one +must also specify +- `urate(u, p, t)`, a function which computes an upper bound for the rate in the + interval `t` to `t + rateinterval(u, p, t)` at time `t` given state `u` and + parameters `p`. +- `rateinterval(u, p, t)`, a function which computes a time interval `t` to `t + + rateinterval(u, p, t)` given state `u` and parameters `p` over which the + `urate` bound will hold (and `lrate` bound if provided, see below). 
+ +Note that it is ok if the `urate` bound would be violated within the +`rateinterval` due to a change in `u` arising from another `ConstantRateJump`, +`MassActionJump` or *bounded* `VariableRateJump` being executed, as the chosen +aggregator will then handle recalculating the rate bound and interval. *However, +if the bound could be violated within the time interval due to a change in `u` +arising from continuous dynamics such as a coupled ODE, SDE, or a general +`VariableRateJump`, bounds should not be given.* This ensures the jump is +classified as a general `VariableRateJump` and properly handled. + +For increased performance, one can also specify a lower bound that should be +valid over the same `rateinterval` +- `lrate(u, p, t)`, a function which computes a lower bound for the rate in the + interval `t` to `t + rateinterval(u, p, t)` at time `t` given state `u` and + parameters `p`. `lrate` should remain valid under the same conditions as + `urate`. + +Note that +- It is currently only possible to simulate `VariableRateJump`s with + `SSAStepper` when using systems with only bounded `VariableRateJump`s and the + `Coevolve` aggregator. +- When choosing a different aggregator than `Coevolve`, `SSAStepper` can not + currently be used, and the `JumpProblem` must be coupled to a continuous + problem type such as an `ODEProblem` to handle time-stepping. The continuous + time-stepper treats *all* `VariableRateJump`s as `ContinuousCallback`s, using + the `rate(u, p, t)` function to construct the `condition` function that + triggers a callback. + + +#### Defining a Regular Jump + +The constructor for a [`RegularJump`](@ref) is: + +```julia +RegularJump(rate, c, numjumps; mark_dist = nothing) +``` + +- `rate(out, u, p, t)` is the function which computes the rate for every regular + jump process +- `c(du, u, p, t, counts, mark)` calculates the update given `counts` number of + jumps for each jump process in the interval. +- `numjumps` is the number of jump processes, i.e. the number of `rate` + equations and the number of `counts`. +- `mark_dist` is the distribution for a mark. ## Defining a Jump Problem -To define a `JumpProblem`, you must first define the basic problem. This can be +To define a `JumpProblem`, one must first define the basic problem. This can be a `DiscreteProblem` if there is no differential equation, or an ODE/SDE/DDE/DAE if you would like to augment a differential equation with jumps. Denote this -previously defined problem as `prob`. Then the constructor for the jump problem is: +previously defined problem as `prob`. Then the constructor for the jump problem +is: ```julia -JumpProblem(prob,aggregator::Direct,jumps::JumpSet; +JumpProblem(prob, aggregator, jumps::JumpSet; save_positions = typeof(prob) <: AbstractDiscreteProblem ? (false,true) : (true,true)) ``` -The aggregator is the method for aggregating the constant jumps. These are defined -below. `jumps` is a `JumpSet` which is just a gathering of jumps. Instead of -passing a `JumpSet`, one may just pass a list of jumps themselves. For example: +The aggregator is the method for simulating `ConstantRateJump`s, +`MassActionJump`s, and bounded `VariableRateJump`s (if supported by the +aggregator). They are called aggregators since they resolve all these jumps in a +single discrete simulation algorithm. The possible aggregators are given below. +`jumps` is a [`JumpSet`](@ref) which is just a collection of jumps. Instead of +passing a `JumpSet`, one may just pass a list of jumps as trailing positional +arguments. 
For example: ```julia -JumpProblem(prob,aggregator,jump1,jump2) +JumpProblem(prob, aggregator, jump1, jump2) ``` -and the internals will automatically build the `JumpSet`. `save_positions` is the -`save_positions` argument built by the aggregation of the constant rate jumps. +and the internals will automatically build the `JumpSet`. `save_positions` +determines whether to save the state of the system just before and/or after +jumps occur. Note that a `JumpProblem`/`JumpSet` can only have 1 `RegularJump` (since a `RegularJump` itself describes multiple processes together). Similarly, it can only have one `MassActionJump` (since it also describes multiple processes together). -## Constant Rate Jump Aggregators +## Jump Aggregators for Exact Simulation -Constant rate jump aggregators are the methods by which constant rate -jumps, including `MassActionJump`s, are lumped together. This is required in all -algorithms for both speed and accuracy. The current methods are: +Jump aggregators are methods for simulating `ConstantRateJump`s, +`MassActionJump`s, and bounded `VariableRateJump`s (if supported) exactly. They +are called aggregators since they combine all jumps to handle within a single +discrete simulation algorithm. Aggregators combine jumps in different ways and +offer different trade-offs. However, all aggregators describe the realization of +each and every individual jump chronologically. Since they do not skip any +jumps, they are considered exact methods. Note that none of the aggregators +discussed in this section can be used with `RegularJumps` which are used for +time-step based (inexact) τ-leaping methods. -- `Direct`: the Gillespie Direct method SSA. -- `RDirect`: A variant of Gillespie's Direct method that uses rejection to - sample the next reaction. -- *`DirectCR`*: The Composition-Rejection Direct method of Slepoy et al. For - large networks and linear chain-type networks it will often give better - performance than `Direct`. (Requires dependency graph, see below.) -- `DirectFW`: the Gillespie Direct method SSA with `FunctionWrappers`. This +The current aggregators are (note that an italicized name indicates the +aggregator requires various types of dependency graphs, see the next section): + +- `Direct`: The Gillespie Direct method SSA [1]. +- `DirectFW`: the Gillespie Direct method SSA [1] with `FunctionWrappers`. This aggregator uses a different internal storage format for collections of `ConstantRateJumps`. -- `FRM`: the Gillespie first reaction method SSA. `Direct` should generally +- *`DirectCR`*: The Composition-Rejection Direct method of Slepoy et al [2]. For + large networks and linear chain-type networks it will often give better + performance than `Direct`. +- *`SortingDirect`*: The Sorting Direct Method of McCollum et al [3]. It will + usually offer performance as good as `Direct`, and for some systems can offer + substantially better performance. +- *`RSSA`*: The Rejection SSA (RSSA) method of Thanh et al [4,5]. With `RSSACR`, + for very large reaction networks it often offers the best performance of all + methods. +- *`RSSACR`*: The Rejection SSA (RSSA) with Composition-Rejection method of + Thanh et al [6]. With `RSSA`, for very large reaction networks it often offers + the best performance of all methods. +- `RDirect`: A variant of Gillespie's Direct method [1] that uses rejection to + sample the next reaction. +- `FRM`: The Gillespie first reaction method SSA [1]. `Direct` should generally offer better performance and be preferred to `FRM`. 
-- `FRMFW`: the Gillespie first reaction method SSA with `FunctionWrappers`. -- *`NRM`*: The Gibson-Bruck Next Reaction Method. For some reaction network +- `FRMFW`: The Gillespie first reaction method SSA [1] with `FunctionWrappers`. +- *`NRM`*: The Gibson-Bruck Next Reaction Method [7]. For some reaction network structures this may offer better performance than `Direct` (for example, - large, linear chains of reactions). (Requires dependency graph, see below.) -- *`RSSA`*: The Rejection SSA (RSSA) method of Thanh et al. With `RSSACR`, for - very large reaction networks it often offers the best performance of all - methods. (Requires dependency graph, see below.) -- *`RSSACR`*: The Rejection SSA (RSSA) with Composition-Rejection method of - Thanh et al. With `RSSA`, for very large reaction networks it often offers the - best performance of all methods. (Requires dependency graph, see below.) -- *`SortingDirect`*: The Sorting Direct Method of McCollum et al. It will - usually offer performance as good as `Direct`, and for some systems can offer - substantially better performance. (Requires dependency graph, see below.) + large, linear chains of reactions). +- *`Coevolve`*: An adaptation of the COEVOLVE algorithm of Farajtabar et al [8]. + Currently the only aggregator that also supports *bounded* + `VariableRateJump`s. Essentially reduces to `NRM` in handling + `ConstantRateJump`s and `MassActionJump`s. To pass the aggregator, pass the instantiation of the type. For example: ```julia -JumpProblem(prob,Direct(),jump1,jump2) +JumpProblem(prob, Direct(), jump1, jump2) ``` -will build a problem where the constant rate jumps are solved using Gillespie's -Direct SSA method. - -## Constant Rate Jump Aggregators Requiring Dependency Graphs -Italicized constant rate jump aggregators require the user to pass a dependency -graph to `JumpProblem`. `DirectCR`, `NRM` and `SortingDirect` require a -jump-jump dependency graph, passed through the named parameter `dep_graph`. i.e. +will build a problem where the jumps are simulated using Gillespie's Direct SSA +method. + +[1] Daniel T. Gillespie, A general method for numerically simulating the stochastic +time evolution of coupled chemical reactions, Journal of Computational Physics, +22 (4), 403–434 (1976). doi:10.1016/0021-9991(76)90041-3. + +[2] A. Slepoy, A.P. Thompson and S.J. Plimpton, A constant-time kinetic Monte +Carlo algorithm for simulation of large biochemical reaction networks, Journal +of Chemical Physics, 128 (20), 205101 (2008). doi:10.1063/1.2919546. + +[3] J. M. McCollum, G. D. Peterson, C. D. Cox, M. L. Simpson and N. F. +Samatova, The sorting direct method for stochastic simulation of biochemical +systems with varying reaction execution behavior, Computational Biology and +Chemistry, 30 (1), 39049 (2006). doi:10.1016/j.compbiolchem.2005.10.007. + +[4] V. H. Thanh, C. Priami and R. Zunino, Efficient rejection-based simulation +of biochemical reactions with stochastic noise and delays, Journal of Chemical +Physics, 141 (13), 134116 (2014). doi:10.1063/1.4896985. + +[5] V. H. Thanh, R. Zunino and C. Priami, On the rejection-based algorithm for +simulation and analysis of large-scale reaction networks, Journal of Chemical +Physics, 142 (24), 244106 (2015). doi:10.1063/1.4922923. + +[6] V. H. Thanh, R. Zunino, and C. Priami, Efficient constant-time complexity +algorithm for stochastic simulation of large reaction networks, IEEE/ACM +Transactions on Computational Biology and Bioinformatics, 14 (3), 657-667 +(2017). 
doi:10.1109/TCBB.2016.2530066. + +[7] M. A. Gibson and J. Bruck, Efficient exact stochastic simulation of chemical +systems with many species and many channels, Journal of Physical Chemistry A, +104 (9), 1876-1889 (2000). doi:10.1021/jp993732q. + +[8] M. Farajtabar, Y. Wang, M. Gomez-Rodriguez, S. Li, H. Zha, and L. Song, +COEVOLVE: a joint point process model for information diffusion and network +evolution, Journal of Machine Learning Research 18(1), 1305–1353 (2017). doi: +10.5555/3122009.3122050. + +## Jump Aggregators Requiring Dependency Graphs +Italicized constant rate jump aggregators above require the user to pass a +dependency graph to `JumpProblem`. `Coevolve`, `DirectCR`, `NRM`, and + `SortingDirect` require a jump-jump dependency graph, passed through the named +parameter `dep_graph`. i.e. ```julia -JumpProblem(prob,DirectCR(),jump1,jump2; dep_graph=your_dependency_graph) +JumpProblem(prob, DirectCR(), jump1, jump2; dep_graph = your_dependency_graph) ``` For systems with only `MassActionJump`s, or those generated from a -[Catalyst](https://docs.sciml.ai/Catalyst/stable/) `reaction_network`, this graph -will be auto-generated. Otherwise you must construct the dependency graph -manually. Dependency graphs are represented as a `Vector{Vector{Int}}`, with the -`i`th vector containing the indices of the jumps for which rates must be -recalculated when the `i`th jump occurs. Internally, all `MassActionJump`s are -ordered before `ConstantRateJump`s (with the latter internally ordered in the -same order they were passed in). +[Catalyst](https://docs.sciml.ai/Catalyst/stable/) `reaction_network`, this +graph will be auto-generated. Otherwise, you must construct the dependency graph +whenever the set of jumps include `ConstantRateJump`s and/or bounded +`VariableRateJump`s. + +Dependency graphs are represented as a `Vector{Vector{Int}}`, with the `i`th +vector containing the indices of the jumps for which rates must be recalculated +when the `i`th jump occurs. Internally, all `MassActionJump`s are ordered before +`ConstantRateJump`s and bounded `VariableRateJump`s. General `VariableRateJump`s +are not handled by aggregators, and so not included in the jump ordering for +dependency graphs. Note that the relative order between `ConstantRateJump`s and +relative order between bounded `VariableRateJump`s is preserved. In this way one +can precalculate the jump order to manually construct dependency graphs. `RSSA` and `RSSACR` require two different types of dependency graphs, passed through the following `JumpProblem` kwargs: @@ -257,20 +404,34 @@ For systems generated from a [Catalyst](https://docs.sciml.ai/Catalyst/stable/) `reaction_network` these will be auto-generated. Otherwise you must explicitly construct and pass in these mappings. -## Recommendations for Constant Rate Jumps -For representing and aggregating constant rate jumps +## Recommendations for exact methods +For representing and aggregating jumps - Use a `MassActionJump` to handle all jumps that can be represented as mass - action reactions. This will generally offer the fastest performance. -- Use `ConstantRateJump`s for any remaining jumps. + action reactions with constant rate between jumps. This will generally offer + the fastest performance. +- Use `ConstantRateJump`s for any remaining jumps with a constant rate between + jumps. +- Use `VariableRateJump`s for any remaining jumps with variable rate between + jumps. If possible, construct a bounded [`VariableRateJump`](@ref) as + described above and in the doc string. 
The tighter and easier to compute the + bounds are, the faster the resulting simulation will be. Use the `Coevolve` + aggregator to ensure such jumps are handled via the more efficient aggregator + interface. + +For systems with only `ConstantRateJump`s and `MassActionJump`s, - For a small number of jumps, < ~10, `Direct` will often perform as well as the other aggregators. -- For > ~10 jumps `SortingDirect` will often offer better performance than `Direct`. +- For > ~10 jumps `SortingDirect` will often offer better performance than + `Direct`. - For large numbers of jumps with sparse chain like structures and similar jump rates, for example continuous time random walks, `RSSACR`, `DirectCR` and then `NRM` often have the best performance. - For very large networks, with many updates per jump, `RSSA` and `RSSACR` will often substantially outperform the other methods. +For pure jump systems, time-step using `SSAStepper()` with a `DiscreteProblem` +unless one has general (i.e. non-bounded) `VariableRateJump`s. + In general, for systems with sparse dependency graphs if `Direct` is slow, one of `SortingDirect`, `RSSA` or `RSSACR` will usually offer substantially better performance. See @@ -288,44 +449,44 @@ components of the SSA aggregators. As such, only the new problem generated by As an example, consider the following SIR model: ```julia -rate1(u,p,t) = (0.1/1000.0)*u[1]*u[2] +rate1(u, p, t) = p[1] * u[1] * u[2] function affect1!(integrator) integrator.u[1] -= 1 integrator.u[2] += 1 end -jump = ConstantRateJump(rate1,affect1!) +jump = ConstantRateJump(rate1, affect1!) -rate2(u,p,t) = 0.01u[2] +rate2(u,p,t) = p[2] * u[2] function affect2!(integrator) integrator.u[2] -= 1 integrator.u[3] += 1 end -jump2 = ConstantRateJump(rate2,affect2!) -u0 = [999,1,0] -p = (0.1/1000,0.01) -tspan = (0.0,250.0) +jump2 = ConstantRateJump(rate2, affect2!) +u0 = [999, 1, 0] +p = (0.1/1000, 0.01) +tspan = (0.0, 250.0) dprob = DiscreteProblem(u0, tspan, p) jprob = JumpProblem(dprob, Direct(), jump, jump2) sol = solve(jprob, SSAStepper()) ``` -We can change any of `u0`, `p` and `tspan` by either making a new +We can change any of `u0`, `p` and/or `tspan` by either making a new `DiscreteProblem` ```julia -u02 = [10,1,0] +u02 = [10, 1, 0] p2 = (.1/1000, 0.0) -tspan2 = (0.0,2500.0) +tspan2 = (0.0, 2500.0) dprob2 = DiscreteProblem(u02, tspan2, p2) -jprob2 = remake(jprob, prob=dprob2) +jprob2 = remake(jprob, prob = dprob2) sol2 = solve(jprob2, SSAStepper()) ``` or by directly remaking with the new parameters ```julia -jprob2 = remake(jprob, u0=u02, p=p2, tspan=tspan2) +jprob2 = remake(jprob, u0 = u02, p = p2, tspan = tspan2) sol2 = solve(jprob2, SSAStepper()) ``` To avoid ambiguities, the following will give an error ```julia -jprob2 = remake(jprob, prob=dprob2, u0=u02) +jprob2 = remake(jprob, prob = dprob2, u0 = u02) ``` as will trying to update either `p` or `tspan` while passing a new `DiscreteProblem` using the `prob` kwarg. diff --git a/docs/src/tutorials/discrete_stochastic_example.md b/docs/src/tutorials/discrete_stochastic_example.md index ad9634b2..52453744 100644 --- a/docs/src/tutorials/discrete_stochastic_example.md +++ b/docs/src/tutorials/discrete_stochastic_example.md @@ -1,18 +1,20 @@ # [Continuous-Time Jump Processes and Gillespie Methods](@id ssa_tutorial) In this tutorial we will describe how to define and simulate continuous-time -jump processes, also known in biological fields as stochastic chemical kinetics -(i.e. Gillespie) models. 
It is not necessary to have read the [first -tutorial](@ref poisson_proc_tutorial). We will illustrate +jump (or point) processes, also known in biological fields as stochastic +chemical kinetics (i.e. Gillespie) models. It is not necessary to have read the +[first tutorial](@ref poisson_proc_tutorial). We will illustrate - The different types of jumps that can be represented in JumpProcesses and their use cases. -- How to speed up pure-jump simulations with only [`ConstantRateJump`](@ref)s - and [`MassActionJump`](@ref)s by using the [`SSAStepper`](@ref) time stepper. +- How to speed up pure-jump simulations with only [`ConstantRateJump`](@ref)s, + [`MassActionJump`](@ref)s, and bounded `VariableRateJump`s by using the + [`SSAStepper`](@ref) time stepper. - How to define and use [`MassActionJump`](@ref)s, a more specialized type of [`ConstantRateJump`](@ref) that offers improved computational performance. +- How to define and use bounded [`VariableRateJump`](@ref)s in pure-jump simulations. - How to use saving controls to reduce memory use per simulation. -- How to use [`VariableRateJump`](@ref)s and when they should be preferred over - `ConstantRateJump`s and `MassActionJump`s. +- How to use general [`VariableRateJump`](@ref)s and when they should be + preferred over the other jump types. - How to create hybrid problems mixing the various jump types with ODEs or SDEs. - How to use `RegularJump`s to enable faster, but approximate, time stepping via τ-leaping methods. @@ -136,8 +138,8 @@ is then given by the rate constant multiplied by the number of possible pairs of susceptible and infected people. This formulation is known as the [law of mass action](https://en.wikipedia.org/wiki/Law_of_mass_action). Similarly, we have that each individual infected person is assumed to recover with probability per -time ``\nu``, so that the probability per time *some* infected person becomes -recovered is ``\nu`` times the number of infected people, i.e. ``\nu I(t)``. +time ``\nu``, so that the probability per time *some* infected person recovers +is ``\nu`` times the number of infected people, i.e. ``\nu I(t)``. Rate functions give the probability per time for each of the two types of jumps to occur, and hence determine when the state of our system changes. To fully @@ -202,7 +204,7 @@ jump_prob = JumpProblem(sir_model, prob, Direct()) Here `Direct()` indicates that we will determine the random times and types of reactions using [Gillespie's Direct stochastic simulation algorithm (SSA)](https://doi.org/10.1016/0021-9991(76)90041-3), also known as Doob's -method or Kinetic Monte Carlo. See [Constant Rate Jump Aggregators](@ref) for +method or Kinetic Monte Carlo. See [Jump Aggregators for Exact Simulation](@ref) for other supported SSAs. We now have a problem that can be evolved in time using the JumpProcesses solvers. 
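For instance, assuming the `sir_model` and `prob` objects constructed above, swapping in a different supported aggregator is a one-line change (shown with `SortingDirect` purely to illustrate the interface; for this small two-jump model `Direct` is already a good choice):

```julia
# `sir_model` and `prob` are the Catalyst model and DiscreteProblem defined
# earlier in this tutorial; Catalyst-generated models supply any dependency
# graphs an aggregator needs automatically.
jump_prob_sorting = JumpProblem(sir_model, prob, SortingDirect())
sol_sorting = solve(jump_prob_sorting, SSAStepper())
```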
@@ -253,14 +255,25 @@ In general | Jump Type | Performance | Generality | |:----------: | :----------: |:------------:| | [`MassActionJump`](@ref MassActionJumpSect) | Fastest | Restrictive rates/affects | -| [`ConstantRateJump`](@ref ConstantRateJumpSect) | Somewhat Slower | Much more general | +| [`ConstantRateJump`](@ref ConstantRateJumpSect) | Somewhat Slower than `MassActionJump`| Rate function must be constant between jumps | +| [`VariableRateJump` with rate bounds](@ref VariableRateJumpWithBnds) | Somewhat Slower than `ConstantRateJump` | Rate functions can explicitly depend on time, but require an upper bound that is guaranteed constant between jumps over some time interval | | [`VariableRateJump`](@ref VariableRateJumpSect) | Slowest | Completely general | It is recommended to try to encode jumps using the most performant option that -supports the desired generality of the underlying `rate` and `affect` functions. -Below we describe the different jump types, and show how the SIR model can be -formulated using first `ConstantRateJump`s and then `MassActionJump`s -(`VariableRateJump`s are considered later). +supports the desired generality of the underlying `rate` and `affect!` +functions. Below we describe the different jump types, and show how the SIR +model can be formulated using first `ConstantRateJump`s, then more performant +`MassActionJump`s, and finally with `VariableRateJump`s using rate bounds. We +conclude by presenting several completely general models that use +`VariableRateJump`s without rate bounds, and which require an ODE solver to +handle time-stepping. + +We note, in the remainder we will refer to *bounded* `VariableRateJump`s as +those for which we can specify functions calculating a time window over which +the rate is bounded by a constant (as long as the state is unchanged), see [the +section on bounded `VariableRateJump`s for details](@ref VariableRateJumpWithBnds). +`VariableRateJump`s or *general* `VariableRateJump`s will refer to those for +which such functions are not available. ## [Defining the Jumps Directly: `ConstantRateJump`](@id ConstantRateJumpSect) The constructor for a `ConstantRateJump` is: @@ -322,7 +335,7 @@ jump_prob = JumpProblem(prob, Direct(), jump, jump2) Here [`Direct()`](@ref) indicates that we will determine the random times and types of jumps that occur using [Gillespie's Direct stochastic simulation algorithm (SSA)](https://doi.org/10.1016/0021-9991(76)90041-3), also known as -Doob's method or Kinetic Monte Carlo. See [Constant Rate Jump Aggregators](@ref) +Doob's method or Kinetic Monte Carlo. See [Jump Aggregators for Exact Simulation](@ref) for other supported SSAs. We now have a problem that can be evolved in time using the JumpProcesses solvers. @@ -343,86 +356,20 @@ plot(sol, label=["S(t)" "I(t)" "R(t)"]) Note, in systems with more than a few jumps (more than ~10), it can be advantageous to use more sophisticated SSAs than `Direct`. For such systems it is recommended to use [`SortingDirect`](@ref), [`RSSA`](@ref) or -[`RSSACR`](@ref), see the list of JumpProcesses SSAs at [Constant Rate Jump -Aggregators](@ref). - - -### *Caution about Constant Rate Jumps* -`ConstantRateJump`s are quite general, but they do have one restriction. They -assume that the rate functions are constant at all times between two consecutive -jumps of the system. i.e. any species/states or parameters that the rate -function depends on must not change between the times at which two consecutive -jumps occur. 
Such conditions are violated if one has a time dependent parameter -like ``\beta(t)`` or if some of the solution components, say `u[2]`, may also -evolve through a coupled ODE, SDE, or a [`VariableRateJump`](@ref) (see below -for examples). For problems where the rate function may change between -consecutive jumps, [`VariableRateJump`](@ref)s must be used. - -Thus in the examples above, -```julia -rate1(u,p,t) = p[1]*u[1]*u[2] -rate2(u,p,t) = p[2]*u[2] -``` -both must be constant other than changes due to some other `ConstantRateJump` or -`MassActionJump` (the same restriction applies to `MassActionJump`s). Since -these rates only change when `u[1]` or `u[2]` is changed, and `u[1]` and `u[2]` -only change when one of the jumps occur, this setup is valid. However, a rate of -`t*p[1]*u[1]*u[2]` would not be valid because the rate would change during the -interval, as would `p[2]*u[1]*u[4]` when `u[4]` is the solution to a continuous -problem such as an ODE or SDE or can be changed via a `VariableRateJump`. Thus -one must be careful to follow this rule when choosing rates. - -In summary, if a particular jump process has a rate function that depends -explicitly or implicitly on a continuously changing quantity, you need to use a -[`VariableRateJump`](@ref). +[`RSSACR`](@ref), see the list of JumpProcesses SSAs at [Jump Aggregators for +Exact Simulation](@ref). ## SSAStepper Any common interface algorithm can be used to perform the time-stepping since it is implemented over the callback interface. This allows for hybrid systems that mix ODEs, SDEs and jumps. In many cases we may have a pure jump system that only -involves `ConstantRateJump`s and/or `MassActionJump`s (see below). When that's -the case, a substantial performance benefit may be gained by using -[`SSAStepper`](@ref). Note, `SSAStepper` is a more limited time-stepper which -only supports discrete events, and does not allow simultaneous coupled ODEs or -SDEs or `VariableRateJump`s. It is, however, very efficient for pure jump -problems involving only `ConstantRateJump`s and `MassActionJump`s. - -## [Reducing Memory Use: Controlling Saving Behavior](@id save_positions_docs) - -Note that jumps act via DifferentialEquations.jl's [callback -interface](https://docs.sciml.ai/DiffEqDocs/stable/features/callback_functions/), -which defaults to saving at each event. This is required in order to accurately -resolve every discontinuity exactly (and this is what allows for perfectly -vertical lines in plots!). However, in many cases when using jump problems you -may wish to decrease the saving pressure given by large numbers of jumps. To do -this, you set the `save_positions` keyword argument to `JumpProblem`. Just like -for other -[callbacks](https://docs.sciml.ai/DiffEqDocs/stable/features/callback_functions/), -this is a tuple `(bool1, bool2)` which sets whether to save before or after a -jump. If we do not want to save at every jump, we would thus pass: -```@example tut2 -jump_prob = JumpProblem(prob, Direct(), jump, jump2; save_positions = (false, false)) -``` -Now the saving controls associated with the integrator should specified, see the -main [SciML -Docs](https://docs.sciml.ai/DiffEqDocs/stable/basics/common_solver_opts/) -for saving options. 
For example, we can use `saveat = 10.0` to save at an evenly -spaced grid: -```@example tut2 -sol = solve(jump_prob, SSAStepper(); saveat = 10.0) - -# we plot each solution component separately since -# the graph should no longer be a step function -plot(sol.t, sol[1,:]; marker = :o, label="S(t)", xlabel="t") -plot!(sol.t, sol[2,:]; marker = :x, label="I(t)", xlabel="t") -plot!(sol.t, sol[3,:]; marker = :d, label="R(t)", xlabel="t") -``` -Notice that our plot (and solutions) are now defined at precisely the specified -time points. *It is important to note that interpolation of the solution object -will no longer be exact for a pure jump process, as the solution values at jump -times have not been stored. i.e for `t` a time we did not save at `sol(t)` will -no longer give the exact value of the solution at `t`.* - +involves `ConstantRateJump`s, `MassActionJump`s, and/or bounded +`VariableRateJump`s (see below). In those cases a substantial performance +benefit may be gained by using [`SSAStepper`](@ref). Note, `SSAStepper` is a +more limited time-stepper which only supports discrete events, and does not +allow simultaneous coupled ODEs/SDEs, or general `VariableRateJump`s. It is, +however, very efficient for pure jump problems involving only +`ConstantRateJump`s, `MassActionJump`s, and bounded `VariableRateJump`s. ## [Defining the Jumps Directly: `MassActionJump`](@id MassActionJumpSect) For `ConstantRateJump`s that can be represented as mass action reactions a @@ -499,19 +446,191 @@ function that gets evaluated is ``` with ``\hat{k} = k / \prod_{i=1}^{N} R_i!`` the renormalized rate constant. Passing the keyword argument `scale_rates = false` will disable -`MassActionJump`s internally rescaling the rate constant by `\prod_{i=1}^{N} -R_i!`. +`MassActionJump`s internally rescaling the rate constant by ``(\prod_{i=1}^{N} +R_i!)^{-1}``. For chemical reaction systems Catalyst.jl automatically groups reactions into their optimal jump representation. +### *Caution about ConstantRateJumps and MassActionJumps* +`ConstantRateJump`s and `MassActionJump`s are restricted in that they assume the +rate functions are constant at all times between two consecutive jumps of the +system. That is, any species/states or parameters that a rate function depends +on must not change between the times at which two consecutive jumps occur. Such +conditions are violated if one has a time dependent parameter like ``\beta(t)`` +or if some of the solution components, say `u[2]`, may also evolve through a +coupled ODE, SDE, or a general [`VariableRateJump`](@ref) (see below for +examples). For problems where the rate function may change between consecutive +jumps, bounded or general [`VariableRateJump`](@ref)s must be used. + +Thus in the examples above, +```julia +rate1(u,p,t) = p[1]*u[1]*u[2] +rate2(u,p,t) = p[2]*u[2] +``` +both must be constant other than changes due to some other `ConstantRateJump` or +`MassActionJump` (the same restriction applies to `MassActionJump`s). Since +these rates only change when `u[1]` or `u[2]` is changed, and `u[1]` and `u[2]` +only change when one of the jumps occur, this setup is valid. However, a rate of +`t*p[1]*u[1]*u[2]` would not be valid because the rate would change in between +jumps, as would `p[2]*u[1]*u[4]` when `u[4]` is the solution to a continuous +problem such as an ODE/SDE, or can be changed by a general `VariableRateJump`. +Thus one must be careful to follow this rule when choosing rates. 
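To make this concrete, here is a small hypothetical illustration (not part of the SIR model above) of a time-dependent infection rate that is invalid as a `ConstantRateJump` but can be encoded as a bounded `VariableRateJump`; the helper names below are ours, not from the tutorial:

```julia
# Invalid as a ConstantRateJump: the rate changes continuously in t between jumps.
rate_td(u, p, t) = (1 + sin(t)) * p[1] * u[1] * u[2]

# Valid as a bounded VariableRateJump: since 0 <= 1 + sin(t) <= 2, the bound below
# holds over any window while u[1] and u[2] are unchanged, so any finite rate
# interval is acceptable here.
urate_td(u, p, t) = 2 * p[1] * u[1] * u[2]
rateinterval_td(u, p, t) = 1.0
function affect_td!(integrator)   # same state change as the infection jump above
    integrator.u[1] -= 1
    integrator.u[2] += 1
    nothing
end
jump_td = VariableRateJump(rate_td, affect_td!; urate = urate_td,
                           rateinterval = rateinterval_td)
```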
+
+In summary, if a particular jump process has a rate function that depends
+explicitly or implicitly on a continuously changing quantity, you need to use a
+[`VariableRateJump`](@ref).
+
+## [Defining the Jumps Directly using a bounded `VariableRateJump`](@id VariableRateJumpWithBnds)
+
+Assume that the infection rate is now decreasing over time. That is, when
+individuals get infected they immediately reach peak infectivity. The force of
+infection then decreases exponentially to a basal level. In this case, we must
+keep track of the time of infection events. Let the history ``H(t)`` contain the
+timestamps of all ``I(t)`` active infections. The rate of infection is then
+```math
+\beta_1 S(t) I(t) + \alpha S(t) \sum_{t_i \in H(t)} \exp(-\gamma (t - t_i))
+```
+where ``\beta_1`` is the basal rate of infection, ``\alpha`` is the spike in the
+rate of infection, and ``\gamma`` is the rate at which the spike decreases. Here
+we choose parameters such that the infectivity rate due to a single infected
+individual returns to the basal rate after spiking to ``\beta_1 + \alpha``. In
+other words, we are modelling a situation in which infected individuals
+gradually become less infectious prior to recovering. Our parameters are then
+```@example tut2
+β1 = 0.001 / 1000.0
+α = 0.1 / 1000.0
+γ = 0.05
+p1 = (β1, ν, α, γ)
+```
+
+We define a vector `H` to hold the timestamps of active infections. Then, we
+define an infection reaction as a bounded `VariableRateJump`, requiring us to
+again provide `rate` and `affect!` functions, but also give functions that
+calculate an upper-bound on the rate (`urate(u,p,t)`), an optional lower-bound
+on the rate (`lrate(u,p,t)`), and a time window over which the bounds remain
+valid as long as any states these functions depend on are unchanged
+(`rateinterval(u,p,t)`). The lower and upper bounds of the rate should be valid
+from the time `t` at which they are computed until `t + rateinterval(u, p, t)`:
+
+```@example tut2
+H = zeros(Float64, 10)
+rate3(u, p, t) = p[1]*u[1]*u[2] + p[3]*u[1]*sum(exp(-p[4]*(t - _t)) for _t in H)
+lrate = rate1 # β*S*I
+urate = rate3
+rateinterval(u, p, t) = 1 / (2*urate(u, p, t))
+function affect3!(integrator)
+    integrator.u[1] -= 1 # S -> S - 1
+    integrator.u[2] += 1 # I -> I + 1
+    push!(H, integrator.t)
+    nothing
+end
+jump3 = VariableRateJump(rate3, affect3!; lrate, urate, rateinterval)
+```
+Note that here we set the lower bound rate to be the normal SIR infection rate,
+and set the upper bound rate equal to the new rate of infection (`rate3`). As
+required for bounded `VariableRateJump`s, for any `s` in `[t, t +
+rateinterval(u,p,t)]` the bound `lrate(u,p,t) <= rate3(u,p,s) <= urate(u,p,t)`
+holds provided the dependent states `u[1]` and `u[2]` have not changed.
+
+Next, we redefine the recovery jump's `affect!` such that a random infection is
+removed from `H` for every recovery.
+
+```@example tut2
+rate4(u, p, t) = p[2] * u[2] # ν*I
+function affect4!(integrator)
+    integrator.u[2] -= 1
+    integrator.u[3] += 1
+    length(H) > 0 && deleteat!(H, rand(1:length(H)))
+    nothing
+end
+jump4 = ConstantRateJump(rate4, affect4!)
+```
+
+With the jumps defined, we can build a
+[`DiscreteProblem`](https://docs.sciml.ai/DiffEqDocs/stable/types/discrete_types/).
+Bounded `VariableRateJump`s over a `DiscreteProblem` can currently only be
+simulated with the `Coevolve` aggregator. The aggregator requires a dependency
+graph to indicate, when a given jump occurs, which other jumps in the system
+should have their rates recalculated (i.e. those whose rates depend on states
+modified by an occurrence of the first jump). This ensures that rates, rate
+bounds, and rate intervals are recalculated when invalidated due to changes in
+`u`. For the current example, both processes mutually affect each other so we
+have
+
+```@example tut2
+dep_graph = [[1,2], [1,2]]
+```
+Here `dep_graph[2] = [1,2]` indicates that when the second jump occurs, both the
+first and second jumps need to have their rates recalculated. We can then
+construct our `JumpProblem` as before, specifying the `Coevolve` aggregator:
+
+```@example tut2
+prob = DiscreteProblem(u₀, tspan, p1)
+jump_prob = JumpProblem(prob, Coevolve(), jump3, jump4; dep_graph)
+```
+
+We now have a problem that can be solved with `SSAStepper`, which handles
+time-stepping the `Coevolve` aggregator from jump to jump:
+
+```@example tut2
+sol = solve(jump_prob, SSAStepper())
+plot(sol, label=["S(t)" "I(t)" "R(t)"])
+```
+
+We see that the time-dependent infection rate leads to a lower peak of the
+infection throughout the population.
+
+Note that bounded `VariableRateJump`s over `DiscreteProblem`s can be quite
+general, but they cannot handle rates that change due to a variable evolving
+through an ODE/SDE. A rate such as `p[2]*u[1]*u[4]`, when `u[4]` is the
+solution of a continuous problem such as an ODE or SDE, can only be handled
+using a general `VariableRateJump` within a continuous integrator as discussed
+[below](@ref VariableRateJumpSect).
+
+## [Reducing Memory Use: Controlling Saving Behavior](@id save_positions_docs)
+
+Note that jumps act via DifferentialEquations.jl's [callback
+interface](https://docs.sciml.ai/DiffEqDocs/stable/features/callback_functions/),
+which defaults to saving at each event. This is required in order to accurately
+resolve every discontinuity exactly (and this is what allows for perfectly
+vertical lines in plots!). However, in many cases when using jump problems you
+may wish to reduce the saving burden caused by large numbers of jumps. To do
+this, you set the `save_positions` keyword argument to `JumpProblem`. Just like
+for other
+[callbacks](https://docs.sciml.ai/DiffEqDocs/stable/features/callback_functions/),
+this is a tuple `(bool1, bool2)` which sets whether to save before or after a
+jump. If we do not want to save at every jump, we would thus pass:
+```@example tut2
+prob = DiscreteProblem(u₀, tspan, p)
+jump_prob = JumpProblem(prob, Direct(), jump, jump2; save_positions = (false, false))
+```
+Now the saving controls associated with the integrator should be specified; see
+the main [SciML
+Docs](https://docs.sciml.ai/DiffEqDocs/stable/basics/common_solver_opts/)
+for saving options.
For example, we can use `saveat = 10.0` to save at an evenly +spaced grid: +```@example tut2 +sol = solve(jump_prob, SSAStepper(); saveat = 10.0) + +# we plot each solution component separately since +# the graph should no longer be a step function +plot(sol.t, sol[1,:]; marker = :o, label="S(t)", xlabel="t") +plot!(sol.t, sol[2,:]; marker = :x, label="I(t)", xlabel="t") +plot!(sol.t, sol[3,:]; marker = :d, label="R(t)", xlabel="t") +``` +Notice that our plot (and solutions) are now defined at precisely the specified +time points. *It is important to note that interpolation of the solution object +will no longer be exact for a pure jump process, as the solution values at jump +times have not been stored. i.e for `t` a time we did not save at `sol(t)` will +no longer give the exact value of the solution at `t`.* + +## Defining the Jumps Directly: Mixing `ConstantRateJump`/`VariableRateJump` and `MassActionJump` +Suppose we now want to add in to the original SIR model another jump that can +not be represented as a mass action reaction. We can create a new +`ConstantRateJump` and simulate a hybrid system using both the `MassActionJump` +for the two original reactions, and the new `ConstantRateJump`. Let's suppose we +want to let susceptible people be born with the following jump rate: ```@example tut2 birth_rate(u,p,t) = 10.0 * u[1] / (200.0 + u[1]) + 10.0 function birth_affect!(integrator) @@ -527,11 +646,14 @@ sol = solve(jump_prob, SSAStepper()) plot(sol; label=["S(t)" "I(t)" "R(t)"]) ``` +Note that we can combine `MassActionJump`s, `ConstantRateJump`s and bounded +`VariableRateJump`s using the `Coevolve` aggregator. + ## Adding Jumps to a Differential Equation -If we instead used some form of differential equation instead of a -`DiscreteProblem`, we would couple the jumps/reactions to the differential -equation. Let's define an ODE problem, where the continuous part only acts on -some new 4th component: +If we instead used some form of differential equation via an `ODEProblem` +instead of a `DiscreteProblem`, we can couple the jumps/reactions to the +differential equation. Let's define an ODE problem, where the continuous part +only acts on some new 4th component: ```@example tut2 using OrdinaryDiffEq function f(du, u, p, t) @@ -544,40 +666,50 @@ prob = ODEProblem(f, u₀, tspan, p) Notice we gave the 4th component a starting value of 100.0, and used floating point numbers for the initial condition since some solution components now evolve continuously. The same steps as above will allow us to solve this hybrid -equation when using `ConstantRateJumps` (or `MassActionJump`s). For example, we -can solve it using the `Tsit5()` method via: +equation when using `ConstantRateJump`s, `MassActionJump`s, or +`VariableRateJump`s. For example, we can solve it using the `Tsit5()` method +via: ```@example tut2 jump_prob = JumpProblem(prob, Direct(), jump, jump2) sol = solve(jump_prob, Tsit5()) plot(sol; label=["S(t)" "I(t)" "R(t)" "u₄(t)"]) ``` -## [Adding a VariableRateJump](@id VariableRateJumpSect) +Note, when using `ConstantRateJump`s, `MassActionJump`s, and bounded +`VariableRateJump`s, the ODE derivative function `f(du, u, p, t)` should not +modify any states in `du` that the corresponding jump rate functions depend on. +However, the opposite where jumps modify the ODE variables is allowed. If one +needs to change a component of `u` in the ODE for which a rate function is +dependent, then one must use a general `VariableRateJump` as described in the +next section. 
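To make the allowed direction concrete, here is a hedged sketch (the `shed_*` names are hypothetical and not part of the tutorial) of a jump whose rate depends only on the jump-driven `u[2]` but whose `affect!` adds to the continuously evolving `u[4]`; this is permitted since the ODE does not drive any state the jump's rate depends on:
```julia
# hypothetical sketch: the rate depends only on u[2] (changed solely by jumps),
# so this can remain a ConstantRateJump even though its affect! modifies the
# ODE-driven component u[4]
shed_rate(u, p, t) = p[2] * u[2]
shed_affect!(integrator) = (integrator.u[4] += 1.0; nothing)
shed_jump = ConstantRateJump(shed_rate, shed_affect!)
```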
+ +## [Adding a general VariableRateJump that Depends on a Continuously Evolving Variable](@id VariableRateJumpSect) Now let's consider adding a reaction whose rate changes continuously with the differential equation. To continue our example, let there be a new reaction with rate depending on `u[4]` of the form ``u_4 \to u_4 + \textrm{I}``, with a rate constant of `1e-2`: ```@example tut2 -rate3(u, p, t) = 1e-2 * u[4] -function affect3!(integrator) +rate5(u, p, t) = 1e-2 * u[4] +function affect5!(integrator) integrator.u[2] += 1 # I -> I + 1 nothing end -jump3 = VariableRateJump(rate3, affect3!) +jump5 = VariableRateJump(rate5, affect5!) ``` -Notice, since `rate3` depends on a variable that evolves continuously, and hence -is not constant between jumps, *we must use a `VariableRateJump`*. +Notice, since `rate5` depends on a variable that evolves continuously, and hence +is not constant between jumps, *we must use a general `VariableRateJump` without +upper/lower bounds*. Solving the equation is exactly the same: ```@example tut2 u₀ = [999.0, 10.0, 0.0, 1.0] prob = ODEProblem(f, u₀, tspan, p) -jump_prob = JumpProblem(prob, Direct(), jump, jump2, jump3) +jump_prob = JumpProblem(prob, Direct(), jump, jump2, jump5) sol = solve(jump_prob, Tsit5()) plot(sol; label=["S(t)" "I(t)" "R(t)" "u₄(t)"]) ``` -*Note that `VariableRateJump`s require using a continuous problem, like an -ODE/SDE/DDE/DAE problem, and using floating point initial conditions.* +*Note that general `VariableRateJump`s require using a continuous problem, like +an ODE/SDE/DDE/DAE problem, and using floating point initial conditions.* Lastly, we are not restricted to ODEs. For example, we can solve the same jump problem except with multiplicative noise on `u[4]` by using an `SDEProblem` @@ -588,12 +720,12 @@ function g(du, u, p, t) du[4] = 0.1u[4] end prob = SDEProblem(f, g, [999.0, 1.0, 0.0, 1.0], (0.0, 250.0), p) -jump_prob = JumpProblem(prob, Direct(), jump, jump2, jump3) +jump_prob = JumpProblem(prob, Direct(), jump, jump2, jump5) sol = solve(jump_prob, SRIW1()) plot(sol; label=["S(t)" "I(t)" "R(t)" "u₄(t)"]) ``` -For more details about `VariableRateJump`s see [Defining a Variable Rate +For more details about general `VariableRateJump`s see [Defining a Variable Rate Jump](@ref). diff --git a/docs/src/tutorials/jump_diffusion.md b/docs/src/tutorials/jump_diffusion.md index 3f037109..de889168 100644 --- a/docs/src/tutorials/jump_diffusion.md +++ b/docs/src/tutorials/jump_diffusion.md @@ -120,6 +120,12 @@ plot(sol) In this way we have solve a mixed jump-ODE, i.e. a piecewise deterministic Markov process. +Note that in this case, the rates of the `VariableRateJump`s depend on a +variable that is driven by an `ODEProblem`, and thus they would not satisfy the +conditions to be represented as bounded `VariableRateJump`s (and hence can not +be simulated with the `Coevolve` aggregator). + + ## Jump Diffusion Now we will finally solve the jump diffusion problem. The steps are the same as before, except we now start with a `SDEProblem` instead of an `ODEProblem`. diff --git a/docs/src/tutorials/simple_poisson_process.md b/docs/src/tutorials/simple_poisson_process.md index 85d07842..0f96bb49 100644 --- a/docs/src/tutorials/simple_poisson_process.md +++ b/docs/src/tutorials/simple_poisson_process.md @@ -6,14 +6,15 @@ primarily in chemical or population process models, where several types of jumps may occur, can skip directly to the [second tutorial](@ref ssa_tutorial) for a tutorial covering similar material but focused on the SIR model. 
-JumpProcesses allows the simulation of jump processes where the transition rate, i.e. -intensity or propensity, can be a function of the current solution, current +JumpProcesses allows the simulation of jump processes where the transition rate, +i.e. intensity or propensity, can be a function of the current solution, current parameters, and current time. Throughout this tutorial these are denoted by `u`, `p` and `t`. Likewise, when a jump occurs any DifferentialEquations.jl-compatible change to the current system state, as encoded by a [DifferentialEquations.jl integrator](https://docs.sciml.ai/DiffEqDocs/stable/basics/integrator/), is -allowed. This includes changes to the current state or to parameter values. +allowed. This includes changes to the current state or to parameter values (for +example via a callback). This tutorial requires several packages, which can be added if not already installed via @@ -29,7 +30,7 @@ default(; lw = 2) ``` ## `ConstantRateJump`s -Our first example will be to simulate a simple Poission counting process, +Our first example will be to simulate a simple Poisson counting process, ``N(t)``, with a constant transition rate of λ. We can interpret this as a birth process where new individuals are created at the constant rate λ. ``N(t)`` then gives the current population size. In terms of a unit Poisson counting process, @@ -61,7 +62,7 @@ sol = solve(jprob, SSAStepper()) plot(sol, label="N(t)", xlabel="t", legend=:bottomright) ``` -We can define and simulate our jump process using JumpProcesses. We first load our +We can define and simulate our jump process as follows. We first load our packages ```@example tut1 using JumpProcesses, Plots @@ -104,29 +105,28 @@ tspan = (0.0, 10.0) ``` Finally, we construct the associated SciML problem types and generate one realization of the process. We first create a `DiscreteProblem` to encode that -we are simulating a process that evolves in discrete time steps. Note, this -currently requires that the process has constant transition rates *between* -jumps +we are simulating a process that evolves in discrete time steps. ```@example tut1 dprob = DiscreteProblem(u₀, tspan, p) ``` We next create a [`JumpProblem`](@ref) that wraps the discrete problem, and -specifies which algorithm to use for determining next jump times (and in the -case of multiple possible jumps the next jump type). Here we use the classical -`Direct` method, proposed by Gillespie in the chemical reaction context, but -going back to earlier work by Doob and others (and also known as Kinetic Monte -Carlo in the physics literature) +specifies which algorithm, called an aggregator in JumpProcesses, to use for +determining next jump times (and in the case of multiple possible jumps the next +jump type). 
Here we use the classical `Direct` method, proposed by Gillespie in
+the chemical reaction context, but going back to earlier work by Doob and others
+(and also known as Kinetic Monte Carlo in the physics literature)
```@example tut1
# a jump problem, specifying we will use the Direct method to sample
# jump times and events, and that our jump is encoded by crj
jprob = JumpProblem(dprob, Direct(), crj)
```
-We are finally ready to simulate one realization of our jump process
+We are finally ready to simulate one realization of our jump process, selecting
+`SSAStepper` to handle time-stepping our system from jump to jump
```@example tut1
# now we simulate the jump process in time, using the SSAStepper time-stepper
sol = solve(jprob, SSAStepper())
-plot(sol, label="N(t)", xlabel="t", legend=:bottomright)
+plot(sol, labels = "N(t)", xlabel = "t", legend = :bottomright)
```
### More general `ConstantRateJump`s
@@ -157,11 +157,11 @@ second `ConstantRateJump`. We then construct the corresponding problems,
passing both jumps to `JumpProblem`, and can solve as before
```@example tut1
p = (λ = 2.0, μ = 1.5)
-u₀ = [0,0] # (N(0), D(0))
+u₀ = [0, 0] # (N(0), D(0))
dprob = DiscreteProblem(u₀, tspan, p)
jprob = JumpProblem(dprob, Direct(), crj, deathcrj)
sol = solve(jprob, SSAStepper())
-plot(sol, label=["N(t)" "D(t)"], xlabel="t", legend=:topleft)
+plot(sol, labels = ["N(t)" "D(t)"], xlabel = "t", legend = :topleft)
```
In the next tutorial we will also introduce [`MassActionJump`](@ref)s, which are
@@ -175,12 +175,13 @@ by adding or subtracting a constant vector from `u`.
## `VariableRateJump`s for processes that are not constant between jumps
So far we have assumed that our jump processes have transition rates that are
constant in between jumps. In many applications this may be a limiting
-assumption. To support such models JumpProcesses has the [`VariableRateJump`](@ref)
-type, which represents jump processes that have an arbitrary time dependence in
-the calculation of the transition rate, including transition rates that depend
-on states which can change in between `ConstantRateJump`s. Let's consider the
-previous example, but now let the birth rate be time dependent, ``b(t) = \lambda
-\left(\sin(\pi t / 2) + 1\right)``, so that our model becomes
+assumption. To support such models JumpProcesses has the
+[`VariableRateJump`](@ref) type, which represents jump processes that have an
+arbitrary time dependence in the calculation of the transition rate, including
+transition rates that depend on states which can change in between two jumps
+occurring. Let's consider the previous example, but now let the birth rate be
+time dependent, ``b(t) = \lambda \left(\sin(\pi t / 2) + 1\right)``, so that our
+model becomes
```math
\begin{align*}
N(t) &= Y_b\left(\int_0^t \lambda \left( \sin\left(\tfrac{\pi s}{2}\right) + 1 \right) \, d s\right) - Y_d \left(\int_0^t \mu N(s^-) \, ds \right), \\
@@ -188,28 +189,87 @@ D(t) &= Y_d \left(\int_0^t \mu N(s^-) \, ds \right).
\end{align*}
```
-We'll then re-encode the first jump as a
-`VariableRateJump`
+
+The birth rate is cyclical, oscillating between ``0`` and an upper-bound of
+``2 λ``. We'll then re-encode the first (birth) jump as a
+`VariableRateJump`. Two types of `VariableRateJump`s are supported, general and
+bounded. The latter are generally more performant, but are also more restrictive
+in when they can be used. They also require specifying additional information
+beyond just `rate` and `affect!` functions.
+
+Let's see how to build a bounded `VariableRateJump` encoding our new birth
+process. We first specify the rate and affect functions, just like for a
+`ConstantRateJump`,
```@example tut1
rate1(u,p,t) = p.λ * (sin(pi*t/2) + 1)
affect1!(integrator) = (integrator.u[1] += 1)
-vrj = VariableRateJump(rate1, affect1!)
```
-Because this new jump can modify the value of `u[1]` between death events, and
-the death transition rate depends on this value, we must also update our death
-jump process to also be a `VariableRateJump`
+We next provide functions that determine a time interval over which the rate is
+bounded from above given `u`, `p` and `t`. From these we can construct the new
+bounded `VariableRateJump`:
+```@example tut1
+# We require that rate1(u,p,s) <= urate(u,p,t)
+# for t <= s <= t + rateinterval(u,p,t)
+rateinterval(u, p, t) = typemax(t)
+urate(u, p, t) = 2 * p.λ
+
+# Optionally, a lower bound valid over the same interval can be given via the
+# `lrate` keyword, which may boost computational performance. Since the birth
+# rate here periodically drops to zero, no positive constant lower bound is
+# valid over this (infinite) interval, so we do not provide one.
+
+# now we construct the bounded VariableRateJump
+vrj1 = VariableRateJump(rate1, affect1!; urate, rateinterval)
+```
+
+Finally, to efficiently simulate the new jump process we must also specify a
+dependency graph. This indicates, when a given jump occurs, which jumps in the
+system need to have their rates and/or rate bounds recalculated (for example,
+due to depending on changed components in `u`). We also assume the convention
+that a given jump depends on itself. Internally, JumpProcesses preserves the
+relative ordering of jumps of each distinct type, but always reorders all
+`ConstantRateJump`s to appear before any `VariableRateJump`s. As such, the
+`ConstantRateJump` representing the death process will have internal index 1,
+and our new bounded `VariableRateJump` for birth will have internal index 2.
+Since birth modifies the population size `u[1]`, and death occurs at a rate
+proportional to `u[1]`, when birth occurs we need to recalculate both rates.
+In contrast, the birth rate depends only on time and not on `u[1]`, so when
+death occurs we only need to recalculate the death rate. The graph below
+encodes the dependents of the death (`dep_graph[1]`) and birth (`dep_graph[2]`)
+jumps respectively
+```@example tut1
+dep_graph = [[1], [1,2]]
+```
+
+We can then construct the corresponding problem, passing both jumps to
+`JumpProblem` as well as the dependency graph. We must use an aggregator that
+supports bounded `VariableRateJump`s; in this case we choose the `Coevolve`
+aggregator.
+```@example tut1
+jprob = JumpProblem(dprob, Coevolve(), vrj1, deathcrj; dep_graph)
+sol = solve(jprob, SSAStepper())
+plot(sol, labels = ["N(t)" "D(t)"], xlabel = "t", legend = :topleft)
+```
+
+If we did not know the upper rate bound or rate interval functions for the
+time-dependent rate, we would have to use a continuous problem type and a
+general `VariableRateJump` to correctly handle calculating the jump times.
+Under this assumption we would define a general `VariableRateJump` as follows:
+```@example tut1
+vrj2 = VariableRateJump(rate1, affect1!)
+```
+
+Since the death rate now depends on a variable, `u[1]`, modified by a general
+`VariableRateJump` (i.e. one that is not bounded), we also need to redefine the
+death jump process as a general `VariableRateJump`
```@example tut1
deathvrj = VariableRateJump(deathrate, deathaffect!)
```
-Note, if the death rate only depended on values that were unchanged by a
-variable rate jump, then it could have remained a `ConstantRateJump`.
This would -have been the case if, for example, it depended on `u[2]` instead of `u[1]`. - -To simulate our jump process we now need to use a continuous problem type to -properly handle determining the jump times. We do this by constructing an -ordinary differential equation problem, `ODEProblem`, but setting the ODE -derivative to preserve the state (i.e. to zero). We are essentially defining a -combined ODE-jump process, i.e. a [piecewise deterministic Markov + +To simulate our jump process we now need to construct a continuous problem type +to couple the jumps to, for example an ordinary differential equation (ODE) or +stochastic differential equation (SDE). Let's use an ODE, encoded via an +`ODEProblem`. We simply set the ODE derivative to zero to preserve the state. We +are essentially defining a combined ODE-jump process, i.e. a [piecewise +deterministic Markov process](https://en.wikipedia.org/wiki/Piecewise-deterministic_Markov_process), but one where the ODE is trivial and does not change the state. To use this problem type and the ODE solvers we first load `OrdinaryDiffEq.jl` or @@ -225,7 +285,7 @@ using OrdinaryDiffEq # or using DifferentialEquations ``` We can then construct our ODE problem with a trivial ODE derivative component. -Note, to work with the ODE solver time stepper we must change our initial +Note, to work with the ODE solver time stepper we must also change our initial condition to be floating point valued ```@example tut1 function f!(du, u, p, t) @@ -234,15 +294,19 @@ function f!(du, u, p, t) end u₀ = [0.0, 0.0] oprob = ODEProblem(f!, u₀, tspan, p) -jprob = JumpProblem(oprob, Direct(), vrj, deathvrj) +jprob = JumpProblem(oprob, Direct(), vrj2, deathvrj) ``` -We simulate our jump process, using the `Tsit5` ODE solver as the time stepper in -place of `SSAStepper` +We can now simulate our jump process, using the `Tsit5` ODE solver as the time +stepper in place of `SSAStepper` ```@example tut1 sol = solve(jprob, Tsit5()) plot(sol, label=["N(t)" "D(t)"], xlabel="t", legend=:topleft) ``` +For more details on when bounded vs. general `VariableRateJump`s can be used, +see the [next tutorial](@ref ssa_tutorial) and the [Jump Problems](@ref +jump_problem_type) documentation page. 
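If many independent realizations of this hybrid ODE-jump model are needed, the SciML ensemble interface can be used; a minimal sketch, assuming the `jprob` and solver from the block above (not part of the original tutorial):
```julia
# minimal sketch: run several independent realizations of the hybrid
# ODE-jump problem defined above and collect the solutions
eprob = EnsembleProblem(jprob)
esols = solve(eprob, Tsit5(), EnsembleSerial(); trajectories = 10)
```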
+ ## Having a Random Jump Distribution Suppose we want to simulate a compound Poisson process, ``G(t)``, where ```math @@ -283,4 +347,4 @@ dprob = DiscreteProblem(u₀, tspan, p) jprob = JumpProblem(dprob, Direct(), crj) sol = solve(jprob, SSAStepper()) plot(sol, label=["N(t)" "G(t)"], xlabel="t") -``` \ No newline at end of file +``` diff --git a/src/JumpProcesses.jl b/src/JumpProcesses.jl index 600be47a..0794d48f 100644 --- a/src/JumpProcesses.jl +++ b/src/JumpProcesses.jl @@ -50,6 +50,7 @@ include("aggregators/prioritytable.jl") include("aggregators/directcr.jl") include("aggregators/rssacr.jl") include("aggregators/rdirect.jl") +include("aggregators/coevolve.jl") # spatial: include("spatial/spatial_massaction_jump.jl") @@ -72,8 +73,8 @@ include("coupling.jl") include("SSA_stepper.jl") include("simple_regular_solve.jl") -export ConstantRateJump, VariableRateJump, RegularJump, MassActionJump, - JumpSet +export ConstantRateJump, VariableRateJump, RegularJump, + MassActionJump, JumpSet export JumpProblem @@ -83,6 +84,7 @@ export Direct, DirectFW, SortingDirect, DirectCR export BracketData, RSSA export FRM, FRMFW, NRM export RSSACR, RDirect +export Coevolve export get_num_majumps, needs_depgraph, needs_vartojumps_map diff --git a/src/SSA_stepper.jl b/src/SSA_stepper.jl index cf1feac9..dfbb945b 100644 --- a/src/SSA_stepper.jl +++ b/src/SSA_stepper.jl @@ -1,12 +1,13 @@ """ $(TYPEDEF) -Highly efficient integrator for pure jump problems that involve only -`ConstantRateJump`s and/or `MassActionJump`s. +Highly efficient integrator for pure jump problems that involve only `ConstantRateJump`s, +`MassActionJump`s, and/or `VariableRateJump`s *with rate bounds*. ## Notes -- Only works with `JumProblem`s defined from `DiscreteProblem`s. -- Only works with collections of `ConstantRateJump`s and `MassActionJump`s. +- Only works with `JumpProblem`s defined from `DiscreteProblem`s. +- Only works with collections of `ConstantRateJump`s, `MassActionJump`s, and + `VariableRateJump`s with rate bounds. - Only supports `DiscreteCallback`s for events. ## Examples diff --git a/src/aggregators/aggregators.jl b/src/aggregators/aggregators.jl index bc5e354f..0ec2e28a 100644 --- a/src/aggregators/aggregators.jl +++ b/src/aggregators/aggregators.jl @@ -8,9 +8,9 @@ tuples. Fastest for a small (total) number of `ConstantRateJump`s or `MassActionJump`s (~10). For larger numbers of possible jumps use other methods. -Gillespie, Daniel T. (1976). A General Method for Numerically Simulating the -Stochastic Time Evolution of Coupled Chemical Reactions. Journal of -Computational Physics. 22 (4): 403–434. doi:10.1016/0021-9991(76)90041-3. +Daniel T. Gillespie, A general method for numerically simulating the stochastic +time evolution of coupled chemical reactions, Journal of Computational Physics, +22 (4), 403–434 (1976). doi:10.1016/0021-9991(76)90041-3. """ struct Direct <: AbstractAggregatorAlgorithm end @@ -21,9 +21,9 @@ numbers of `ConstantRateJump`s. However, for such large numbers of jump different classes of aggregators are usually much more performant (i.e. `SortingDirect`, `DirectCR`, `RSSA` or `RSSACR`). -Gillespie, Daniel T. (1976). A General Method for Numerically Simulating the -Stochastic Time Evolution of Coupled Chemical Reactions. Journal of -Computational Physics. 22 (4): 403–434. doi:10.1016/0021-9991(76)90041-3. +Daniel T. Gillespie, A general method for numerically simulating the stochastic +time evolution of coupled chemical reactions, Journal of Computational Physics, +22 (4), 403–434 (1976). 
doi:10.1016/0021-9991(76)90041-3. """ struct DirectFW <: AbstractAggregatorAlgorithm end @@ -33,13 +33,13 @@ for systems with large numbers of jumps with special structure (for example a linear chain of reactions, or jumps corresponding to particles hopping on a grid or graph). -- A. Slepoy, A.P. Thompson and S.J. Plimpton, A constant-time kinetic Monte - Carlo algorithm for simulation of large biochemical reaction networks, Journal - of Chemical Physics, 128 (20), 205101 (2008). doi:10.1063/1.2919546 +A. Slepoy, A.P. Thompson and S.J. Plimpton, A constant-time kinetic Monte +Carlo algorithm for simulation of large biochemical reaction networks, Journal +of Chemical Physics, 128 (20), 205101 (2008). doi:10.1063/1.2919546. -- S. Mauch and M. Stalzer, Efficient formulations for exact stochastic - simulation of chemical systems, ACM Transactions on Computational Biology and - Bioinformatics, 8 (1), 27-35 (2010). doi:10.1109/TCBB.2009.47 +S. Mauch and M. Stalzer, Efficient formulations for exact stochastic +simulation of chemical systems, ACM Transactions on Computational Biology and +Bioinformatics, 8 (1), 27-35 (2010). doi:10.1109/TCBB.2009.47. """ struct DirectCR <: AbstractAggregatorAlgorithm end @@ -49,9 +49,9 @@ sized systems (tens of jumps), or systems where a few jumps occur much more frequently than others. J. M. McCollum, G. D. Peterson, C. D. Cox, M. L. Simpson and N. F. Samatova, The - sorting direct method for stochastic simulation of biochemical systems with - varying reaction execution behavior, Computational Biology and Chemistry, 30 - (1), 39049 (2006). doi:10.1016/j.compbiolchem.2005.10.007 +sorting direct method for stochastic simulation of biochemical systems with +varying reaction execution behavior, Computational Biology and Chemistry, 30 +(1), 39049 (2006). doi:10.1016/j.compbiolchem.2005.10.007. """ struct SortingDirect <: AbstractAggregatorAlgorithm end @@ -59,13 +59,13 @@ struct SortingDirect <: AbstractAggregatorAlgorithm end The Rejection SSA method. One of the best methods for systems with hundreds to many thousands of jumps (along with `RSSACR`) and sparse dependency graphs. -- V. H. Thanh, C. Priami and R. Zunino, Efficient rejection-based simulation of - biochemical reactions with stochastic noise and delays, Journal of Chemical - Physics, 141 (13), 134116 (2014). doi:10.1063/1.4896985 +V. H. Thanh, C. Priami and R. Zunino, Efficient rejection-based simulation of +biochemical reactions with stochastic noise and delays, Journal of Chemical +Physics, 141 (13), 134116 (2014). doi:10.1063/1.4896985 -- V. H. Thanh, R. Zunino and C. Priami, On the rejection-based algorithm for - simulation and analysis of large-scale reaction networks, Journal of Chemical - Physics, 142 (24), 244106 (2015). doi:10.1063/1.4922923 +V. H. Thanh, R. Zunino and C. Priami, On the rejection-based algorithm for +simulation and analysis of large-scale reaction networks, Journal of Chemical +Physics, 142 (24), 244106 (2015). doi:10.1063/1.4922923. """ struct RSSA <: AbstractAggregatorAlgorithm end @@ -73,15 +73,15 @@ struct RSSA <: AbstractAggregatorAlgorithm end The Rejection SSA Composition-Rejection method. Often the best performer for systems with tens of thousands of jumps and sparse depedency graphs. -V. H. Thanh, R. Zunino, and C. Priami, Efficient Constant-Time Complexity -Algorithm for Stochastic Simulation of Large Reaction Networks, IEEE/ACM -Transactions on Computational Biology and Bioinformatics, Vol. 14, No. 3, -657-667 (2017). +V. H. Thanh, R. Zunino, and C. 
Priami, Efficient constant-time complexity +algorithm for stochastic simulation of large reaction networks, IEEE/ACM +Transactions on Computational Biology and Bioinformatics, 14 (3), 657-667 +(2017). doi:10.1109/TCBB.2016.2530066. """ struct RSSACR <: AbstractAggregatorAlgorithm end """ -A rejection-based direct method. +A rejection-based direct method. """ struct RDirect <: AbstractAggregatorAlgorithm end @@ -91,9 +91,9 @@ struct RDirect <: AbstractAggregatorAlgorithm end Gillespie's First Reaction Method. Should not be used for practical applications due to slow performance relative to all other methods. -Gillespie, Daniel T. (1976). A General Method for Numerically Simulating the -Stochastic Time Evolution of Coupled Chemical Reactions. Journal of -Computational Physics. 22 (4): 403–434. doi:10.1016/0021-9991(76)90041-3. +Daniel T. Gillespie, A general method for numerically simulating the stochastic +time evolution of coupled chemical reactions, Journal of Computational Physics, +22 (4), 403–434 (1976). doi:10.1016/0021-9991(76)90041-3. """ struct FRM <: AbstractAggregatorAlgorithm end @@ -102,9 +102,9 @@ Gillespie's First Reaction Method with `FunctionWrappers` for handling `ConstantRateJump`s. Should not be used for practical applications due to slow performance relative to all other methods. -Gillespie, Daniel T. (1976). A General Method for Numerically Simulating the -Stochastic Time Evolution of Coupled Chemical Reactions. Journal of -Computational Physics. 22 (4): 403–434. doi:10.1016/0021-9991(76)90041-3. +Daniel T. Gillespie, A general method for numerically simulating the stochastic +time evolution of coupled chemical reactions, Journal of Computational Physics, +22 (4), 403–434 (1976). doi:10.1016/0021-9991(76)90041-3. """ struct FRMFW <: AbstractAggregatorAlgorithm end @@ -115,10 +115,23 @@ one of `DirectCR`, `RSSA`, or `RSSACR` for such systems. M. A. Gibson and J. Bruck, Efficient exact stochastic simulation of chemical systems with many species and many channels, Journal of Physical Chemistry A, -104 (9), 1876-1889 (2000). doi:10.1021/jp993732q +104 (9), 1876-1889 (2000). doi:10.1021/jp993732q. """ struct NRM <: AbstractAggregatorAlgorithm end +""" +An adaptaton of the COEVOLVE algorithm for simulating any compound jump process +that evolves through time. This method handles variable intensity rates with +user-defined bounds and inter-dependent processes. It reduces to NRM when rates +are constant. + +M. Farajtabar, Y. Wang, M. Gomez-Rodriguez, S. Li, H. Zha, and L. Song, +COEVOLVE: a joint point process model for information diffusion and network +evolution, Journal of Machine Learning Research 18(1), 1305–1353 (2017). doi: +10.5555/3122009.3122050. +""" +struct Coevolve <: AbstractAggregatorAlgorithm end + # spatial methods """ @@ -128,8 +141,8 @@ determine where on the grid/graph the next jump occurs, and then the `Direct` method to determine which jump at the given location occurs. Elf, Johan and Ehrenberg, M, Spontaneous separation of bi-stable biochemical -systems into spatial domains of opposite phases,Systems Biology, 2004 vol. 1(2) -pp. 230-236. doi:10.1049/sb:20045021 +systems into spatial domains of opposite phases,Systems Biology, 1(2), 230-236 +(2004). doi:10.1049/sb:20045021. """ struct NSM <: AbstractAggregatorAlgorithm end @@ -138,14 +151,14 @@ The Direct Composition-Rejection Direct method. Uses the `DirectCR` method to determine where on the grid/graph a jump occurs, and the `Direct` method to determine which jump occurs at the sampled location. 
-Constant-complexity stochastic simulation algorithm with optimal binning, Kevin -R. Sanft and Hans G. Othmer, Journal of Chemical Physics 143, 074108 (2015); -doi: 10.1063/1.4928635 +Kevin R. Sanft and Hans G. Othmer, Constant-complexity stochastic simulation +algorithm with optimal binning, Journal of Chemical Physics 143, 074108 +(2015). doi: 10.1063/1.4928635. """ struct DirectCRDirect <: AbstractAggregatorAlgorithm end const JUMP_AGGREGATORS = (Direct(), DirectFW(), DirectCR(), SortingDirect(), RSSA(), FRM(), - FRMFW(), NRM(), RSSACR(), RDirect()) + FRMFW(), NRM(), RSSACR(), RDirect(), Coevolve()) # For JumpProblem construction without an aggregator struct NullAggregator <: AbstractAggregatorAlgorithm end @@ -156,6 +169,7 @@ needs_depgraph(aggregator::DirectCR) = true needs_depgraph(aggregator::SortingDirect) = true needs_depgraph(aggregator::NRM) = true needs_depgraph(aggregator::RDirect) = true +needs_depgraph(aggregator::Coevolve) = true # true if aggregator requires a map from solution variable to dependent jumps. # It is implicitly assumed these aggregators also require the reverse map, from @@ -164,6 +178,10 @@ needs_vartojumps_map(aggregator::AbstractAggregatorAlgorithm) = false needs_vartojumps_map(aggregator::RSSA) = true needs_vartojumps_map(aggregator::RSSACR) = true +# true if aggregator supports variable rates +supports_variablerates(aggregator::AbstractAggregatorAlgorithm) = false +supports_variablerates(aggregator::Coevolve) = true + is_spatial(aggregator::AbstractAggregatorAlgorithm) = false is_spatial(aggregator::NSM) = true is_spatial(aggregator::DirectCRDirect) = true diff --git a/src/aggregators/coevolve.jl b/src/aggregators/coevolve.jl new file mode 100644 index 00000000..84cf3cac --- /dev/null +++ b/src/aggregators/coevolve.jl @@ -0,0 +1,216 @@ +""" +Queue method. This method handles variable intensity rates. 
+""" +mutable struct CoevolveJumpAggregation{T, S, F1, F2, RNG, GR, PQ} <: + AbstractSSAJumpAggregator + next_jump::Int # the next jump to execute + prev_jump::Int # the previous jump that was executed + next_jump_time::T # the time of the next jump + end_time::T # the time to stop a simulation + cur_rates::Vector{T} # the last computed upper bound for each rate + sum_rate::Nothing # not used + ma_jumps::S # MassActionJumps + rates::F1 # vector of rate functions + affects!::F2 # vector of affect functions for VariableRateJumps + save_positions::Tuple{Bool, Bool} # tuple for whether to save the jumps before and/or after event + rng::RNG # random number generator + dep_gr::GR # map from jumps to jumps depending on it + pq::PQ # priority queue of next time + lrates::F1 # vector of rate lower bound functions + urates::F1 # vector of rate upper bound functions + rateintervals::F1 # vector of interval length functions + haslratevec::Vector{Bool} # vector of whether an lrate was provided for this vrj +end + +function CoevolveJumpAggregation(nj::Int, njt::T, et::T, crs::Vector{T}, sr::Nothing, + maj::S, rs::F1, affs!::F2, sps::Tuple{Bool, Bool}, + rng::RNG; u::U, dep_graph = nothing, lrates, urates, + rateintervals, haslratevec) where {T, S, F1, F2, RNG, U} + if dep_graph === nothing + if (get_num_majumps(maj) == 0) || !isempty(rs) + error("To use Coevolve a dependency graph between jumps must be supplied.") + else + dg = make_dependency_graph(length(u), maj) + end + else + # using a Set to ensure that edges are not duplicate + dgsets = [Set{Int}(append!(Int[], jumps, [var])) + for (var, jumps) in enumerate(dep_graph)] + dg = [sort!(collect(i)) for i in dgsets] + end + + num_jumps = get_num_majumps(maj) + length(urates) + + if length(dg) != num_jumps + error("Number of nodes in the dependency graph must be the same as the number of jumps.") + end + + pq = MutableBinaryMinHeap{T}() + CoevolveJumpAggregation{T, S, F1, F2, RNG, typeof(dg), + typeof(pq)}(nj, nj, njt, et, crs, sr, maj, rs, affs!, sps, rng, + dg, pq, lrates, urates, rateintervals, haslratevec) +end + +# creating the JumpAggregation structure (tuple-based variable jumps) +function aggregate(aggregator::Coevolve, u, p, t, end_time, constant_jumps, + ma_jumps, save_positions, rng; dep_graph = nothing, + variable_jumps = nothing, kwargs...) + AffectWrapper = FunctionWrappers.FunctionWrapper{Nothing, Tuple{Any}} + RateWrapper = FunctionWrappers.FunctionWrapper{typeof(t), + Tuple{typeof(u), typeof(p), typeof(t)}} + + ncrjs = (constant_jumps === nothing) ? 0 : length(constant_jumps) + nvrjs = (variable_jumps === nothing) ? 0 : length(variable_jumps) + nrjs = ncrjs + nvrjs + affects! = Vector{AffectWrapper}(undef, nrjs) + rates = Vector{RateWrapper}(undef, nvrjs) + lrates = similar(rates) + rateintervals = similar(rates) + urates = Vector{RateWrapper}(undef, nrjs) + haslratevec = zeros(Bool, nvrjs) + + idx = 1 + if constant_jumps !== nothing + for crj in constant_jumps + affects![idx] = AffectWrapper(integ -> (crj.affect!(integ); nothing)) + urates[idx] = RateWrapper(crj.rate) + idx += 1 + end + end + + if variable_jumps !== nothing + for (i, vrj) in enumerate(variable_jumps) + affects![idx] = AffectWrapper(integ -> (vrj.affect!(integ); nothing)) + urates[idx] = RateWrapper(vrj.urate) + idx += 1 + rates[i] = RateWrapper(vrj.rate) + rateintervals[i] = RateWrapper(vrj.rateinterval) + haslratevec[i] = haslrate(vrj) + lrates[i] = haslratevec[i] ? 
RateWrapper(vrj.lrate) : RateWrapper(nullrate) + end + end + + num_jumps = get_num_majumps(ma_jumps) + nrjs + cur_rates = Vector{typeof(t)}(undef, num_jumps) + sum_rate = nothing + next_jump = 0 + next_jump_time = typemax(t) + CoevolveJumpAggregation(next_jump, next_jump_time, end_time, cur_rates, sum_rate, + ma_jumps, rates, affects!, save_positions, rng; + u, dep_graph, lrates, urates, rateintervals, haslratevec) +end + +# set up a new simulation and calculate the first jump / jump time +function initialize!(p::CoevolveJumpAggregation, integrator, u, params, t) + p.end_time = integrator.sol.prob.tspan[2] + fill_rates_and_get_times!(p, u, params, t) + generate_jumps!(p, integrator, u, params, t) + nothing +end + +# execute one jump, changing the system state +function execute_jumps!(p::CoevolveJumpAggregation, integrator, u, params, t) + # execute jump + u = update_state!(p, integrator, u) + # update current jump rates and times + update_dependent_rates!(p, u, params, t) + nothing +end + +# calculate the next jump / jump time +function generate_jumps!(p::CoevolveJumpAggregation, integrator, u, params, t) + p.next_jump_time, p.next_jump = top_with_handle(p.pq) + nothing +end + +######################## SSA specific helper routines ######################## +function update_dependent_rates!(p::CoevolveJumpAggregation, u, params, t) + @inbounds deps = p.dep_gr[p.next_jump] + @unpack cur_rates, end_time, pq = p + for (ix, i) in enumerate(deps) + ti, last_urate_i = next_time(p, u, params, t, i, end_time) + update!(pq, i, ti) + @inbounds cur_rates[i] = last_urate_i + end + nothing +end + +@inline function get_ma_urate(p::CoevolveJumpAggregation, i, u, params, t) + return evalrxrate(u, i, p.ma_jumps) +end + +@inline function get_urate(p::CoevolveJumpAggregation, uidx, u, params, t) + @inbounds return p.urates[uidx](u, params, t) +end + +@inline function get_rateinterval(p::CoevolveJumpAggregation, lidx, u, params, t) + @inbounds return p.rateintervals[lidx](u, params, t) +end + +@inline function get_lrate(p::CoevolveJumpAggregation, lidx, u, params, t) + @inbounds return p.lrates[lidx](u, params, t) +end + +@inline function get_rate(p::CoevolveJumpAggregation, lidx, u, params, t) + @inbounds return p.rates[lidx](u, params, t) +end + +function next_time(p::CoevolveJumpAggregation{T}, u, params, t, i, tstop::T) where {T} + @unpack rng, haslratevec = p + num_majumps = get_num_majumps(p.ma_jumps) + num_cjumps = length(p.urates) - length(p.rates) + uidx = i - num_majumps + lidx = uidx - num_cjumps + urate = uidx > 0 ? get_urate(p, uidx, u, params, t) : get_ma_urate(p, i, u, params, t) + last_urate = p.cur_rates[i] + if i != p.next_jump && last_urate > zero(t) + s = urate == zero(t) ? typemax(t) : last_urate / urate * (p.pq[i] - t) + else + s = urate == zero(t) ? typemax(t) : randexp(rng) / urate + end + _t = t + s + if lidx > 0 + while t < tstop + rateinterval = get_rateinterval(p, lidx, u, params, t) + if s > rateinterval + t = t + rateinterval + urate = get_urate(p, uidx, u, params, t) + s = urate == zero(t) ? typemax(t) : randexp(rng) / urate + _t = t + s + continue + end + (_t >= tstop) && break + + lrate = haslratevec[lidx] ? 
get_lrate(p, lidx, u, params, t) : zero(t) + if lrate < urate + # when the lower and upper bound are the same, then v < 1 = lrate / urate = urate / urate + v = rand(rng) * urate + # first inequality is less expensive and short-circuits the evaluation + if (v > lrate) && (v > get_rate(p, lidx, u, params, _t)) + t = _t + urate = get_urate(p, uidx, u, params, t) + s = urate == zero(t) ? typemax(t) : randexp(rng) / urate + _t = t + s + continue + end + elseif lrate > urate + error("The lower bound should be lower than the upper bound rate for t = $(t) and i = $(i), but lower bound = $(lrate) > upper bound = $(urate)") + end + break + end + end + return _t, urate +end + +# reevaulate all rates, recalculate all jump times, and reinit the priority queue +function fill_rates_and_get_times!(p::CoevolveJumpAggregation, u, params, t) + @unpack end_time = p + num_jumps = get_num_majumps(p.ma_jumps) + length(p.urates) + p.cur_rates = zeros(typeof(t), num_jumps) + jump_times = Vector{typeof(t)}(undef, num_jumps) + @inbounds for i in 1:num_jumps + jump_times[i], p.cur_rates[i] = next_time(p, u, params, t, i, end_time) + end + p.pq = MutableBinaryMinHeap(jump_times) + nothing +end diff --git a/src/jumps.jl b/src/jumps.jl index 7f2a84e1..c51790f5 100644 --- a/src/jumps.jl +++ b/src/jumps.jl @@ -33,45 +33,120 @@ end """ $(TYPEDEF) -Defines a jump process with a rate (i.e. hazard, intensity, or propensity) that -may explicitly depend on time. More precisely, one where the rate function is -allowed to change *between* the occurrence of jumps. For detailed examples and -usage information see the +Defines a jump process with a rate (i.e. hazard, intensity, or propensity) that may +explicitly depend on time. More precisely, one where the rate function is allowed to change +*between* the occurrence of jumps. For detailed examples and usage information see the - [Tutorial](https://docs.sciml.ai/JumpProcesses/stable/tutorials/discrete_stochastic_example/) +Note that two types of `VariableRateJump`s are currently supported, with different +performance charactertistics. +- A general `VariableRateJump` or `VariableRateJump` will refer to one in which only `rate` + and `affect` functions are specified. + + * These are the most general in what they can represent, but require the use of an + `ODEProblem` or `SDEProblem` whose underlying timestepper handles their evolution in + time (via the callback interface). + * This is the least performant jump type in simulations. + +- Bounded `VariableRateJump`s require passing the keyword arguments `urate` and + `rateinterval`, corresponding to functions `urate(u, p, t)` and `rateinterval(u, p, t)`, + see below. These must calculate a time window over which the rate function is bounded by a + constant. Note that it is ok if the rate bound would be violated within the time interval + due to a change in `u` arising from another `ConstantRateJump`, `MassActionJump` or + *bounded* `VariableRateJump` being executed, as the chosen aggregator will then handle + recalculating the rate bound and interval. *However, if the bound could be violated within + the time interval due to a change in `u` arising from continuous dynamics such as a + coupled ODE, SDE, or a general `VariableRateJump`, bounds should not be given.* This + ensures the jump is classified as a general `VariableRateJump` and properly handled. One + can also optionally provide a lower bound function, `lrate(u, p, t)`, via the `lrate` + keyword argument. This can lead to increased performance. 
The validity of the lower bound + should hold under the same conditions and rate interval as `urate`. + + * Bounded `VariableRateJump`s can currently be used in the `Coevolve` aggregator, and + can therefore be efficiently simulated in pure-jump `DiscreteProblem`s using the + `SSAStepper` time-stepper. + * These can be substantially more performant than general `VariableRateJump`s without + the rate bound functions. + +Reemphasizing, the additional user provided functions leveraged by bounded +`VariableRateJumps`, `urate(u, p, t)`, `rateinterval(u, p, t)`, and the optional `lrate(u, +p, t)` require that +- For `s` in `[t, t + rateinterval(u, p, t)]`, we have that `lrate(u, p, t) <= rate(u, p, s) + <= urate(u, p, t)`. +- It is ok if these bounds would be violated during the time window due to another + `ConstantRateJump`, `MassActionJump` or bounded `VariableRateJump` occurring, however, + they must remaing valid if `u` changes for any other reason (for example, due to + continuous dynamics like ODEs, SDEs, or general `VariableRateJump`s). + ## Fields $(FIELDS) ## Examples -Suppose `u[1]` gives the amount of particles and `t*p[1]` the probability per -time each particle can decay away. A corresponding `VariableRateJump` for this -jump process is +Suppose `u[1]` gives the amount of particles and `t*p[1]` the probability per time each +particle can decay away. A corresponding `VariableRateJump` for this jump process is ```julia rate(u,p,t) = t*p[1]*u[1] affect!(integrator) = integrator.u[1] -= 1 -crj = VariableRateJump(rate, affect!) +vrj = VariableRateJump(rate, affect!) +``` + +To define a bounded `VariableRateJump` that can be used with supporting aggregators such as +`Coevolve`, we must define bounds and a rate interval: +```julia +rateinterval(u,p,t) = (1 / p[1]) * 2 +rate(u,p,t) = t * p[1] * u[1] +lrate(u, p, t) = rate(u, p, t) +urate(u,p,t) = rate(u, p, t + rateinterval(u,p,t)) +affect!(integrator) = integrator.u[1] -= 1 +vrj = VariableRateJump(rate, affect!; lrate = lrate, urate = urate, + rateinterval = rateinterval) ``` ## Notes -- **`VariableRateJump`s result in `integrator`s storing an effective state type - that wraps the main state vector.** See [`ExtendedJumpArray`](@ref) for - details on using this object. Note that the presence of *any* - `VariableRateJump`s will result in all `ConstantRateJump`, `VariableRateJump` - and callback `affect!` functions receiving an integrator with `integrator.u` - an [`ExtendedJumpArray`](@ref). -- Must be used with `ODEProblem`s or `SDEProblem`s to be correctly simulated - (i.e. can not currently be used with `DiscreteProblem`s). -- Salis H., Kaznessis Y., Accurate hybrid stochastic simulation of a system of - coupled chemical or biochemical reactions, Journal of Chemical Physics, 122 - (5), DOI:10.1063/1.1835951 is used for calculating jump times with - `VariableRateJump`s within ODE/SDE integrators. +- When using an aggregator that supports bounded `VariableRateJump`s, `DiscreteProblem` can + be used. Otherwise, `ODEProblem` or `SDEProblem` must be used. +- **When not using aggregators that support bounded `VariableRateJump`s, or when there are + general `VariableRateJump`s, `integrator`s store an effective state type that wraps the + main state vector.** See [`ExtendedJumpArray`](@ref) for details on using this object. In + this case all `ConstantRateJump`, `VariableRateJump` and callback `affect!` functions + receive an integrator with `integrator.u` an [`ExtendedJumpArray`](@ref). 
+- Salis H., Kaznessis Y., Accurate hybrid stochastic simulation of a system of coupled + chemical or biochemical reactions, Journal of Chemical Physics, 122 (5), + DOI:10.1063/1.1835951 is used for calculating jump times with `VariableRateJump`s within + ODE/SDE integrators. """ -struct VariableRateJump{R, F, I, T, T2} <: AbstractJump - """Function `rate(u,p,t)` that returns the jump's current rate.""" +struct VariableRateJump{R, F, R2, R3, R4, I, T, T2} <: AbstractJump + """Function `rate(u,p,t)` that returns the jump's current rate given state + `u`, parameters `p` and time `t`.""" rate::R - """Function `affect(integrator)` that updates the state for one occurrence of the jump.""" + """Function `affect!(integrator)` that updates the state for one occurrence + of the jump given `integrator`.""" affect!::F + """Optional function `lrate(u, p, t)` that computes a lower bound on the rate in the + interval `t` to `t + rateinterval(u, p, t)` at time `t` given state `u` and parameters + `p`. This bound must rigorously hold during the time interval as long as another + `ConstantRateJump`, `MassActionJump`, or *bounded* `VariableRateJump` has not been + sampled. When using aggregators that support bounded `VariableRateJump`s, currently only + `Coevolve`, providing a lower-bound can lead to improved performance. + """ + lrate::R2 + """Optional function `urate(u, p, t)` for general `VariableRateJump`s, but is required + to define a bounded `VariableRateJump`, which can be used with supporting aggregators, + currently only `Coevolve`, and offers improved computational performance. Computes an + upper bound for the rate in the interval `t` to `t + rateinterval(u, p, t)` at time `t` + given state `u` and parameters `p`. This bound must rigorously hold during the time + interval as long as another `ConstantRateJump`, `MassActionJump`, or *bounded* + `VariableRateJump` has not been sampled. """ + urate::R3 + """Optional function `rateinterval(u, p, t)` for general `VariableRateJump`s, but is + required to define a bounded `VariableRateJump`, which can be used with supporting + aggregators, currently only `Coevolve`, and offers improved computational performance. + Computes the time interval from time `t` over which the `urate` and `lrate` bounds will + hold, `t` to `t + rateinterval(u, p, t)`, given state `u` and parameters `p`. This bound + must rigorously hold during the time interval as long as another `ConstantRateJump`, + `MassActionJump`, or *bounded* `VariableRateJump` has not been sampled. 
""" + rateinterval::R4 idxs::I rootfind::Bool interp_points::Int @@ -80,17 +155,47 @@ struct VariableRateJump{R, F, I, T, T2} <: AbstractJump reltol::T2 end +isbounded(::VariableRateJump) = true +isbounded(::VariableRateJump{R, F, R2, Nothing}) where {R, F, R2} = false +haslrate(::VariableRateJump) = true +haslrate(::VariableRateJump{R, F, Nothing}) where {R, F} = false +nullrate(u, p, t::T) where {T <: Number} = zero(T) + +""" +``` +function VariableRateJump(rate, affect!; lrate = nothing, urate = nothing, + rateinterval = nothing, rootfind = true, + idxs = nothing, + save_positions = (false, true), + interp_points = 10, + abstol = 1e-12, reltol = 0) +``` +""" function VariableRateJump(rate, affect!; + lrate = nothing, urate = nothing, + rateinterval = nothing, rootfind = true, idxs = nothing, - rootfind = true, - save_positions = (true, true), + save_positions = (false, true), interp_points = 10, abstol = 1e-12, reltol = 0) - VariableRateJump(rate, affect!, idxs, - rootfind, interp_points, - save_positions, abstol, reltol) + if !(urate !== nothing && rateinterval !== nothing) && + !(urate === nothing && rateinterval === nothing) + error("`urate` and `rateinterval` must both be `nothing`, or must both be defined.") + end + + if lrate !== nothing + (urate !== nothing) || + error("If a lower bound rate, `lrate`, is given than an upper bound rate, `urate`, and rate interval, `rateinterval`, must also be provided.") + end + + VariableRateJump(rate, affect!, lrate, urate, rateinterval, idxs, rootfind, + interp_points, save_positions, abstol, reltol) end +""" +$(TYPEDEF) + +""" struct RegularJump{iip, R, C, MD} rate::R c::C @@ -128,8 +233,7 @@ action form, offering improved performance within jump algorithms compared to - [Tutorial](https://docs.sciml.ai/JumpProcesses/stable/tutorials/discrete_stochastic_example/) ### Constructors -- `MassActionJump(reactant_stoich, net_stoich; scale_rates=true, - param_idxs=nothing)` +- `MassActionJump(reactant_stoich, net_stoich; scale_rates = true, param_idxs = nothing)` Here `reactant_stoich` denotes the reactant stoichiometry for each reaction and `net_stoich` the net stoichiometry for each reaction. @@ -139,12 +243,12 @@ Here `reactant_stoich` denotes the reactant stoichiometry for each reaction and $(FIELDS) ## Keyword Arguments -- `scale_rates=true`, whether to rescale the reaction rate constants according +- `scale_rates = true`, whether to rescale the reaction rate constants according to the stoichiometry. -- `nocopy=false`, whether the `MassActionJump` can alias the `scaled_rates` and +- `nocopy = false`, whether the `MassActionJump` can alias the `scaled_rates` and `reactant_stoch` from the input. Note, if `scale_rates=true` this will potentially modify both of these. -- `param_idxs=nothing`, indexes in the parameter vector, `JumpProblem.prob.p`, +- `param_idxs = nothing`, indexes in the parameter vector, `JumpProblem.prob.p`, that correspond to each reaction's rate. See the tutorial and main docs for details. 
@@ -384,7 +488,7 @@ struct JumpSet{T1, T2, T3, T4} <: AbstractJump
     variable_jumps::T1
     """Collection of [`ConstantRateJump`](@ref)s"""
     constant_jumps::T2
-    """Collection of `RegularJump`s"""
+    """Collection of [`RegularJump`](@ref)s"""
     regular_jump::T3
     """Collection of [`MassActionJump`](@ref)s"""
     massaction_jump::T4
@@ -424,6 +528,27 @@ function JumpSet(vjs, cjs, rj, majv::Vector{T}) where {T <: MassActionJump}
 end
 
 @inline get_num_majumps(jset::JumpSet) = get_num_majumps(jset.massaction_jump)
+@inline num_majumps(jset::JumpSet) = get_num_majumps(jset)
+
+@inline function num_crjs(jset::JumpSet)
+    (jset.constant_jumps !== nothing) ? length(jset.constant_jumps) : 0
+end
+
+@inline function num_vrjs(jset::JumpSet)
+    (jset.variable_jumps !== nothing) ? length(jset.variable_jumps) : 0
+end
+
+@inline function num_bndvrjs(jset::JumpSet)
+    (jset.variable_jumps !== nothing) ? count(isbounded, jset.variable_jumps) : 0
+end
+
+@inline function num_continvrjs(jset::JumpSet)
+    (jset.variable_jumps !== nothing) ? count(!isbounded, jset.variable_jumps) : 0
+end
+
+num_jumps(jset::JumpSet) = num_majumps(jset) + num_crjs(jset) + num_vrjs(jset)
+num_discretejumps(jset::JumpSet) = num_majumps(jset) + num_crjs(jset) + num_bndvrjs(jset)
+num_cdiscretejumps(jset::JumpSet) = num_majumps(jset) + num_crjs(jset)
 
 @inline split_jumps(vj, cj, rj, maj) = vj, cj, rj, maj
 @inline function split_jumps(vj, cj, rj, maj, v::VariableRateJump, args...)
@@ -550,10 +675,10 @@ function massaction_jump_combine(maj1::MassActionJump, maj2::MassActionJump)
 end
 
 ##### helper methods for unpacking rates and affects! from constant jumps #####
-function get_jump_info_tuples(constant_jumps)
-    if (constant_jumps !== nothing) && !isempty(constant_jumps)
-        rates = ((c.rate for c in constant_jumps)...,)
-        affects! = ((c.affect! for c in constant_jumps)...,)
+function get_jump_info_tuples(jumps)
+    if (jumps !== nothing) && !isempty(jumps)
+        rates = ((c.rate for c in jumps)...,)
+        affects! = ((c.affect! for c in jumps)...,)
     else
         rates = ()
         affects! = ()
diff --git a/src/problem.jl b/src/problem.jl
index 92feb656..c1c03435 100644
--- a/src/problem.jl
+++ b/src/problem.jl
@@ -50,6 +50,9 @@ $(FIELDS)
   the jump occurs.
 - `spatial_system`, for spatial problems the underlying spatial structure.
 - `hopping_constants`, for spatial problems the spatial transition rate coefficients.
+- `use_vrj_bounds = true`, set to false to disable handling bounded `VariableRateJump`s
+  with a supporting aggregator (such as `Coevolve`). They will then be handled via the
+  continuous integration interface, and treated like general `VariableRateJump`s.
 
 Please see the [tutorial
 page](https://docs.sciml.ai/JumpProcesses/stable/tutorials/discrete_stochastic_example/) in the
@@ -166,7 +169,7 @@ function JumpProblem(prob, aggregator::AbstractAggregatorAlgorithm, jumps::JumpS
                      (false, true) : (true, true),
                      rng = DEFAULT_RNG, scale_rates = true, useiszero = true,
                      spatial_system = nothing, hopping_constants = nothing,
-                     callback = nothing, kwargs...)
+                     callback = nothing, use_vrj_bounds = true, kwargs...)
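A quick illustration of the `use_vrj_bounds` keyword documented above (not part of the patch): the toy ODE, the jump, and its bounds are made-up placeholders, while the keywords (`urate`, `rateinterval`, `dep_graph`, `use_vrj_bounds`), `Coevolve`, and the fallback behavior are taken from this PR and its tests.

```julia
using JumpProcesses, OrdinaryDiffEq

f!(du, u, p, t) = (du .= -0.1 .* u; nothing)   # toy drift; u stays positive
oprob = ODEProblem(f!, [10.0], (0.0, 10.0))

rate(u, p, t) = u[1]
urate(u, p, t) = u[1]            # valid: u only decays between jump firings
rateinterval(u, p, t) = 1.0
affect!(integrator) = (integrator.u[1] += 1; nothing)
vrj = VariableRateJump(rate, affect!; urate, rateinterval)

# default: the bounded jump is handled by Coevolve's discrete aggregation
jprob = JumpProblem(oprob, Coevolve(), vrj; dep_graph = [[1]])

# opt out: the same jump is instead handled via the continuous integration
# interface (rootfinding callbacks), as if it had no bounds
jprob_cont = JumpProblem(oprob, Coevolve(), vrj; dep_graph = [[1]],
                         use_vrj_bounds = false)

sol = solve(jprob, Tsit5())      # both problems are stepped with an ODE solver
```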
# initialize the MassActionJump rate constants with the user parameters if using_params(jumps.massaction_jump) @@ -182,48 +185,70 @@ function JumpProblem(prob, aggregator::AbstractAggregatorAlgorithm, jumps::JumpS ## Spatial jumps handling if spatial_system !== nothing && hopping_constants !== nothing && - !is_spatial(aggregator) # check if need to flatten + !is_spatial(aggregator) + (num_crjs(jumps) == num_vrjs(jumps) == 0) || + error("Spatial aggregators only support MassActionJumps currently.") prob, maj = flatten(maj, prob, spatial_system, hopping_constants; kwargs...) end - ## Constant Rate Handling + + if is_spatial(aggregator) + (num_crjs(jumps) == num_vrjs(jumps) == 0) || + error("Spatial aggregators only support MassActionJumps currently.") + kwargs = merge((; hopping_constants, spatial_system), kwargs) + end + + ndiscjumps = get_num_majumps(maj) + num_crjs(jumps) + + # separate bounded variable rate jumps *if* the aggregator can use them + if use_vrj_bounds && supports_variablerates(aggregator) && (num_bndvrjs(jumps) > 0) + bvrjs = filter(isbounded, jumps.variable_jumps) + cvrjs = filter(!isbounded, jumps.variable_jumps) + kwargs = merge((; variable_jumps = bvrjs), kwargs) + ndiscjumps += length(bvrjs) + else + bvrjs = nothing + cvrjs = jumps.variable_jumps + end + t, end_time, u = prob.tspan[1], prob.tspan[2], prob.u0 - if (typeof(jumps.constant_jumps) <: Tuple{}) && (maj === nothing) && - !is_spatial(aggregator) # check if there are no jumps - disc = nothing + + # handle majs, crjs, and bounded vrjs + if (ndiscjumps == 0) && !is_spatial(aggregator) + disc_agg = nothing constant_jump_callback = CallbackSet() else - disc = aggregate(aggregator, u, prob.p, t, end_time, jumps.constant_jumps, maj, - save_positions, rng; spatial_system = spatial_system, - hopping_constants = hopping_constants, kwargs...) - constant_jump_callback = DiscreteCallback(disc) + disc_agg = aggregate(aggregator, u, prob.p, t, end_time, jumps.constant_jumps, maj, + save_positions, rng; kwargs...) 
+ constant_jump_callback = DiscreteCallback(disc_agg) end - iip = isinplace_jump(prob, jumps.regular_jump) - - ## Variable Rate Handling - if typeof(jumps.variable_jumps) <: Tuple{} + # handle any remaining vrjs + if length(cvrjs) > 0 + new_prob = extend_problem(prob, cvrjs; rng) + variable_jump_callback = build_variable_callback(CallbackSet(), 0, cvrjs...; rng) + cont_agg = cvrjs + else new_prob = prob variable_jump_callback = CallbackSet() - else - new_prob = extend_problem(prob, jumps; rng = rng) - variable_jump_callback = build_variable_callback(CallbackSet(), 0, - jumps.variable_jumps...; rng = rng) + cont_agg = JumpSet().variable_jumps end - jump_cbs = CallbackSet(constant_jump_callback, variable_jump_callback) + jump_cbs = CallbackSet(constant_jump_callback, variable_jump_callback) + iip = isinplace_jump(prob, jumps.regular_jump) solkwargs = make_kwarg(; callback) + JumpProblem{iip, typeof(new_prob), typeof(aggregator), - typeof(jump_cbs), typeof(disc), - typeof(jumps.variable_jumps), + typeof(jump_cbs), typeof(disc_agg), + typeof(cont_agg), typeof(jumps.regular_jump), - typeof(maj), typeof(rng), typeof(solkwargs)}(new_prob, aggregator, disc, - jump_cbs, jumps.variable_jumps, + typeof(maj), typeof(rng), typeof(solkwargs)}(new_prob, aggregator, disc_agg, + jump_cbs, cont_agg, jumps.regular_jump, maj, rng, solkwargs) end function extend_problem(prob::DiffEqBase.AbstractDiscreteProblem, jumps; rng = DEFAULT_RNG) - error("VariableRateJumps require a continuous problem, like an ODE/SDE/DDE/DAE problem.") + error("General `VariableRateJump`s require a continuous problem, like an ODE/SDE/DDE/DAE problem. To use a `DiscreteProblem` bounded `VariableRateJump`s must be used. See the JumpProcesses docs.") end function extend_problem(prob::DiffEqBase.AbstractODEProblem, jumps; rng = DEFAULT_RNG) @@ -232,19 +257,19 @@ function extend_problem(prob::DiffEqBase.AbstractODEProblem, jumps; rng = DEFAUL jump_f = let _f = _f function jump_f(du::ExtendedJumpArray, u::ExtendedJumpArray, p, t) _f(du.u, u.u, p, t) - update_jumps!(du, u, p, t, length(u.u), jumps.variable_jumps...) + update_jumps!(du, u, p, t, length(u.u), jumps...) end end ttype = eltype(prob.tspan) u0 = ExtendedJumpArray(prob.u0, - [-randexp(rng, ttype) for i in 1:length(jumps.variable_jumps)]) + [-randexp(rng, ttype) for i in 1:length(jumps)]) remake(prob, f = ODEFunction{true}(jump_f), u0 = u0) end function extend_problem(prob::DiffEqBase.AbstractSDEProblem, jumps; rng = DEFAULT_RNG) function jump_f(du, u, p, t) prob.f(du.u, u.u, p, t) - update_jumps!(du, u, p, t, length(u.u), jumps.variable_jumps...) + update_jumps!(du, u, p, t, length(u.u), jumps...) end if prob.noise_rate_prototype === nothing @@ -259,30 +284,30 @@ function extend_problem(prob::DiffEqBase.AbstractSDEProblem, jumps; rng = DEFAUL ttype = eltype(prob.tspan) u0 = ExtendedJumpArray(prob.u0, - [-randexp(rng, ttype) for i in 1:length(jumps.variable_jumps)]) + [-randexp(rng, ttype) for i in 1:length(jumps)]) remake(prob, f = SDEFunction{true}(jump_f, jump_g), g = jump_g, u0 = u0) end function extend_problem(prob::DiffEqBase.AbstractDDEProblem, jumps; rng = DEFAULT_RNG) jump_f = function (du, u, h, p, t) prob.f(du.u, u.u, h, p, t) - update_jumps!(du, u, p, t, length(u.u), jumps.variable_jumps...) + update_jumps!(du, u, p, t, length(u.u), jumps...) 
end ttype = eltype(prob.tspan) u0 = ExtendedJumpArray(prob.u0, - [-randexp(rng, ttype) for i in 1:length(jumps.variable_jumps)]) - ramake(prob, f = DDEFunction{true}(jump_f), u0 = u0) + [-randexp(rng, ttype) for i in 1:length(jumps)]) + remake(prob, f = DDEFunction{true}(jump_f), u0 = u0) end # Not sure if the DAE one is correct: Should be a residual of sorts function extend_problem(prob::DiffEqBase.AbstractDAEProblem, jumps; rng = DEFAULT_RNG) jump_f = function (out, du, u, p, t) prob.f(out.u, du.u, u.u, t) - update_jumps!(du, u, t, length(u.u), jumps.variable_jumps...) + update_jumps!(du, u, t, length(u.u), jumps...) end ttype = eltype(prob.tspan) u0 = ExtendedJumpArray(prob.u0, - [-randexp(rng, ttype) for i in 1:length(jumps.variable_jumps)]) + [-randexp(rng, ttype) for i in 1:length(jumps)]) remake(prob, f = DAEFunction{true}(jump_f), u0 = u0) end @@ -324,7 +349,7 @@ function build_variable_callback(cb, idx, jump; rng = DEFAULT_RNG) CallbackSet(cb, new_cb) end -aggregator(jp::JumpProblem{P, A, C, J, J2}) where {P, A, C, J, J2} = A +aggregator(jp::JumpProblem{iip, P, A, C, J}) where {iip, P, A, C, J} = A @inline function extend_tstops!(tstops, jp::JumpProblem{P, A, C, J, J2}) where {P, A, C, J, J2} @@ -358,10 +383,10 @@ end function Base.show(io::IO, mime::MIME"text/plain", A::JumpProblem) summary(io, A) println(io) - println(io, "Number of constant rate jumps: ", + println(io, "Number of jumps with discrete aggregation: ", A.discrete_jump_aggregation === nothing ? 0 : num_constant_rate_jumps(A.discrete_jump_aggregation)) - println(io, "Number of variable rate jumps: ", length(A.variable_jumps)) + println(io, "Number of jumps with continuous aggregation: ", length(A.variable_jumps)) nmajs = (A.massaction_jump !== nothing) ? get_num_majumps(A.massaction_jump) : 0 println(io, "Number of mass action jumps: ", nmajs) if A.regular_jump !== nothing diff --git a/test/bimolerx_test.jl b/test/bimolerx_test.jl index 5673fcf2..e91c04ea 100644 --- a/test/bimolerx_test.jl +++ b/test/bimolerx_test.jl @@ -15,7 +15,7 @@ doprintmeans = false # SSAs to test SSAalgs = (RDirect(), RSSACR(), Direct(), DirectFW(), FRM(), FRMFW(), SortingDirect(), - NRM(), RSSA(), DirectCR()) + NRM(), RSSA(), DirectCR(), Coevolve()) Nsims = 32000 tf = 0.01 diff --git a/test/degenerate_rx_cases.jl b/test/degenerate_rx_cases.jl index 4fede6f1..b81bb2b3 100644 --- a/test/degenerate_rx_cases.jl +++ b/test/degenerate_rx_cases.jl @@ -13,7 +13,7 @@ doprint = false doplot = false methods = (RDirect(), RSSACR(), Direct(), DirectFW(), FRM(), FRMFW(), SortingDirect(), - NRM(), RSSA(), DirectCR()) + NRM(), RSSA(), DirectCR(), Coevolve()) # one reaction case, mass action jump, vector of data rate = [2.0] diff --git a/test/geneexpr_test.jl b/test/geneexpr_test.jl index 88b5dfda..24e1e414 100644 --- a/test/geneexpr_test.jl +++ b/test/geneexpr_test.jl @@ -13,7 +13,7 @@ doprintmeans = false # SSAs to test SSAalgs = (RDirect(), RSSACR(), Direct(), DirectFW(), FRM(), FRMFW(), SortingDirect(), - NRM(), RSSA(), DirectCR()) + NRM(), RSSA(), DirectCR(), Coevolve()) # numerical parameters Nsims = 8000 diff --git a/test/hawkes_test.jl b/test/hawkes_test.jl new file mode 100644 index 00000000..0de428e3 --- /dev/null +++ b/test/hawkes_test.jl @@ -0,0 +1,167 @@ +using JumpProcesses, OrdinaryDiffEq, Statistics +using Test +using StableRNGs +rng = StableRNG(12345) + +function reset_history!(h; start_time = nothing) + @inbounds for i in 1:length(h) + h[i] = eltype(h)[] + end + nothing +end + +function empirical_rate(sol) + return (sol(sol.t[end]) - 
sol(sol.t[1])) / (sol.t[end] - sol.t[1]) +end + +function hawkes_rate(i::Int, g, h) + function rate(u, p, t) + λ, α, β = p + x = zero(typeof(t)) + for j in g[i] + for _t in reverse(h[j]) + λij = α * exp(-β * (t - _t)) + if λij ≈ 0 + break + end + x += λij + end + end + return λ + x + end + return rate +end + +function hawkes_jump(i::Int, g, h; uselrate = true) + rate = hawkes_rate(i, g, h) + urate = rate + if uselrate + lrate(u, p, t) = p[1] + rateinterval = (u, p, t) -> begin + _lrate = lrate(u, p, t) + _urate = urate(u, p, t) + return _urate == _lrate ? typemax(t) : 1 / (2 * _urate) + end + else + lrate = nothing + rateinterval = (u, p, t) -> begin + _urate = urate(u, p, t) + return 1 / (2 * _urate) + end + end + function affect!(integrator) + push!(h[i], integrator.t) + integrator.u[i] += 1 + end + return VariableRateJump(rate, affect!; lrate, urate, rateinterval) +end + +function hawkes_jump(u, g, h; uselrate = true) + return [hawkes_jump(i, g, h; uselrate) for i in 1:length(u)] +end + +function hawkes_problem(p, agg::Coevolve; u = [0.0], tspan = (0.0, 50.0), + save_positions = (false, true), + g = [[1]], h = [[]], uselrate = true) + dprob = DiscreteProblem(u, tspan, p) + jumps = hawkes_jump(u, g, h; uselrate) + jprob = JumpProblem(dprob, agg, jumps...; dep_graph = g, save_positions, rng) + return jprob +end + +function f!(du, u, p, t) + du .= 0 + nothing +end + +function hawkes_problem(p, agg; u = [0.0], tspan = (0.0, 50.0), + save_positions = (false, true), + g = [[1]], h = [[]], kwargs...) + oprob = ODEProblem(f!, u, tspan, p) + jumps = hawkes_jump(u, g, h) + jprob = JumpProblem(oprob, agg, jumps...; save_positions, rng) + return jprob +end + +function expected_stats_hawkes_problem(p, tspan) + T = tspan[end] - tspan[1] + λ, α, β = p + γ = β - α + κ = β / γ + Eλ = λ * κ + # Equation 21 + # J. Da Fonseca and R. Zaatour, + # “Hawkes Process: Fast Calibration, Application to Trade Clustering and Diffusive Limit.” + # Rochester, NY, Aug. 04, 2013. doi: 10.2139/ssrn.2294112. 
+ Varλ = (Eλ * (T * κ^2 + (1 - κ^2) * (1 - exp(-T * γ)) / γ)) / (T^2) + return Eλ, Varλ +end + +u0 = [0.0] +p = (0.5, 0.5, 2.0) +tspan = (0.0, 200.0) +g = [[1]] +h = [Float64[]] + +Eλ, Varλ = expected_stats_hawkes_problem(p, tspan) + +algs = (Direct(), Coevolve(), Coevolve()) +uselrate = zeros(Bool, length(algs)) +uselrate[3] = true +Nsims = 250 + +for (i, alg) in enumerate(algs) + jump_prob = hawkes_problem(p, alg; u = u0, tspan, g, h, uselrate = uselrate[i]) + if typeof(alg) <: Coevolve + stepper = SSAStepper() + else + stepper = Tsit5() + end + sols = Vector{ODESolution}(undef, Nsims) + for n in 1:Nsims + reset_history!(h) + sols[n] = solve(jump_prob, stepper) + end + if typeof(alg) <: Coevolve + λs = permutedims(mapreduce((sol) -> empirical_rate(sol), hcat, sols)) + else + cols = length(sols[1].u[1].u) + λs = permutedims(mapreduce((sol) -> empirical_rate(sol), hcat, sols))[:, 1:cols] + end + @test isapprox(mean(λs), Eλ; atol = 0.01) + @test isapprox(var(λs), Varλ; atol = 0.001) +end + +# test stepping Coevolve with continuous integrator and bounded jumps +let + oprob = ODEProblem(f!, u0, tspan, p) + jumps = hawkes_jump(u0, g, h) + jprob = JumpProblem(oprob, Coevolve(), jumps...; dep_graph = g, rng) + @test ((jprob.variable_jumps === nothing) || isempty(jprob.variable_jumps)) + sols = Vector{ODESolution}(undef, Nsims) + for n in 1:Nsims + reset_history!(h) + sols[n] = solve(jprob, Tsit5()) + end + λs = permutedims(mapreduce((sol) -> empirical_rate(sol), hcat, sols)) + @test isapprox(mean(λs), Eλ; atol = 0.01) + @test isapprox(var(λs), Varλ; atol = 0.001) +end + +# test disabling bounded jumps and using continuous integrator +let + oprob = ODEProblem(f!, u0, tspan, p) + jumps = hawkes_jump(u0, g, h) + jprob = JumpProblem(oprob, Coevolve(), jumps...; dep_graph = g, rng, + use_vrj_bounds = false) + @test length(jprob.variable_jumps) == 1 + sols = Vector{ODESolution}(undef, Nsims) + for n in 1:Nsims + reset_history!(h) + sols[n] = solve(jprob, Tsit5()) + end + cols = length(sols[1].u[1].u) + λs = permutedims(mapreduce((sol) -> empirical_rate(sol), hcat, sols))[:, 1:cols] + @test isapprox(mean(λs), Eλ; atol = 0.01) + @test isapprox(var(λs), Varλ; atol = 0.001) +end diff --git a/test/linearreaction_test.jl b/test/linearreaction_test.jl index 787add34..d169b571 100644 --- a/test/linearreaction_test.jl +++ b/test/linearreaction_test.jl @@ -309,7 +309,7 @@ for method in SSAalgs end # for dependency graph methods just test with mass action jumps -SSAalgs = [RDirect(), NRM(), SortingDirect(), DirectCR()] +SSAalgs = [RDirect(), NRM(), SortingDirect(), DirectCR(), Coevolve()] jump_prob_gens = [A_to_B_ma] for method in SSAalgs for jump_prob_gen in jump_prob_gens diff --git a/test/runtests.jl b/test/runtests.jl index 1c89f66f..1b9a7b21 100644 --- a/test/runtests.jl +++ b/test/runtests.jl @@ -21,6 +21,7 @@ using JumpProcesses, DiffEqBase, SafeTestsets @time @safetestset "A + B <--> C" begin include("reversible_binding.jl") end @time @safetestset "Remake tests" begin include("remake_test.jl") end @time @safetestset "Long time accuracy test" begin include("longtimes_test.jl") end + @time @safetestset "Hawkes process" begin include("hawkes_test.jl") end @time @safetestset "Reaction rates" begin include("spatial/reaction_rates.jl") end @time @safetestset "Hop rates" begin include("spatial/hop_rates.jl") end @time @safetestset "Topology" begin include("spatial/topology.jl") end
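Since the new Hawkes test above is fairly involved, here is a condensed single-node version of the same construction for trying the bounded-`VariableRateJump` + `Coevolve` path by hand. It is not part of the patch: the `sum`-based intensity and the global history vector are simplifications of the helpers in `test/hawkes_test.jl`, while the bound choices (`lrate` equal to the baseline, `urate` equal to the current intensity, `rateinterval = 1/(2 urate)`) mirror that file.

```julia
using JumpProcesses

p = (0.5, 0.5, 2.0)            # λ (baseline), α (excitation), β (decay), with α < β
h = Float64[]                  # event history for the single node (not reset between solves)

# conditional intensity λ*(t) = λ + Σ_{tj < t} α exp(-β (t - tj))
rate(u, p, t) = p[1] + sum(tj -> p[2] * exp(-p[3] * (t - tj)), h; init = 0.0)
lrate(u, p, t) = p[1]                            # intensity never drops below the baseline
urate(u, p, t) = rate(u, p, t)                   # intensity only decays between events
rateinterval(u, p, t) = 1 / (2 * urate(u, p, t))

affect!(integrator) = (push!(h, integrator.t); integrator.u[1] += 1; nothing)
jump = VariableRateJump(rate, affect!; lrate, urate, rateinterval)

dprob = DiscreteProblem([0.0], (0.0, 200.0), p)
jprob = JumpProblem(dprob, Coevolve(), jump; dep_graph = [[1]])
sol = solve(jprob, SSAStepper())

# the empirical event rate should be near the long-run mean intensity
# λ β / (β - α) = 2/3 here (the Eλ used in the test above)
sol.u[end][1] / 200.0
```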