Merge pull request #281 from ArnoStrouwen/LT
[skip ci] LanguageTool
ChrisRackauckas authored Jan 5, 2023
2 parents 07a66bb + eca504c commit 2afd123
Showing 8 changed files with 124 additions and 120 deletions.
16 changes: 8 additions & 8 deletions docs/src/faq.md
Original file line number Diff line number Diff line change
@@ -36,7 +36,7 @@ jump_prob = JumpProblem(prob, Direct(), jset)
```julia
sol = solve(jump_prob, SSAStepper())
```

If you have many jumps in tuples or vectors it is easiest to use the keyword
If you have many jumps in tuples or vectors, it is easiest to use the keyword
argument-based constructor:
```julia
cj1 = ConstantRateJump(rate1, affect1!)
```
@@ -65,7 +65,7 @@ jprob = JumpProblem(dprob, Direct(), maj,
uses the `Xoroshiro128Star` generator from
[RandomNumbers.jl](https://github.com/JuliaRandom/RandomNumbers.jl).

On version 1.7 and up JumpProcesses uses Julia's builtin random number generator by
On version 1.7 and up, JumpProcesses uses Julia's builtin random number generator by
default. On versions below 1.7 it uses `Xoroshiro128Star`.

## What are these aggregators and aggregations in JumpProcesses?
@@ -76,14 +76,14 @@ jump type happens at that time. These methods are examples of stochastic
simulation algorithms (SSAs), also known as Gillespie methods, Doob's method, or
Kinetic Monte Carlo methods. These are all names for jump (or point) processes
simulation methods used across the biology, chemistry, engineering, mathematics,
and physics literature. In the JumpProcesses terminology we call such methods
and physics literature. In the JumpProcesses terminology, we call such methods
"aggregators", and the cache structures that hold their basic data
"aggregations". See [Jump Aggregators for Exact Simulation](@ref) for a list of
the available SSA aggregators.

## How should jumps be ordered in dependency graphs?
Internally, JumpProcesses SSAs (aggregators) order all `MassActionJump`s first,
then all `ConstantRateJumps` and/or `VariableRateJumps`. i.e. in the example
then all `ConstantRateJumps` and/or `VariableRateJumps`. i.e., in the example

```julia
using JumpProcesses
```
@@ -115,12 +115,12 @@ more on dependency graphs needed for the various SSAs.
Callbacks can be used with `ConstantRateJump`s, `MassActionJump`s, and
`VariableRateJump`s. When solving a pure jump system with `SSAStepper`, only
discrete callbacks can be used (otherwise a different time stepper is needed).
When using an ODE or SDE time stepper any callback should work.
When using an ODE or SDE time stepper, any callback should work.

*Note, when modifying `u` or `p` within a callback, you must call
[`reset_aggregated_jumps!`](@ref) after making updates.* This ensures that the
underlying jump simulation algorithms know to reinitialize their internal data
structures. Leaving out this call will lead to incorrect behavior!
structures. Omitting this call will lead to incorrect behavior!

A simple example that uses a `MassActionJump` and changes the parameters at a
specified time in the simulation using a `DiscreteCallback` is
@@ -151,10 +151,10 @@ of `u[1]`, giving

## How can I access earlier solution values in callbacks?
When using an ODE or SDE time-stepper that conforms to the [integrator
interface](https://docs.sciml.ai/DiffEqDocs/stable/basics/integrator/) one
interface](https://docs.sciml.ai/DiffEqDocs/stable/basics/integrator/), one
can simply use `integrator.uprev`. For efficiency reasons, the pure jump
[`SSAStepper`](@ref) integrator does not have such a field. If one needs
solution components at earlier times one can save them within the callback
solution components at earlier times, one can save them within the callback
condition by making a functor:
```julia
# stores the previous value of u[2] and represents the callback functions
```
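The functor pattern referenced above (its body is elided in this diff view) can be sketched in plain Julia. The struct and condition logic here are hypothetical illustrations, not code from the docs: the functor itself stores the previous value of `u[2]`, since the pure-jump `SSAStepper` has no `uprev` field.

```julia
# Hypothetical sketch: a callable struct that records u[2] from the
# previous condition evaluation, standing in for integrator.uprev.
mutable struct PrevU2Saver
    prev_u2::Float64   # value of u[2] at the previous condition call
end

# Matches the DiscreteCallback condition signature (u, t, integrator);
# as an example, the condition fires when u[2] changed since last call.
function (s::PrevU2Saver)(u, t, integrator)
    changed = u[2] != s.prev_u2
    s.prev_u2 = u[2]   # save for the next evaluation
    return changed
end

saver = PrevU2Saver(0.0)
# saver would then be passed as, e.g., DiscreteCallback(saver, affect!)
```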
10 changes: 5 additions & 5 deletions docs/src/index.md
@@ -1,8 +1,8 @@
# JumpProcesses.jl: Stochastic Simulation Algorithms for Jump Processes, Jump-ODEs, and Jump-Diffusions
JumpProcesses.jl, formerly DiffEqJump.jl, provides methods for simulating jump
(or point) processes. Across different fields of science such methods are also
(or point) processes. Across different fields of science, such methods are also
known as stochastic simulation algorithms (SSAs), Doob's method, Gillespie
methods, or Kinetic Monte Carlo methods . It also enables the incorporation of
methods, or Kinetic Monte Carlo methods. It also enables the incorporation of
jump processes into hybrid jump-ODE and jump-SDE models, including jump
diffusions.

@@ -12,7 +12,7 @@ and one of the core solver libraries included in

The documentation includes
- [a tutorial on simulating basic Poisson processes](@ref poisson_proc_tutorial)
- [a tutorial and details on using JumpProcesses to simulate jump processes via SSAs (i.e. Gillespie methods)](@ref ssa_tutorial),
- [a tutorial and details on using JumpProcesses to simulate jump processes via SSAs (i.e., Gillespie methods)](@ref ssa_tutorial),
- [a tutorial on simulating jump-diffusion processes](@ref jump_diffusion_tutorial),
- [a reference on the types of jumps and available simulation methods](@ref jump_problem_type),
- [a reference on jump time stepping methods](@ref jump_solve)
@@ -24,14 +24,14 @@ There are two ways to install `JumpProcesses.jl`. First, users may install the m
`DifferentialEquations.jl` package, which installs and wraps `OrdinaryDiffEq.jl`
for solving ODEs, `StochasticDiffEq.jl` for solving SDEs, and `JumpProcesses.jl`,
along with a number of other useful packages for solving models involving ODEs,
SDEs and/or jump process. This single install will provide the user with all of
SDEs and/or jump processes. This single install will provide the user with all
the facilities for developing and solving Jump problems.

To install the `DifferentialEquations.jl` package, refer to the following link
for complete [installation
details](https://docs.sciml.ai/DiffEqDocs/stable).

If the user wishes to separately install the `JumpProcesses.jl` library, which is a
If the user wishes to install the `JumpProcesses.jl` library separately, which is a
lighter dependency than `DifferentialEquations.jl`, then the following code will
install `JumpProcesses.jl` using the Julia package manager:
```julia
```
6 changes: 3 additions & 3 deletions docs/src/jump_solve.md
@@ -18,13 +18,13 @@ use with exact simulation methods can be defined as `ConstantRateJump`s,
τ-leaping methods should be defined as `RegularJump`s.

There are special algorithms available for efficiently simulating an exact, pure
`JumpProblem` (i.e. a `JumpProblem` over a `DiscreteProblem`). `SSAStepper()`
`JumpProblem` (i.e., a `JumpProblem` over a `DiscreteProblem`). `SSAStepper()`
is an efficient streamlined integrator for time stepping such problems from
individual jump to jump. This integrator is named after Stochastic Simulation
Algorithms (SSAs), the common name in chemistry and biology applications
for the class of exact jump process simulation algorithms. In turn, we denote by
"aggregators" the algorithms that `SSAStepper` calls to calculate the next jump
time and to execute a jump (i.e. change the system state appropriately). All
time and to execute a jump (i.e., change the system state appropriately). All
JumpProcesses aggregators can be used with `ConstantRateJump`s and
`MassActionJump`s, with a subset of aggregators also working with bounded
`VariableRateJump`s (see [the first tutorial](@ref poisson_proc_tutorial) for
@@ -35,7 +35,7 @@ performant `FunctionMap` time-stepper can be used.

If there is a `RegularJump`, then inexact τ-leaping methods must be used. The
current recommended method is `TauLeaping` if one needs adaptivity, events, etc.
If ones only needs the most barebones fixed time-step leaping method, then
If one only needs the most barebones fixed time-step leaping method, then
`SimpleTauLeaping` can have performance benefits.
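The fixed time-step leaping idea can be illustrated with a toy pure-birth process. This is a hypothetical sketch, not the `SimpleTauLeaping` implementation: each leap of length τ draws a Poisson number of events with mean rate·τ and applies them all at once.

```julia
using Random

# Toy fixed-step tau-leaping for a pure birth process X -> X + 1 with
# rate k*X (illustrative only, not the JumpProcesses algorithm).
# Poisson draws use Knuth's algorithm, adequate for small rate*tau.
function knuth_poisson(rng, lam)
    L = exp(-lam)
    k = 0
    p = 1.0
    while true
        p *= rand(rng)          # multiply uniforms until falling below e^-lam
        p < L && return k
        k += 1
    end
end

function tau_leap_birth(x0, k, tau, nsteps; rng = MersenneTwister(1))
    x = x0
    for _ in 1:nsteps
        x += knuth_poisson(rng, k * x * tau)  # births during this leap
    end
    return x
end

tau_leap_birth(10, 0.1, 0.01, 100)  # grows stochastically from 10
```

Shrinking `tau` (with more steps) moves the approximation toward the exact process, which is the convergence trade-off described above.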

## Special Methods for Pure Jump Problems
34 changes: 17 additions & 17 deletions docs/src/jump_types.md
@@ -39,19 +39,19 @@ jump events occur. These jumps can be specified as a [`ConstantRateJump`](@ref),
[`MassActionJump`](@ref), or a [`VariableRateJump`](@ref).

Each individual type of jump that can occur is represented through (implicitly
or explicitly) specifying two pieces of information; a `rate` function (i.e.
or explicitly) specifying two pieces of information: a `rate` function (i.e.,
intensity or propensity) for the jump and an `affect!` function for the jump.
The former gives the probability per time a particular jump can occur given the
current state of the system, and hence determines the time at which jumps can
happen. The later specifies the instantaneous change in the state of the system
happen. The latter specifies the instantaneous change in the state of the system
when the jump occurs.

A specific jump type is a [`VariableRateJump`](@ref) if its rate function is
dependent on values which may change between the occurrence of any two jump
events of the process. Examples include jumps where the rate is an explicit
function of time, or depends on a state variable that is modified via continuous
dynamics such as an ODE or SDE. Such "general" `VariableRateJump`s can be
expensive to simulate because it is necessary to take into account the (possibly
expensive to simulate because it is necessary to consider the (possibly
continuous) changes in the rate function when calculating the next jump time.

*Bounded* [`VariableRateJump`](@ref)s represent a special subset of
@@ -84,12 +84,12 @@ discrete steps through time, over which they simultaneously execute many jumps.
These methods can be much faster as they do not need to simulate the realization
of every individual jump event. τ-leaping methods trade accuracy for speed, and
are best used when a set of jumps do not make significant changes to the
processes' state and/or rates over the course of one time-step (i.e. during a
processes' state and/or rates over the course of one time-step (i.e., during a
leap interval). A single [`RegularJump`](@ref) is used to encode jumps for
τ-leaping algorithms. While τ-leaping methods can be proven to converge in the
limit that the time-step approaches zero, their accuracy can be highly dependent
on the chosen time-step. As a rule of thumb, if changes to the state variable
`u` during a time-step (i.e. leap interval) are "minimal" compared to size of
`u` during a time-step (i.e., leap interval) are "minimal" compared to the size of
the system, a τ-leaping method can often provide reasonable solution
approximations.

@@ -145,7 +145,7 @@ MassActionJump(reactant_stoich, net_stoich; scale_rates = true, param_idxs=nothi
``3A \overset{k}{\rightarrow} B`` the rate function would be
`k*A*(A-1)*(A-2)/3!`. To *avoid* having the reaction rates rescaled (by `1/2`
and `1/6` for these two examples), one can pass the `MassActionJump`
constructor the optional named parameter `scale_rates = false`, i.e. use
constructor the optional named parameter `scale_rates = false`, i.e., use
```julia
MassActionJump(reactant_stoich, net_stoich; scale_rates = false, param_idxs)
```
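As an arithmetic check of the rescaling just described, here is a small helper (hypothetical, not part of the JumpProcesses API) computing the combinatorially scaled rate law for `n` molecules of a single substrate:

```julia
# Hypothetical helper, not a JumpProcesses function: the scaled
# mass-action rate k * A*(A-1)*...*(A-n+1) / n!, matching the
# 3A -> B example whose rate is k*A*(A-1)*(A-2)/3!.
function scaled_mass_action_rate(k, A, n)
    r = float(k)
    for i in 0:(n - 1)
        r *= (A - i)          # falling factorial A*(A-1)*...*(A-n+1)
    end
    return r / factorial(n)   # the 1/n! that scale_rates = true applies
end

scaled_mass_action_rate(2.0, 5, 3)  # 2 * (5*4*3) / 3! = 20.0
```

Passing `scale_rates = false` corresponds to skipping the `1/n!` division in this sketch.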
@@ -158,7 +158,7 @@ MassActionJump(reactant_stoich, net_stoich; scale_rates = true, param_idxs=nothi
```julia
net_stoich = [[1 => 1]]
jump = MassActionJump(reactant_stoich, net_stoich; param_idxs=[1])
```
Alternatively one can create an empty vector of pairs to represent the reaction:
Alternatively, one can create an empty vector of pairs to represent the reaction:
```julia
p = [1.]
reactant_stoich = [Vector{Pair{Int,Int}}()]
```
@@ -222,7 +222,7 @@ Note that
- It is currently only possible to simulate `VariableRateJump`s with
`SSAStepper` when using systems with only bounded `VariableRateJump`s and the
`Coevolve` aggregator.
- When choosing a different aggregator than `Coevolve`, `SSAStepper` can not
- When choosing a different aggregator than `Coevolve`, `SSAStepper` cannot
currently be used, and the `JumpProblem` must be coupled to a continuous
problem type such as an `ODEProblem` to handle time-stepping. The continuous
time-stepper treats *all* `VariableRateJump`s as `ContinuousCallback`s, using
@@ -242,7 +242,7 @@ RegularJump(rate, c, numjumps; mark_dist = nothing)
jump process
- `c(du, u, p, t, counts, mark)` calculates the update given `counts` number of
jumps for each jump process in the interval.
- `numjumps` is the number of jump processes, i.e. the number of `rate`
- `numjumps` is the number of jump processes, i.e., the number of `rate`
equations and the number of `counts`.
- `mark_dist` is the distribution for a mark.

@@ -300,24 +300,24 @@ aggregator requires various types of dependency graphs, see the next section):
aggregator uses a different internal storage format for collections of
`ConstantRateJumps`.
- *`DirectCR`*: The Composition-Rejection Direct method of Slepoy et al [2]. For
large networks and linear chain-type networks it will often give better
large networks and linear chain-type networks, it will often give better
performance than `Direct`.
- *`SortingDirect`*: The Sorting Direct Method of McCollum et al [3]. It will
usually offer performance as good as `Direct`, and for some systems can offer
substantially better performance.
- *`RSSA`*: The Rejection SSA (RSSA) method of Thanh et al [4,5]. With `RSSACR`,
for very large reaction networks it often offers the best performance of all
for very large reaction networks, it often offers the best performance of all
methods.
- *`RSSACR`*: The Rejection SSA (RSSA) with Composition-Rejection method of
Thanh et al [6]. With `RSSA`, for very large reaction networks it often offers
Thanh et al [6]. With `RSSA`, for very large reaction networks, it often offers
the best performance of all methods.
- `RDirect`: A variant of Gillespie's Direct method [1] that uses rejection to
sample the next reaction.
- `FRM`: The Gillespie first reaction method SSA [1]. `Direct` should generally
offer better performance and be preferred to `FRM`.
- `FRMFW`: The Gillespie first reaction method SSA [1] with `FunctionWrappers`.
- *`NRM`*: The Gibson-Bruck Next Reaction Method [7]. For some reaction network
structures this may offer better performance than `Direct` (for example,
structures, this may offer better performance than `Direct` (for example,
large, linear chains of reactions).
- *`Coevolve`*: An adaptation of the COEVOLVE algorithm of Farajtabar et al [8].
Currently the only aggregator that also supports *bounded*
@@ -372,7 +372,7 @@ evolution, Journal of Machine Learning Research 18(1), 1305–1353 (2017). doi:
Italicized constant rate jump aggregators above require the user to pass a
dependency graph to `JumpProblem`. `Coevolve`, `DirectCR`, `NRM`, and
`SortingDirect` require a jump-jump dependency graph, passed through the named
parameter `dep_graph`. i.e.
parameter `dep_graph`. i.e.,
```julia
JumpProblem(prob, DirectCR(), jump1, jump2; dep_graph = your_dependency_graph)
```
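For a concrete (hypothetical) example of such a graph, consider a linear chain of three reactions where each jump's product is the next jump's substrate. The jump-jump dependency graph could then be built as:

```julia
# Hypothetical example: dep_graph[i] lists every jump whose rate must
# be recomputed when jump i fires (a jump always depends on itself).
# For a chain A -> B -> C -> D, jump i also changes the substrate of
# jump i + 1.
num_jumps = 3
dep_graph = [Int[] for _ in 1:num_jumps]
for i in 1:num_jumps
    push!(dep_graph[i], i)                       # self-dependency
    i < num_jumps && push!(dep_graph[i], i + 1)  # feeds the next reaction
end
dep_graph  # [[1, 2], [2, 3], [3]]
```

This vector-of-vectors would then be supplied via the `dep_graph` keyword shown above.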
@@ -388,7 +388,7 @@ when the `i`th jump occurs. Internally, all `MassActionJump`s are ordered before
`ConstantRateJump`s and bounded `VariableRateJump`s. General `VariableRateJump`s
are not handled by aggregators, and so not included in the jump ordering for
dependency graphs. Note that the relative order between `ConstantRateJump`s and
relative order between bounded `VariableRateJump`s is preserved. In this way one
relative order between bounded `VariableRateJump`s is preserved. In this way, one
can precalculate the jump order to manually construct dependency graphs.

`RSSA` and `RSSACR` require two different types of dependency graphs, passed
@@ -401,7 +401,7 @@ through the following `JumpProblem` kwargs:
value, `u[i]`, altered when the jump occurs.

For systems generated from a [Catalyst](https://docs.sciml.ai/Catalyst/stable/)
`reaction_network` these will be auto-generated. Otherwise you must explicitly
`reaction_network` these will be auto-generated. Otherwise, you must explicitly
construct and pass in these mappings.

## Recommendations for exact methods
@@ -430,7 +430,7 @@ For systems with only `ConstantRateJump`s and `MassActionJump`s,
often substantially outperform the other methods.

For pure jump systems, time-step using `SSAStepper()` with a `DiscreteProblem`
unless one has general (i.e. non-bounded) `VariableRateJump`s.
unless one has general (i.e., non-bounded) `VariableRateJump`s.

In general, for systems with sparse dependency graphs, if `Direct` is slow, one
of `SortingDirect`, `RSSA` or `RSSACR` will usually offer substantially better
