diff --git a/README.md b/README.md index db2f976..6198dff 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@ [![codecov](https://codecov.io/gh/JuliaMath/Roots.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaMath/Roots.jl) This package contains simple routines for finding roots, or zeros, of -scalar functions of a single real variable. The `find_zero` function +scalar functions of a single real variable using floating-point math. The `find_zero` function provides the primary interface. The basic call is `find_zero(f, x0, [M], [p]; kws...)` where, typically, `f` is a function, `x0` a starting point or bracketing interval, `M` is used to adjust the default algorithms used, and `p` can be used to pass in parameters. @@ -34,13 +34,13 @@ The various algorithms include: `Roots.Order2B` are superlinear and quadratically converging methods independent of the multiplicity of the zero. -* There are historic algorithms that require a derivative or two to be specified: - `Roots.Newton` and `Roots.Halley`. `Roots.Schroder` provides a - quadratic method, like Newton's method, which is independent of the - multiplicity of the zero. +* There are historic algorithms that require a derivative or two to be + specified: `Roots.Newton` and `Roots.Halley`. `Roots.Schroder` + provides a quadratic method, like Newton's method, which is + independent of the multiplicity of the zero. This is generalized by + `Roots.ThukralXB` (with `X` being 2, 3, 4, or 5). * There are several non-exported algorithms, such as, `Roots.Brent()`, - `FalsePosition`, `Roots.A42`, `Roots.AlefeldPotraShi`, `Roots.LithBoonkkampIJzermanBracket`, and `Roots.LithBoonkkampIJzerman`.
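The non-exported bracketing algorithms from the last bullet are called by qualifying the name with the module, as in this minimal sketch (it assumes `Roots` is installed; the function and bracket come from the README's own example):

```julia
using Roots

f(x) = exp(x) - x^4

# non-exported methods are qualified with the module name
x_brent = find_zero(f, (8, 9), Roots.Brent())
x_a42   = find_zero(f, (8, 9), Roots.A42())

# both converge to the zero near 8.6131694…
x_brent ≈ x_a42
```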
@@ -53,21 +53,21 @@ julia> using Roots julia> f(x) = exp(x) - x^4; -julia> α₀,α₁,α₂ = -0.8155534188089607, 1.4296118247255556, 8.6131694564414; +julia> α₀, α₁, α₂ = -0.8155534188089607, 1.4296118247255556, 8.6131694564414; julia> find_zero(f, (8,9), Bisection()) ≈ α₂ # a bisection method has the bracket specified true -julia> find_zero(f, (-10, 0)) ≈ α₀ # Bisection is default if x in `find_zero(f,x)` is not a number +julia> find_zero(f, (-10, 0)) ≈ α₀ # Bisection is default if x in `find_zero(f, x)` is not a scalar true -julia> find_zero(f, (-10, 0), Roots.A42()) ≈ α₀ # fewer function evaluations +julia> find_zero(f, (-10, 0), Roots.A42()) ≈ α₀ # fewer function evaluations than Bisection true ``` For non-bracketing methods, the initial position is passed in as a -scalar, or, possibly, for secant-like methods an iterable of ``(x_0, x_1)``: +scalar, or, possibly, for secant-like methods an iterable like `(x_0, x_1)`: ```julia julia> find_zero(f, 3) ≈ α₁ # find_zero(f, x0::Number) will use Order0() @@ -161,8 +161,8 @@ true The [DifferentialEquations](https://github.com/SciML/DifferentialEquations.jl) interface of setting up a problem; initializing the problem; then -solving the problem is also implemented using the methods -`ZeroProblem`, `init`, `solve!` and `solve` (from [CommonSolve](https://github.com/SciML/CommonSolve.jl)). +solving the problem is also implemented using the type +`ZeroProblem` and the methods `init`, `solve!`, and `solve` (from [CommonSolve](https://github.com/SciML/CommonSolve.jl)).
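The `CommonSolve`-style interface described in that hunk can be exercised as follows (a minimal sketch, assuming `Roots` is installed; the function and bracket are the README's own):

```julia
using Roots

g(x) = exp(x) - x^4

# set up the problem; the second argument may be a point or a bracket
Z = ZeroProblem(g, (8, 9))

# solve with the default method, or name one explicitly
x1 = solve(Z)
x2 = solve(Z, Bisection())

# both approximate the zero near 8.6131694…
x1 ≈ x2
```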
For example, we can solve a problem with many different methods, as follows: @@ -261,7 +261,7 @@ true ``` The interval can also be specified using a structure with `extrema` -defined, where `extrema` return two different values: +defined, where `extrema` returns two different values: ```julia julia> using IntervalSets diff --git a/docs/src/index.md b/docs/src/index.md index 998f417..1a07e55 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -6,7 +6,8 @@ Documentation for [Roots.jl](https://github.com/JuliaMath/Roots.jl) ## About `Roots` is a `Julia` package for finding zeros of continuous -scalar functions of a single real variable. That is solving ``f(x)=0`` for ``x``. +scalar functions of a single real variable using floating-point numbers. That is, solving ``f(x)=0`` for ``x``, adjusting for floating-point idiosyncrasies. + The `find_zero` function provides the primary interface. It supports various algorithms through the specification of a method. These include: @@ -17,11 +18,11 @@ specification of a method. These include: most floating point number types, bisection occurs in a manner exploiting floating point storage conventions leading to an exact zero or a bracketing interval as small as floating point - computations allows. Other methods include `Roots.A42`, - `Roots.AlefeldPotraShi`, `Roots.Brent`, `Roots.Chandrapatlu`, + computations allows. Other methods include `A42`, + `AlefeldPotraShi`, `Roots.Brent`, `Roots.Chandrapatlu`, `Roots.ITP`, `Roots.Ridders`, and ``12``-flavors of - `FalsePosition`. The default bracketing method is `Bisection` for - the basic floating-point types, as it is more robust to some inputs, + `FalsePosition`. The default bracketing method for + the basic floating-point types is `Bisection`, as it is more robust to some inputs, but `A42` and `AlefeldPotraShi` typically converge in a few iterations and are more performant. @@ -45,7 +46,7 @@ specification of a method.
These include: `Roots.Schroder` provides a quadratic method, like Newton's method, which is independent of the multiplicity of the zero. The `Roots.ThukralXB`, `X=2`, `3`, `4`, or `5` are also multiplicity - three. The `X` denotes the number of derivatives that need + free. The `X` denotes the number of derivatives that need specifying. The `Roots.LithBoonkkampIJzerman{S,D}` methods remember `S` steps and use `D` derivatives. @@ -53,7 +54,7 @@ specification of a method. These include: ## Basic usage -Consider the polynomial function ``f(x) = x^5 - x + 1/2``. As a polynomial, its roots, or zeros, could be identified with the `roots` function of the `Polynomials` package. However, even that function uses a numeric method to identify the values, as no solution with radicals is available. That is, even for polynomials, non-linear root finders are needed to solve ``f(x)=0``. +Consider the polynomial function ``f(x) = x^5 - x + 1/2``. As a polynomial, its roots, or zeros, could be identified with the `roots` function of the `Polynomials` package. However, even that function uses a numeric method to identify the values, as no solution with radicals is available. That is, even for polynomials, non-linear root finders are needed to solve ``f(x)=0``. (Though polynomial root-finders can exploit certain properties not available for general non-linear functions.) The `Roots` package provides a variety of algorithms for this task. In this overview, only the default ones are illustrated. @@ -77,7 +78,7 @@ true The default algorithm is guaranteed to have an answer nearly as accurate as is possible given the limitations of floating point computations. -For the zeros "near" a point, a non-bracketing method is often used, as generally the algorithms are more efficient and can be used in cases where a zero does not. 
Passing just the initial point will dispatch to such a method: +For the zeros "near" a point, a non-bracketing method is often used, as generally the algorithms are more efficient and can be used in cases where a zero does not cross the ``x`` axis. Passing just the initial point will dispatch to such a method: ```jldoctest find_zero julia> find_zero(f, 0.6) ≈ 0.550606579334135 @@ -85,7 +86,9 @@ true ``` -This finds the answer to the left of the starting point. To get the other nearby zero, a starting point closer to the answer can be used. However, an initial graph might convince one that any of the up-to-``5`` real roots will occur between ``-5`` and ``5``. The `find_zeros` function uses heuristics and a few of the algorithms to identify all zeros between the specified range. Here we see there are ``3``: +This finds the answer to the left of the starting point. To get the other nearby zero, a starting point closer to the answer can be used. + +However, an initial graph might convince one that any of the up-to-``5`` real roots will occur between ``-5`` and ``5``. The `find_zeros` function uses heuristics and a few of the algorithms to identify all zeros within the specified range. Here the method successfully identifies all ``3``: ```jldoctest find_zero julia> find_zeros(f, -5, 5) diff --git a/docs/src/reference.md b/docs/src/reference.md index cbd580f..593bb15 100644 --- a/docs/src/reference.md +++ b/docs/src/reference.md @@ -253,7 +253,7 @@ tolerances (`rtol` and `atol`). The size of ``f'(\alpha)`` is problem dependent, and can be accommodated by larger relative or absolute tolerances. -When an algorithm returns a `NaN` value, it terminates. This can happen near convergence or may indicate some issues. Early termination is checked for convergence in the size of ``f(x_n)`` with a relaxed tolerance when `strict=false` is specified (the default). +When an algorithm returns a `NaN` value, it terminates. This can happen near convergence or may indicate some issues.
Early termination is checked for convergence in the size of ``f(x_n)`` with a relaxed tolerance when `strict=false` is specified (the default). !!! note "Relative tolerances and assessing `f(x) ≈ 0`" The use of relative tolerances to check if ``f(x) \approx 0`` can lead to spurious answers where ``x`` is very large (and hence the relative tolerance is large). The return of very large solutions should be checked against expectations of the answer. @@ -302,7 +302,7 @@ Roots.fzero ## Tracking iterations -It is possible to add the keyword arguement `verbose=true` to when calling the `find_zero` function to get detailed information about the solution, and data from each iteration. If you want to save this data instead of just printing it, you can use a `Tracks` object. +It is possible to add the keyword argument `verbose=true` when calling the `find_zero` function to get detailed information about the solution and data from each iteration. To save this data, a `Tracks` object may be passed to the `tracks` keyword argument. ---- diff --git a/src/Bracketing/alefeld_potra_shi.jl b/src/Bracketing/alefeld_potra_shi.jl index c685fd4..4b80ca7 100644 --- a/src/Bracketing/alefeld_potra_shi.jl +++ b/src/Bracketing/alefeld_potra_shi.jl @@ -76,7 +76,7 @@ EQUATIONS", by Alefeld, Potra, Shi; DOI: [10.1090/S0025-5718-1993-1192965-2](https://doi.org/10.1090/S0025-5718-1993-1192965-2). The order of convergence is `2 + √5`; asymptotically there are 3 function evaluations per step. -Asymptotic efficiency index is ``(2+√5)^(1/3) ≈ 1.618...``. Less efficient, but can run faster than the [`A42`](@ref) method. +Asymptotic efficiency index is ``(2+√5)^{1/3} ≈ 1.618...``. Less efficient, but can run faster than the [`A42`](@ref) method. Originally by John Travers.
""" diff --git a/src/Bracketing/chandrapatlu.jl b/src/Bracketing/chandrapatlu.jl index 5ec54f7..6f0734f 100644 --- a/src/Bracketing/chandrapatlu.jl +++ b/src/Bracketing/chandrapatlu.jl @@ -3,7 +3,7 @@ Use [Chandrapatla's algorithm](https://doi.org/10.1016/S0965-9978(96)00051-8) -(cf. [Scherer](https://www.google.com/books/edition/Computational_Physics/cC-8BAAAQBAJ?hl=en&gbpv=1&pg=PA95&printsec=frontcover) +(cf. [Scherer](https://www.google.com/books/edition/Computational_Physics/cC-8BAAAQBAJ?hl=en&gbpv=1&pg=PA95&printsec=frontcover)) to solve ``f(x) = 0``. Chandrapatla's algorithm chooses between an inverse quadratic step or a bisection step based on a computed inequality. diff --git a/src/Bracketing/itp.jl b/src/Bracketing/itp.jl index 988a1f0..9e0119e 100644 --- a/src/Bracketing/itp.jl +++ b/src/Bracketing/itp.jl @@ -18,8 +18,7 @@ of `κ₂` is `2`, and the default value of `n₀` is `1`. Suggested on [discourse](https://discourse.julialang.org/t/julia-implementation-of-the-interpolate-truncate-project-itp-root-finding-algorithm/77739) -by `@TheLateKronos`, who supplied the original version of the code -below. +by `@TheLateKronos`, who supplied the original version of the code. """ struct ITP{T,S} <: AbstractBracketingMethod diff --git a/src/Derivative/newton.jl b/src/Derivative/newton.jl index abf932e..45fdc5f 100644 --- a/src/Derivative/newton.jl +++ b/src/Derivative/newton.jl @@ -13,7 +13,7 @@ end Implements Newton's [method](http://tinyurl.com/b4d7vls): `xᵢ₊₁ = xᵢ - f(xᵢ)/f'(xᵢ)`. This is a quadratically convergent method requiring -one derivative. Two function calls per step. +one derivative and two function calls per step. ## Examples