Jacobi elliptic functions and complete elliptic integral of the first kind #79
base: master
Conversation
Thanks! Is this based on the mpmath code? If so, that's fine as it's a compatible licence, but we need to acknowledge the original copyright.
```diff
@@ -71,7 +71,12 @@ end
 export sinint,
     cosint

+export ellipj,
+    jss,jsc,jsd,jsn,jcs,jcc,jcd,jcn,jds,jdc,jdd,jdn,jns,jnc,jnd,jnn,
```
What are all these exports?
Ah, I see, they're generated by macros. The names are a bit cryptic: are these standard? Maybe prefix them (`elliptic_jss`) or stick them in a module (`Elliptic.jss`)?
There are 16 Jacobi elliptic functions, namely all pairs of the letters scdn. Since `cd` is already taken, I decided to put a `j` (for Jacobi) in front of the function names to avoid name collisions. Elliptic.jl solved this issue by putting these functions into a module `Jacobi`, but I'd say `jsn(u,m)` is more convenient than `Jacobi.sn(u,m)`. Might be worth discussing, though.
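For reference, the 16 names follow Glaisher's notation (DLMF 22.2): writing `nn(u,m) = 1`, each function `pq` is the ratio `pn/qn`. A minimal sketch of what the macro-generated definitions amount to (the `j`-prefixed names follow the scheme described above; the actual macro internals in the PR may differ):

```julia
# Sketch only: how the derived functions relate to the three primary ones.
# jsn, jcn, jdn are assumed to be the primary Jacobi functions sn, cn, dn.
jsc(u,m) = jsn(u,m)/jcn(u,m)   # sc = sn/cn
jsd(u,m) = jsn(u,m)/jdn(u,m)   # sd = sn/dn
jns(u,m) = 1/jsn(u,m)          # ns = nn/sn = 1/sn
jnn(u,m) = one(u)              # same-letter pairs (ss, cc, dd, nn) are identically 1
```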
ah ok, how about `jacobisn`? This matches nicely with `besselk`/`besselj`, etc.
`jacobisn` possibly makes it harder to distinguish the functions, and it wouldn't really save much typing compared to `Jacobi.sn`. It might be worth getting input from someone who uses these functions regularly, though.
Looking at other packages:
- Mathematica: `JacobiSN`
- Matlab & SciPy provide them all at once via an `ellipj` function.
- GSL provides them all via `gsl_sf_elljac_e`.
We might also want to rename the `K` and `iK` functions. It's just too much of a hassle not to be able to use `K` for anything else. For reference:
- Mathematica: `EllipticK`
- Matlab: `ellipticK`
- SciPy: `ellipk`
- GSL: `gsl_sf_ellint_Kcomp`

I'd vote for `ellipK`, as the K is usually uppercase in mathematical notation. Definitely not `ellipticK` unless we rename `ellipj`.
Also, I am a bit unsure about providing `iK` and how it should be named. If you look at the code, it definitely makes sense to provide `iK`, because `K(m)` is really just `iK(1-m)`, and so `K(1-m) = iK(1-(1-m))`, which is just stupid. But AFAIK no other software package provides this, which means I might be doing something wrong. And the commonly used mathematical notation for `K(1-m)` seems to be K', but obviously we can't use that, so we need to think of something else.
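To make the relation concrete, here is a minimal sketch (the names `ellipiK`/`ellipKprime` are placeholders, not the PR's API): if `ellipiK(m1)` computes the integral directly from the complementary parameter `m1 = 1-m`, then both K and K' fall out without the lossy `1-(1-m)` round trip:

```julia
# Sketch, assuming ellipiK(m1) evaluates K at parameter m = 1 - m1:
ellipK(m)      = ellipiK(1 - m)  # K(m)
ellipKprime(m) = ellipiK(m)      # K'(m) = K(1-m), without ever forming 1-(1-m)
```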
I'd still prefer to rename the functions to longer names.
I skimmed through the mpmath code (and a few others), but at most I took ideas, no code.
src/elliptic.jl (outdated)
```julia
#----------------
# Pick algorithm

Base.@pure puresqrt(x) = sqrt(x)
```
Isn't `sqrt` already pure?
Actually, it isn't for `BigFloat`, but that is intentional, since we can change precision via `setprecision`. Using `@pure` here is incorrect, and will cause problems when computing at different precisions.
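To illustrate the hazard: `@pure` licenses the compiler to constant-fold calls, so a `BigFloat` result computed at one precision may be reused after `setprecision` changes it. A minimal sketch of the failure mode (hypothetical, using the `puresqrt` from this diff):

```julia
Base.@pure puresqrt(x) = sqrt(x)   # as in the diff above

setprecision(BigFloat, 256)
f() = puresqrt(big(2.0))           # result may be folded in at 256 bits

setprecision(BigFloat, 128)
f()                                # @pure permits the compiler to return the
                                   # cached 256-bit value instead of recomputing
```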
It also isn't for the `FloatXX` types:

```julia
julia> foo(T) = Val{sqrt(eps(T))}()
foo (generic function with 1 method)

julia> @code_warntype foo(Float64)
Variables:
  #self# <optimized out>
  T <optimized out>

Body:
  begin
      return ((Core.apply_type)(Main.Val, (Base.Math.sqrt_llvm)(2.220446049250313e-16)::Float64)::Type{Val{_}} where _)()::Val{_} where _
  end::Val{_} where _

julia> foo(T) = Val{puresqrt(eps(T))}()
foo (generic function with 1 method)

julia> @code_warntype foo(Float64)
Variables:
  #self# <optimized out>
  T <optimized out>

Body:
  begin
      return $(QuoteNode(Val{1.4901161193847656e-8}()))
  end::Val{1.4901161193847656e-8}
```
But I agree, I should have done

```julia
puresqrt(x) = sqrt(x)
Base.@pure puresqrt(x::Union{Float16,Float32,Float64}) = sqrt(x)
```
With this, I get

```julia
julia> foo(T) = Val{puresqrt(eps(T))}()
foo (generic function with 1 method)

julia> @code_warntype foo(Float64)
Variables:
  #self# <optimized out>
  T <optimized out>

Body:
  begin
      return $(QuoteNode(Val{1.4901161193847656e-8}()))
  end::Val{1.4901161193847656e-8}

julia> @code_warntype foo(BigFloat)
Variables:
  #self# <optimized out>
  T::Type{BigFloat}

Body:
  begin
      SSAValue(0) = $(Expr(:invoke, MethodInstance for eps(::Type{BigFloat}), :(Main.eps), :(T)))
      return ((Core.apply_type)(Main.Val, $(Expr(:invoke, MethodInstance for sqrt(::BigFloat), :(Main.sqrt), SSAValue(0))))::Type{Val{_}} where _)()::Val{_} where _
  end::Val{_} where _
```
Just out of curiosity, is there any fundamental obstacle to doing this in Base, possibly for all numerical functions?
Actually, maybe it would be better to just do

```julia
Base.@pure puresqrt(x) = Float64(sqrt(x))
```

Then `puresqrt` is pure, and since the `nsteps` computations are done in `Float64` anyway, I don't lose anything in terms of generality.
Overall, this looks pretty good to me. It would be nice to have a few more comments, particularly as to how the generated functions work and why they're needed. I'm sure @stevengj will have some thoughts as well.
Actually, I am still slightly hesitant about all the generated functions. It would be nice if we could reduce their use a bit.
The main reason for the generated functions is to avoid heap allocation. The structure of the algorithms is to first compute …

Regarding the instability of …: there is a slight hiccup in that we currently compute the number of Landen steps twice for BigFloats falling into the last two branches of ….
What I meant was

```julia
function ellipj_growm(sn,cn,dn, k)
    # [1, Sec 16.12] / https://dlmf.nist.gov/22.7.i
    for kk in reverse(k)
        sn,cn,dn = (1+kk)*sn/(1+kk*sn^2), cn*dn/(1+kk*sn^2), (1-kk*sn^2)/(1+kk*sn^2)
        # ^ Use [1, 16.9.1]. Idea taken from [2]
    end
    return sn,cn,dn
end
```

Basically, generated functions should be considered a weapon of last resort: they are powerful, but they do add considerable complexity, both for the compiler (i.e. they make precompilation more difficult) and for humans reading the code. As for …
Right, I agree I overused generated functions. I also agree that there probably isn't much point in avoiding heap allocations for `BigFloat`.
The only remaining issues with this PR seem to be about the names of the functions. Currently, we have …

Some more feedback regarding this would be helpful. There was further discussion regarding my use of generated functions and the ….
Is there a timeline for this to merge?
I couldn't find an appropriate place in the code to comment on this, so I'll do it here if that's okay. Currently the algorithm doesn't know to use the analytic continuation for real-valued arguments > 1:

```julia
julia> ellipK(2.0)
ERROR: DomainError:
Stacktrace:
 [1] ellipiK_agm(::Float64) at /Users/mason/.julia/v0.6/SpecialFunctions/src/elliptic.jl:207
 [2] ellipK(::Float64) at /Users/mason/.julia/v0.6/SpecialFunctions/src/elliptic.jl:214
```

whereas giving a complex argument will cause it to use the correct analytic continuation:

```julia
julia> ellipK(2.0+0im)
1.3110287771460596 - 1.3110287771460596im
```

Is this intended behaviour?
This is intentional and consistent with ….

We could, and possibly should, print a similar error message for ….
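For illustration, such a guard might look like the following (a sketch only, reusing the `ellipiK_agm` name from the stacktrace above; the PR's actual argument handling may differ):

```julia
# Sketch: throw a descriptive DomainError for real m > 1 instead of a bare one.
function ellipK(m::Real)
    m <= 1 || throw(DomainError(m,
        "ellipK(m) is only real for m <= 1. " *
        "Call ellipK(complex(m)) to get the analytic continuation."))
    return ellipiK_agm(1 - m)
end
```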
Repeating Mason Protter's question: Is there a timeline for this to merge?
If you ever do elliptic integrals of the second kind, please tag me so I can benchmark and compare with https://github.com/nolta/Elliptic.jl, which is my current dependency. (Assuming, of course, they ever get merged.)
Once again, is there any timeline on getting this merged? If not, why?
bump
```julia
Base.@pure function nsteps(m,ε)
    i = 0
    while abs(m) > ε
        m = descstep(m)^2
        i += 1
    end
    return i
end
Base.@pure nsteps(ε,::Type{<:Real}) = nsteps(0.5,ε) # Guarantees convergence in [-1,0.5]
Base.@pure nsteps(ε,::Type{<:Complex}) = nsteps(0.5+sqrt(3)/2im,ε) # This is heuristic.
```
I still don't think the `@pure`s here are necessary (unless you have benchmarks suggesting otherwise).
These functions aren’t pure anyways...
Right, this needs to be looked into.
```julia
    # For all FloatXX types, this can be done at compile time, while for
    # BigFloat this has to be done at runtime.
    T = promote_type(typeof(u),typeof(m))
    ε = puresqrt(Float64(eps(real(typeof(m)))))
```
Suggested change (a plain `sqrt` should be sufficient in most cases):

```diff
-    ε = puresqrt(Float64(eps(real(typeof(m)))))
+    ε = sqrt(Float64(eps(real(typeof(m)))))
```
Here's a list of remaining issues with this PR: …

Regarding the last point: the advantage of this implementation is that it supports complex arguments and arbitrary number types. On the other hand, #135 provides a uniform implementation for both ….

Sorting this out might take a couple of days to read through the literature, benchmark, and maybe implement ….

If anyone is willing to tackle these issues, please feel free to do so. I'd be happy to give a hand, but I won't invest any further time unless there's a clear path to how this PR might finally get merged.
Sorry I haven't chimed in here, but @simonbyrne's comments all seem pretty reasonable to me, and I had been waiting for them to be addressed before I looked at this PR closely.
```julia
Base.@pure function nsteps(m,ε)
    i = 0
    while abs(m) > ε
        m = descstep(m)^2
        i += 1
    end
    return i
end
Base.@pure nsteps(ε,::Type{<:Real}) = nsteps(0.5,ε) # Guarantees convergence in [-1,0.5]
Base.@pure nsteps(ε,::Type{<:Complex}) = nsteps(0.5+sqrt(3)/2im,ε) # This is heuristic.

function ellipj_nsteps(u,m)
    # Compute the number of Landen steps required to reach machine precision.
    # For all FloatXX types, this can be done at compile time, while for
    # BigFloat this has to be done at runtime.
    T = promote_type(typeof(u),typeof(m))
    ε = puresqrt(Float64(eps(real(typeof(m)))))
    N = nsteps(ε,typeof(m))
    return ellipj_dispatch(u,m,Val{N}())::NTuple{3,T}
end
```
Hi. Sorry if this isn't in the correct place. I was hoping this would be finished. From my reading of this pull request, the major stumbling block is these lines that use a lot of `@pure`. When I tried to test this I found that the `@pure`s were needed to prevent dynamic dispatch. I tried to modify the code a bit to prevent this and wound up with the following:
```julia
Base.@pure function nsteps(M)
    m = _convfac(M)
    ε = _vareps(M)
    i = 0
    while abs(m) > ε
        m = descstep(m)^2
        i += 1
    end
    return i
end
@inline _convfac(::Type{<:Real}) = 0.5
@inline _convfac(::Type{<:Complex}) = 0.5 + sqrt(3)/2im
@inline _vareps(::Type{<:Complex{T}}) where {T} = Float64(sqrt(eps(T)))
@inline _vareps(T::Type{<:Real}) = Float64(sqrt(eps(T)))

function ellipj_nsteps(u,m)
    # Compute the number of Landen steps required to reach machine precision.
    # For all FloatXX types, this can be done at compile time, while for
    # BigFloat this has to be done at runtime.
    T = promote_type(typeof(u),typeof(m))
    N = nsteps(T)
    return ellipj_dispatch(u,m,Val{N}())::NTuple{3,T}
end
```
This seems to work in my testing. For versions < 1.7 I still needed the `@pure` to prevent dynamic dispatch. I am not sure if this usage of `@pure` is alright, but it seems to be more in line with what I read in the docs. The good news is that on 1.7 everything infers correctly, so the `@pure` is not needed.
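For anyone wanting to reproduce this, a quick check along the lines of the earlier examples (hypothetical session; the exact output depends on the Julia version):

```julia
julia> foo(T) = Val{nsteps(T)}()
foo (generic function with 1 method)

julia> @code_warntype foo(Float64)  # on 1.7+ this should infer a concrete Val{N}()
```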
What are the chances that this gets implemented soonish? I can try to take over this pull request, @ettersi, if any additional work is needed. I just need this implementation of the Jacobi functions for some research I am doing.
Implements the Jacobi elliptic functions and the complete elliptic integral of the first kind.

Both sets of functions work for real and complex arguments as well as all `FloatXX` and `BigFloat` types. The test sets involve comparison with Python's mpmath library (http://mpmath.org/) as well as checks that the results are within a few `eps()` of the exact result.
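A short usage sketch of the proposed API (names as exported in this PR; exact numerical values omitted):

```julia
using SpecialFunctions

sn, cn, dn = ellipj(0.5, 0.3)  # all three primary Jacobi functions at once
k = ellipK(0.3)                # complete elliptic integral of the first kind
jsn(0.5, 0.3) ≈ sn             # one of the 16 individually exported variants
```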