
Fix tests and docs after introducing possibility of indicator=nothing #66

Merged
merged 13 commits into main from fix-indicator=nothing
Jan 17, 2024

Conversation

JanJereczek
Contributor

@JanJereczek JanJereczek commented Jan 10, 2024

This PR mainly fixes the tests and docs that broke after the introduction of passing indicators = nothing (#65).

Majors:

  • When the possibility of passing nothing as an indicator was introduced, the need to extend an if-statement in significant_transitions was overlooked. This is now corrected, so the surrogate is correctly passed to the analysis pipeline in all cases.
  • The "nothing indicator" testset in full_analysis.jl needs @test count(flags) == 2, since both indicators are positive at the transition.
  • The previous PR requires the number of change metrics to match the number of indicators. This has been corrected across all docs files.
  • Modification suggestion: let us also require tail to have the same length as indicators and change_metrics. This would be more consistent and easier to maintain. So far I have applied this to the surrogate analysis; if you agree, we can make the same change across all types of significance analysis.
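The consistency requirement suggested in the last point could look roughly like the following sketch (check_tail_length is a hypothetical helper for illustration only, not the actual package code):

```julia
# Hypothetical sketch of the suggested consistency check; the function name
# and error messages are illustrative, not the package implementation.
function check_tail_length(indicators, change_metrics, tail)
    n = length(indicators)
    length(change_metrics) == n ||
        error("Expected $n change metrics, got $(length(change_metrics)).")
    # A single Symbol could still be accepted and broadcast to all indicators:
    tail isa Symbol && return fill(tail, n)
    length(tail) == n ||
        error("Expected $n tail specifications, got $(length(tail)).")
    return collect(tail)
end

check_tail_length([:var, :ar1], [:slope, :slope], :right)  # returns [:right, :right]
```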

Minors:

  • Updated the compat entry for Documenter.jl to >= 1. Note that with this version I had to use warnonly = true, since some references were not found. I tried to solve this, but only partially succeeded (I don't understand why the ref to e.g. PrecomputedLowfreqPowerSpectrum is not found).
  • The docs of Statistics.jl have many refs that were not found. To prevent this, I used the docs of StatsBase.jl instead, which are equally valid for this purpose. I am sure there is a more elegant solution, though. However, this makes me ask: why are we not simply using everything from StatsBase.jl, for instance also mean and var?
  • Updated the compat entry for TimeseriesSurrogates.jl so that a version with the fixed memory allocation for Fourier surrogates is used.
  • In docs/make.jl, I simplified the loading of example files. Before, we read the directory, looped over it, and then permuted the vector to obtain the desired order. It is simpler to just write the list manually.
  • Moved difference_of_maxes to src/ for potential future use.
  • Small corrections across the documentation...
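The docs/make.jl change mentioned above can be illustrated with a small sketch (file names are hypothetical; the actual example files differ):

```julia
# Before: read the directory (readdir returns names in alphabetical order)
# and permute the result into the desired order -- fragile if files are added.
files = sort(["tutorial.md", "api.md", "examples.md"])  # stands in for readdir(dir)
pages_before = files[[3, 2, 1]]

# After: simply write the list in the desired order.
pages_after = ["tutorial.md", "examples.md", "api.md"]

println(pages_before == pages_after)  # prints true
```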

Project.toml Outdated
TimeseriesSurrogates = "2"
TimeseriesSurrogates = "^2.6.4"
Member

This compat change doesn't make sense unless the source code explicitly uses a function that did not exist in v2.0 and only exists in v2.6. The latest version will be installed anyway.
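For context, Julia compat entries follow caret semantics, so both spellings admit every 2.x release and differ only in the lower bound (a general Pkg fact, not specific to this package):

```toml
[compat]
# "2" is equivalent to "^2" and admits versions in [2.0.0, 3.0.0).
# "^2.6.4" admits [2.6.4, 3.0.0): same upper bound, but a higher floor,
# which matters only if a fix or feature from v2.6.4 is actually required.
TimeseriesSurrogates = "^2.6.4"
```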

Comment on lines 18 to 20
[compat]
Documenter = "0.27"
ComplexityMeasures = "^2.7"
Documenter = "1"
Member

We should remove all compat entries from the docs.

docs/make.jl Outdated
"George Datseris <[email protected]>", warnonly = true, bib) # draft = true,
Member

What does warnonly do? Also remove the commented-out draft.

Contributor Author

warnonly = true prevents the docs build from failing when some references are not found (in earlier versions of Documenter.jl this was the default behavior). In our case, only a few refs are not found and I do not understand why. We should fix this (your help is required here) and then remove warnonly = true.
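If the remaining failures are only unresolved cross-references, Documenter 1 also accepts a list of error classes instead of a blanket true, so any other error type would still fail the build. A sketch for docs/make.jl, assuming the failures belong to the :cross_references class:

```julia
# Only demote cross-reference errors to warnings; all other error
# classes still abort the docs build.
makedocs(;
    # ... keyword arguments as before ...
    warnonly = [:cross_references],
)
```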

tseg_start = t_rasmussen[1:end-1] .+ 200
tseg_end = t_rasmussen[2:end] .- 200
config = SegmentedWindowConfig(indicators, change_metrics,
tseg_start, tseg_end; whichtime = last, width_ind = Int(200÷dt),
min_width_cha = 100) # require >=100 data points to estimate change metric
results = estimate_indicator_changes(config, r, t)
signif = SurrogatesSignificance(n = 10_000, tail = :right, rng = Xoshiro(1995))
signif = SurrogatesSignificance(n = 1000, tail = [:right, :right], rng = Xoshiro(1995))
Member

Since tail is the same for both, you can just keep it as is, right? No need to make it [:right, :right].

Member

Ah, ok, you enforced that tail must have the same length as the indicators.

@@ -30,15 +30,15 @@ Let us load data from a bistable nonlinear model subject to noise and to a gradu

with $x_{l}$ the state of the linear model, $x_{nl}$ the state of the bistable model, $f$ the forcing and $n$ the noise. For $f=0$ they both display an equilibrium point at $x=-1$. However, the bistable model also displays a further equilibrium point at $x=1$. Loading (and visualizing with [Makie](https://docs.makie.org/stable/)) such prototypical data to test some indicators can be done by simply running:

```@example tutorial
````@example tutorial
Member

Incorrect number of backticks.

using TransitionsInTimeseries, CairoMakie

t, x_linear, x_nlinear = load_linear_vs_doublewell()
fig, ax = lines(t, x_linear)
lines!(ax, t, x_nlinear)
ax.title = "raw data"
fig
```
````
Member

Incorrect number of backticks.

@@ -49,7 +49,7 @@ fig

The nonlinear system clearly displays a transition between two stability regimes. To forecast such transition, we analyze the fluctuations of the timeseries around the attractor, assumed to be tracked. Therefore, a detrending step is needed - here simply obtained by building the difference of the timeseries with lag 1.

```@example tutorial
````@example tutorial
Member

Incorrect number of backticks.

@@ -58,7 +58,7 @@ fig, ax = lines(tfluct, x_l_fluct)
lines!(ax, tfluct, x_nl_fluct .+ 0.05)
ax.title = "input timeseries"
fig
```
````
Member

Incorrect number of backticks. This repeats everywhere in the file.

=#

signif = SurrogatesSignificance(n = 1000, tail = :right)
signif = SurrogatesSignificance(n = 1000, tail = [:right, :right])
Member

Here we need only one tail, right?


The following blocks illustrate how the above extensive example is re-created in TransitionsInTimeseries.jl. But first, let's load the input timeseries:
The interface is simple, and directly parallelizes the [Workflow](@ref). It is based on the creation of a [`SurrogatesSignificance`](@ref), which contains a list of indicators, and corresponding metrics, to use for doing the above analysis. It also specifies what kind of surrogates to generate.
Member

This isn't correct. It should say SlidingWindowConfig.

Member

Actually, repeating information in the documentation is bad. This whole paragraph should become the sentence it originally was: The interface is simple, and directly parallelizes the [Workflow](@ref).

@codecov-commenter

Codecov Report

Attention: 69 lines in your changes are missing coverage. Please review.

Comparison is base (971564b) 41.69% compared to head (3986a3a) 40.35%.
Report is 1 commit behind head on main.

Files Patch % Lines
ext/TITVisualizations.jl 0.00% 39 Missing ⚠️
src/significance/basic_stat_significance.jl 0.00% 15 Missing ⚠️
src/significance/surrogates_significance.jl 63.63% 8 Missing ⚠️
src/visualizations.jl 0.00% 4 Missing ⚠️
src/analysis/segmented_window.jl 0.00% 1 Missing ⚠️
src/analysis/sliding_window.jl 95.00% 1 Missing ⚠️
src/change_metrics/valuediff.jl 85.71% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #66      +/-   ##
==========================================
- Coverage   41.69%   40.35%   -1.35%     
==========================================
  Files          17       19       +2     
  Lines         331      399      +68     
==========================================
+ Hits          138      161      +23     
- Misses        193      238      +45     


@Datseris Datseris merged commit 9943887 into main Jan 17, 2024
2 checks passed
@Datseris Datseris deleted the fix-indicator=nothing branch January 17, 2024 11:39