
no. of enn to do mode ID on needs to be data driven #26

Open
nielsenmb opened this issue May 1, 2019 · 9 comments

@nielsenmb
Collaborator

At the moment it's set manually, when calling jar.star.asymptotic_modeid(norders = X).

It should be data driven, at least as an initial rough estimate. The best-fit asymptotic relation can then probably be extrapolated to another 1-2 radial orders on either side of numax, subject to some H0 test?
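A rough initial estimate like the one proposed here could come from a scaling relation for the p-mode envelope width. A minimal sketch, assuming the commonly used FWHM scaling of roughly 0.66 * numax^0.88 (in microhertz); the function name and the padding scheme are illustrative, not pbjam's actual code:

```python
import numpy as np

def estimate_norders(numax, dnu, extra=2):
    """Rough, data-driven guess at the number of radial orders to fit.

    Assumes the p-mode envelope FWHM scales as ~0.66 * numax**0.88
    (an assumed scaling relation, frequencies in muHz). Dividing by
    dnu gives the number of orders spanning the envelope; `extra`
    pads that by a couple of orders on either side of numax.
    """
    env_width = 0.66 * numax**0.88  # envelope FWHM in muHz (assumed scaling)
    return int(np.ceil(env_width / dnu)) + 2 * extra

# Example: a Sun-like star (numax ~ 3090 muHz, dnu ~ 135 muHz)
print(estimate_norders(3090.0, 135.0))  # -> 10
```

The best-fit asymptotic relation could then be extrapolated a further order or two beyond this range, subject to the H0 test mentioned above.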

@grd349 grd349 added this to the TASC/KASC milestone May 21, 2019
@grd349
Owner

grd349 commented May 21, 2019

This is an important issue. Can we take some guidance from the stars that we are running through ASY_peakbag now?

This is probably a very strong function of numax (& dnu?) and should probably be prior driven rather than data driven.

@nielsenmb
Collaborator Author

It's also going to be a strong function of apparent magnitude and observation length, I think, right? Even if we get a prior based on the targets we're running through now, i.e. something that's a function of numax/dnu/Gmag/Tobs, it'll still vary between missions. A Gmag = 10 star will have a very different SNR in Kepler and TESS.

@grd349
Owner

grd349 commented May 22, 2019

Yes - you are correct. The detectable modes will be very different for different magnitudes.

I suggest a pragmatic approach ... Since we know the modes are 'present', just not detectable, in all SNR cases, let's just fit the range as if it were high SNR. This will be inefficient in terms of extra computation that isn't really necessary, but a whole lot easier to implement.

Does that make sense (pragmatic sense)?

@nielsenmb
Collaborator Author

Hmm... so we fit a low-SNR spectrum as if it were high SNR, pass the output to peakbagging.py and then evaluate the detection probability of each mode? Or would you do the detection probabilities between asy_peakbag and peakbagging?

@grd349
Owner

grd349 commented May 22, 2019

Judging by your Hmm ... you are sceptical :) That's OK. I'm not so hung up on the detection probability for each mode; maybe a little bit for PLATO, but for now that can wait. For the most part we know the modes are there, so accounting for them in the model, even if they are not detectable, is no great pain for me. For the ASY_peakbag stage it's not a big deal if we fit too many modes - just some extra time on the CPU.

For the main peakbagging you are right - we need to decide what to include. We will already know something about whether or not a mode has been detected: the posteriors will be broad (basically only constrained by the pattern) and the SNR will be low. We could make decisions based on this (at least preliminary decisions).
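One standard form such a detection decision could take is a false-alarm probability against the noise-only hypothesis (the H0 test mentioned earlier in the thread). A sketch, assuming a chi-squared (2 d.o.f. per bin) spectrum; the function and its use of scipy are my illustration, not pbjam's actual implementation:

```python
from scipy.stats import gamma

def false_alarm_prob(power, background, nbins=1):
    """H0 test: probability that pure noise produces at least the
    observed `power` in an average of `nbins` spectral bins.

    Assumes the raw power spectrum is chi^2 with 2 degrees of freedom
    per bin, so the mean of nbins bins follows a Gamma distribution
    with shape nbins and scale background/nbins.
    """
    return gamma.sf(power, a=nbins, scale=background / nbins)

# A single raw bin at 5x the local background level:
p = false_alarm_prob(5.0, 1.0)  # exp(-5) ~ 0.0067
```

A mode (or a whole extrapolated order) would then be kept only if its false-alarm probability falls below some chosen threshold.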

We could also run the mixture model peak bagging as either an intermediate or perhaps even final step. For PLATO it is probably an intermediate step.

We should run some tests at some point to see if having too many modes in the final fit is a bad thing - my feeling is probably not so much if we have sensible priors (height/linewidth/frequency). The posterior we get back would just be the prior which sounds OK to me.

@nielsenmb
Collaborator Author

It's mostly the computational-time aspect I was worried about! I agree, throwing it through asy_peakbag will not cost us anything (well, marginally, since we'll be fitting a slightly wider frequency range).

I think if we put in a white-noise level term, then that in combination with the envelope height and width should take care of the cases where we have requested too wide a frequency range. The outlying modes will just have a very low SNR, and we can then set a threshold in SNR. I'm guessing it won't really bias the fit either; the information gained from the modes near numax will still dominate the likelihood function. I'm just going by intuition here, though.
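The SNR-threshold idea above could be sketched as follows, assuming a Gaussian p-mode envelope on top of a flat white-noise level; the function names and parameters are illustrative, not pbjam's actual model:

```python
import numpy as np

def mode_snr(nu, numax, env_height, env_width, white):
    """Predicted SNR of a mode at frequency nu, assuming a Gaussian
    envelope of height env_height and FWHM env_width centred on numax,
    sitting on a flat white-noise level `white` (hypothetical fit
    parameters for illustration)."""
    sigma = env_width / 2.355  # convert FWHM to standard deviation
    height = env_height * np.exp(-0.5 * (nu - numax) ** 2 / sigma**2)
    return height / white

def keep_modes(freqs, numax, env_height, env_width, white, snr_min=1.0):
    """Select mode frequencies whose predicted SNR clears the threshold."""
    freqs = np.asarray(freqs)
    snr = mode_snr(freqs, numax, env_height, env_width, white)
    return freqs[snr >= snr_min]
```

A mode well outside the envelope then drops out automatically, while modes near numax, which dominate the likelihood, are always kept.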

However, in a high-SNR case, won't the surface effect be an issue if we fit too wide a range? That isn't captured by the curvature term, is it?

@grd349
Owner

grd349 commented May 22, 2019

Surface effect - in data?

I guess you mean glitches (HeII & BCZ). The model we use does not capture this. These are relatively small perturbations on top of the current model.

@nielsenmb
Collaborator Author

nielsenmb commented May 22, 2019 via email

@grd349
Owner

grd349 commented May 23, 2019

For the typically observed frequencies in other stars, a surface term (which really only refers to the deficiency of the models) is not needed, or, possibly more correctly, it is sucked up by a modified Dnu and a slight difference in curvature. The surface term at low frequency would probably be evident, but we rarely see these low-frequency modes.

Have a look at Mikkel's Legacy paper to see how well the model we fit does with real data. I think the limit is much more likely the glitches.
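For reference, the second-order asymptotic relation under discussion can be written down directly; the point above is that a modified Dnu and the curvature coefficient can partially absorb a surface term. A sketch with hypothetical parameter names:

```python
import numpy as np

def asymptotic_nu(n, dnu, eps, alpha, nmax):
    """Radial-mode frequencies from the second-order asymptotic relation:

        nu_n = (n + eps + alpha/2 * (n - nmax)**2) * dnu

    where eps is the phase offset and alpha the curvature term.
    A surface-like frequency shift can be partially absorbed by
    adjusting dnu and alpha, as noted in the comment above.
    """
    n = np.asarray(n, dtype=float)
    return (n + eps + 0.5 * alpha * (n - nmax) ** 2) * dnu
```

With alpha = 0 this reduces to the plain linear comb n * dnu shifted by eps.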

@grd349 grd349 modified the milestones: TASC/KASC, Version 2.0 Jul 21, 2019