implement svychisq for multiply-imputed data #236
@guilhermejacob could you: (1) add your code here with maybe two easily-runnable test cases |
Ok, I was able to replicate this example (I don't think it's great, but it works), and I don't know how to integrate it so that it will run on all Wald statistics. So, answering your points in the previous comments:
(1) If you want to give it a try, run this (a rough sketch of the idea appears after this comment):
(2) It takes the resulting statistics and calculates the variance of their square roots across the imputations.
(3) We have to think about that. The only real improvement of this function is minimal: it gets the dfs from the chi-square test. If you just supply it with the test, you could use another package like …
(4) Discussion ahead. |
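The snippet referenced in (1) was not captured above. What follows is a minimal, self-contained sketch of the combining rule being described, as I read equation 2.2 of Li et al. (1991): pool the completed-data chi-square statistics via the sample variance of their square roots. The simulated data and the names `make_imp`, `imp_list`, `imp_designs`, and `pool_chisq` are all made up for illustration; this is not the code from this issue.

```r
# Hedged sketch only -- not the code from this issue.
library(survey)
library(mitools)

# Simulated stand-in for a set of m = 5 completed (imputed) data sets.
set.seed(1)
make_imp <- function() {
  data.frame(
    x = factor(sample(letters[1:3], 200, replace = TRUE)),
    y = factor(sample(LETTERS[1:2], 200, replace = TRUE)),
    w = runif(200, 1, 3)
  )
}
imp_list    <- imputationList(lapply(1:5, function(i) make_imp()))
imp_designs <- svydesign(ids = ~1, weights = ~w, data = imp_list)

# One design-based chi-square test per completed data set (htest objects).
tests <- with(imp_designs, svychisq(~x + y, statistic = "Chisq"))
d <- sapply(tests, function(z) unname(z$statistic))   # the m statistics
k <- (nlevels(imp_list$imputations[[1]]$x) - 1) *     # complete-data df:
     (nlevels(imp_list$imputations[[1]]$y) - 1)       # (rows - 1) x (cols - 1)

# Combining rule (my reading of Li et al. 1991, eq. 2.2): the bracketed
# term there is the sample variance of the square roots of the m statistics.
pool_chisq <- function(d, k) {
  m  <- length(d)
  r  <- (1 + 1/m) * var(sqrt(d))                      # between-imputation term
  D2 <- (mean(d) / k - (m + 1) / (m - 1) * r) / (1 + r)
  nu <- k^(-3/m) * (m - 1) * (1 + 1/r)^2              # denominator df
  c(statistic = D2, df1 = k, df2 = nu,
    p.value = pf(D2, df1 = k, df2 = nu, lower.tail = FALSE))
}

pool_chisq(d, k)
```

As far as I can tell, nothing in this rule prevents the pooled statistic from coming out negative when the between-imputation term is large relative to the mean statistic, which would be consistent with the negative value reported further down the thread.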
The test was meant for cases when … and … The original article can be found here: Li et al. (1991). |
The step `variances <- sqrt( unlist(results))` is not clear to me. Could you please cite the formula in Li et al. (1991) that you are using to combine the results? I believe a better way of testing independence in a two-way table could be to fit a loglinear model using the function svyloglin and testing whether the interaction effect is null. It might be possible to use MIcombine for that. |
I think it's equation 2.2 on page 74. I'm taking the square roots of every imputed result. The part inside the square brackets in 2.2 is the sample variance of the square roots of the imputed results. Makes sense, right? |
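For reference, this is how I read the quantities being discussed (equation 2.2 and its reference distribution in Li et al. 1991); the notation below is reconstructed and should be checked against the article itself.

```latex
% d_l: completed-data chi-square statistics, l = 1, ..., m;  k: their degrees of freedom
\[
\bar{d}_m = \frac{1}{m}\sum_{l=1}^{m} d_l ,\qquad
\hat{r}_m = \left(1+\frac{1}{m}\right)
  \left[\frac{1}{m-1}\sum_{l=1}^{m}\left(\sqrt{d_l}-\overline{\sqrt{d}}\right)^{2}\right]
\]
\[
\hat{D}_2 = \frac{\bar{d}_m/k \;-\; \frac{m+1}{m-1}\,\hat{r}_m}{1+\hat{r}_m}
 \;\sim\; F_{k,\;\nu},
\qquad
\nu = k^{-3/m}\,(m-1)\left(1+\hat{r}_m^{-1}\right)^{2}.
\]
```

The term in square brackets is the sample variance of the square roots described above, and the F reference distribution has k numerator and ν denominator degrees of freedom, which also speaks to the degrees-of-freedom question raised further down.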
are you using the formula in:
https://stats.stackexchange.com/questions/78479/how-to-run-chi-squared-test-on-imputed-data
with the correction?
|
Where is the distribution of the test statistic in 2.1 defined? What are the numbers of degrees of freedom in the numerator and in the denominator of the F distribution? |
Yes, I'm using the formula with the correction. It's also the same in the books above. |
You mentioned that the test was meant for the cases when m = 3? In your example, k = (c-1)×(r-1) = 16; does the value of k have no influence on its applicability? |
It doesn't seem to be a problem in the books I read, so I don't think it's an issue. Section 2.4 in the article might give some additional information about this. |
It is good that you believe in your books... Multiple imputation is a bit mysterious and magical to me... For k = 16, if the missingness fraction is high, m = 3 sounds like magic!
The test statistic I got by applying the function *test_fun* in the example is -0.5042679, which is strange since the reference distribution is F, whose support is the positive real line. Perhaps you should truncate the value when it is negative.
In the example you used the parameter statistic = "Chisq". Is it valid to use statistic = "Wald"? The chi-squared test for complex surveys has to be corrected to be valid; the Wald test is more natural because its definition already uses the correct covariance matrix estimated from the sample design.
|
Sometimes, it's a matter of faith haha |
I just said I'd rather use Wald's test! |
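Regarding the statistic = "Chisq" versus statistic = "Wald" question above: svychisq accepts several statistic options, including "F", "Chisq", "Wald", and "adjWald". A quick illustration on the api data bundled with the survey package (unrelated to the multiply-imputed example in this thread, just to show the options side by side):

```r
library(survey)
data(api)

# one-stage cluster sample shipped with the survey package
dclus1 <- svydesign(id = ~dnum, weights = ~pw, fpc = ~fpc, data = apiclus1)

# Rao-Scott corrected Pearson chi-square (first-order correction)
svychisq(~sch.wide + stype, dclus1, statistic = "Chisq")

# Wald-type tests, built directly on the design-based covariance matrix
svychisq(~sch.wide + stype, dclus1, statistic = "Wald")
svychisq(~sch.wide + stype, dclus1, statistic = "adjWald")
```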
@DjalmaPessoa, you are absolutely right.
> (1) add your code here with maybe two easily-runnable test cases
It defaults to |
For pooled F-statistics, this should help: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4029775/pdf/nihms560910.pdf |
Example using Lumley's suggestion: library(mitools) … (the rest of the code block was not captured here; a rough sketch follows this comment). But if we want to test whether the interaction effect between sex and alcdos is null, we need to get the p-values for the combined object. |
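As noted, the code for this example did not come through, so here is only a rough, hedged sketch of the svyloglin route: the data are simulated stand-ins, sex and alcdos are just the variable names from the comment, and the pooling of the per-imputation tests is exactly the step still missing.

```r
# Hedged sketch: per-imputation loglinear test of the sex:alcdos interaction.
library(survey)
library(mitools)

# Simulated stand-ins for the completed (imputed) data sets.
set.seed(42)
make_imp <- function() {
  data.frame(
    sex    = factor(sample(c("male", "female"), 500, replace = TRUE)),
    alcdos = factor(sample(1:5, 500, replace = TRUE)),
    w      = runif(500, 0.5, 2)
  )
}
imps <- imputationList(lapply(1:5, function(i) make_imp()))

# One Rao-Scott test of the interaction per completed data set.
per_imp_tests <- lapply(imps$imputations, function(df) {
  d     <- svydesign(ids = ~1, weights = ~w, data = df)
  indep <- svyloglin(~sex + alcdos, d)   # independence (main effects) model
  sat   <- update(indep, ~.^2)           # add the sex:alcdos interaction
  anova(indep, sat)
})
per_imp_tests[[1]]
```

Each element of per_imp_tests is a single-imputation test; turning those m results into one combined p-value is what a rule like Li et al. (1991) (or MIcombine, if coef/vcov were available for the test objects) would still have to provide.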
https://stats.stackexchange.com/questions/78479/how-to-run-chi-squared-test-on-imputed-data
i am not totally sure how to do it (one commenter says there's a mistake). mitools:::MIcombine.default does not properly deal with a list of htest objects (the `this_result` object below). if you think it's something you can implement, we can temporarily put it in lodown (which has lots of other dataset-specific helper functions, like pnad_postStratify) and ask dr. lumley if he would prefer for it to go in library(mitools)? currently have two others implemented at: https://github.com/ajdamico/lodown/blob/master/R/survey_functions.R
thanks
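On the htest point: as far as I can tell, MIcombine's default method pools coef()/vcov() output, which htest objects (such as svychisq results) do not provide; what they do carry is the statistic, its degrees of freedom, and a p-value. A tiny stand-in illustration of the extraction step (the this_result below is simulated with chisq.test and is not the object referred to above):

```r
# Stand-in htest objects, only to show which fields a pooling rule can use.
set.seed(1)
this_result <- lapply(1:3, function(i) {
  chisq.test(table(sample(letters[1:3], 100, replace = TRUE),
                   sample(LETTERS[1:2], 100, replace = TRUE)))
})
sapply(this_result, function(z) unname(z$statistic))  # completed-data statistics
unname(this_result[[1]]$parameter)                     # their common df
```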