
Report of reviewer #2 #3

NelleV opened this issue Mar 11, 2024 · 0 comments
NelleV commented Mar 11, 2024

Associate Editor: Nelle Varoquaux
Reviewer: chose to remain anonymous

Reviewing history

  • Paper submitted March 09, 2023
  • Reviewer invited March 27th, 2023
  • Review 1 received June 8th, 2023
  • Paper revised October 02, 2023
  • Reviewer invited October 03, 2023
  • Review 2 received November 30th, 2023
  • Paper conditionally accepted February 9th, 2024

First round

The scope of the paper is quite specific, since it addresses the issue of importance sampling in high dimension solely in the special case of a Gaussian target measure. While the issues raised by importance sampling in high dimension are generic, the solution proposed in this paper strongly relies on that Gaussian assumption. Despite a somewhat narrow framework, the solution proposed is innovative. The theoretical results are easily interpretable from the importance sampling perspective, as they can be related to the effective sample size (and hence the efficiency of the procedure). The numerical part is well documented.

We thank the Reviewer for his/her encouraging comments.

If the paper were to undergo revision before being resubmitted, I would suggest clarifying one point. If one is allowed to sample from $g^⋆$ to estimate $m^⋆$ and $\Sigma^⋆$, why would one compute the estimator based on an i.i.d. sample from $\mathcal{N}(\hat{m}^⋆, \hat{\Sigma}^⋆)$, rather than the self-normalized importance sampling estimator based on $g^⋆$? What is the gain in terms of the variance of the estimators?
I found that this point could be made clearer for the overall understanding of the paper (beginning of section 3.1), though the benefit of the solution in terms of estimating the covariance matrix is well shown. (Obviously, this remark is more about how the authors motivate their contribution than a mathematical challenge.)

We thank the Reviewer for raising this point. Self-normalized importance sampling (SNIS) is better suited to the estimation of $\mathcal{E} = \int \phi f$ when $f$ is known only up to a constant. Moreover, as we elaborate below, it is also not appropriate for the integral considered in the article when $\phi$ is an indicator function, because of issues with the supports of the distributions involved. Since we are primarily interested in reliability problems, this is a very strong limitation of this approach and the reason why we do not consider it in the paper. Nonetheless, we can always compute the SNIS estimate $\widehat{\mathcal{E}^{SNIS}}$ of $\mathcal{E}$ in the case where $f$ is known and $g$ is an auxiliary distribution:
$\widehat{\mathcal{E}^{SNIS}} = \left( \sum (\phi f/g)({\bf X}_i) \right) \bigg / \left( \sum (f/g)({\bf X}_i) \right),$
with the ${\bf X}_i$'s i.i.d.\ $\sim g$.
This estimate $\widehat{\mathcal{E}^{SNIS}}$ converges to $\mathcal{E}$ if the support of $g$ includes both the support of $g^⋆ = \frac{\phi f}{\mathcal{E}}$ and the support of $f$. If $g = g^⋆$, this last condition is not always fulfilled: in particular, it is not the case when $\mathcal{E}$ is a probability, as in test cases 1, 2, and 4 of the article. When it is fulfilled, the computation of $\widehat{\mathcal{E}^{SNIS}}$ with $g = g^⋆$ gives
$\widehat{\mathcal{E}^{SNIS}} = \left( \sum (\frac{\phi f}{g^⋆})({\bf X}_i) \right) \bigg / \left( \sum (\frac{f}{g^⋆})({\bf X}_i) \right)$
$= \left( \sum (\frac{\phi f\mathcal{E} }{\phi f})({\bf X}_i) \right) \bigg / \left( \sum (\frac{f \mathcal{E}}{\phi f})({\bf X}_i) \right)$\
$= N \mathcal{E} \bigg / \mathcal{E} \left( \sum (\frac{1 }{\phi})({\bf X}_i) \right)$\
$= 1 \bigg / \left( \frac{1}{N} \sum (\frac{1 }{\phi})({\bf X}_i) \right)$
with the ${\bf X}_i$'s i.i.d.\ $\sim g^⋆$. The accuracy of $\widehat{\mathcal{E}^{SNIS}}$ probably depends on the behaviour of $\frac{1}{\phi}$ on the support of $g^⋆$. A perspective for this work is thus the study of the mean square error of the $\widehat{\mathcal{E}^{SNIS}}$ estimate. It could be interesting to compare it to the mean square error of the proposed estimates.
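To make the support issue concrete, here is a small numerical sketch (our own illustration, not from the paper), with a standard Gaussian target $f$ and $\phi$ the indicator of a tail event. A proposal whose support covers both the support of $g^⋆$ and that of $f$ yields a consistent SNIS estimate, whereas sampling from $g^⋆$ itself makes $\phi({\bf X}_i) = 1$ for every sample, so the estimator collapses to 1 and carries no information about $\mathcal{E}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Target f = N(0, 1); phi = indicator of {x > 2}, so E = P(X > 2) ~ 0.0228.
def f(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def phi(x):
    return (x > 2.0).astype(float)

# SNIS with a shifted Gaussian proposal g = N(2, 1): its support (all of R)
# covers both supp(g_star) and supp(f), so the estimate converges to E.
def g_pdf(x):
    return np.exp(-(x - 2.0)**2 / 2) / np.sqrt(2 * np.pi)

x = rng.normal(2.0, 1.0, N)
w = f(x) / g_pdf(x)
snis = np.sum(phi(x) * w) / np.sum(w)

# SNIS with g = g_star (f restricted to {x > 2}, drawn here by crude
# rejection sampling): every sample satisfies phi(X_i) = 1, so the
# formula 1 / ((1/N) * sum(1/phi)) degenerates to exactly 1.
y = rng.normal(0.0, 1.0, 10 * N)
y = y[y > 2.0]
degenerate = 1.0 / np.mean(1.0 / phi(y))

print(snis)        # close to P(X > 2) ~ 0.0228
print(degenerate)  # exactly 1.0
```

The first estimate illustrates the support condition being satisfied; the second shows why, for probability estimation, SNIS based on $g^⋆$ cannot recover $\mathcal{E}$.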

Second round of review

Thank you to the authors for their response. I’m in favor of accepting the paper.

NelleV added the review label Mar 11, 2024