Associate Editor: Nelle Varoquaux
Reviewer : choose to remain anonymous
Reviewer: Reviewing history
Paper submitted March 09, 2023
Reviewer invited March 27th, 2023
Review 1 received June 8th, 2023
Paper revised October 02, 2023
Reviewer invited October 03, 2023
Review 2 received November 30th, 2023
Paper conditionally accepted February 9th, 2024
First round
The scope of the paper is quite specific, since it addresses the issue of importance sampling in high dimension solely in the special case of a Gaussian target measure. While the issues raised by importance sampling in high dimension are generic, the solution proposed in this paper strongly relies on that Gaussian assumption. Despite a somewhat narrow framework, the solution proposed is innovative. The theoretical results are easily interpretable from the importance sampling perspective, as they can be related to the effective sample size (and hence the efficiency of the procedure). The numerical part is well documented.
We thank the Reviewer for his/her encouraging comments.
If the paper should go under revision before being resubmitted, I would suggest clarifying a point. If one is allowed to sample from $g^⋆$ to estimate $m^⋆$ and $\Sigma^⋆$, why would one compute the estimator based on an i.i.d.\ sample from $\mathcal{N}(\hat{m}^⋆, \hat{\Sigma}^⋆)$, rather than the self-normalised importance sampling estimator based on $g^⋆$? What is the gain in terms of the variance of the estimators?
I found that this point could be made clearer for the overall understanding of the paper (beginning of Section 3.1), though the benefit of the solution in terms of estimating the covariance matrix is well shown. (Obviously, this remark is more about how the authors motivate their contribution than about a mathematical challenge.)
We thank the Reviewer for raising this point. Self-normalized importance sampling (SNIS) is better adapted to the estimation of $\mathcal{E} = \int \phi f$ when $f$ is known only up to a normalizing constant. Moreover, as we elaborate below, it is also not appropriate for the integral considered in the article when $\phi$ is an indicator function, because of issues with the supports of the distributions involved. Since we are primarily interested in reliability problems, this is a very strong limitation of this approach and the reason why we do not consider it in the paper. Nonetheless, we can always compute the SNIS estimate $\widehat{\mathcal{E}^{SNIS}}$ of $\mathcal{E}$ in the case where $f$ is known and $g$ is an auxiliary distribution: $\widehat{\mathcal{E}^{SNIS}} = \left( \sum (\phi f/g)({\bf X}_i) \right) \bigg / \left( \sum (f/g)({\bf X}_i) \right),$
with the ${\bf X}_i$'s i.i.d.\ $\sim g$.
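As a minimal numerical sketch of this general SNIS formula (our own illustration, not taken from the paper), one can take $f$ known only through its unnormalized density, $\phi(x) = x^2$, and a wider normal as auxiliary distribution $g$; all names below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Unnormalized target density: f_tilde ∝ exp(-x^2/2), i.e. a standard normal
# with its normalizing constant deliberately dropped (SNIS tolerates this).
f_tilde = lambda x: np.exp(-x**2 / 2)
phi = lambda x: x**2                       # integrand; its mean under f is Var[X] = 1

# Auxiliary distribution g = N(0, sd_g^2); its support covers that of f
sd_g = 2.0
g = lambda x: np.exp(-x**2 / (2 * sd_g**2)) / (sd_g * np.sqrt(2 * np.pi))

x = rng.normal(0.0, sd_g, size=N)          # X_i i.i.d. ~ g
w = f_tilde(x) / g(x)                      # unnormalized importance weights
snis = np.sum(phi(x) * w) / np.sum(w)      # self-normalized IS estimate of E
```

The resulting `snis` should be close to 1, the variance of the standard normal, even though the normalizing constant of $f$ was never used.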
This estimate $\widehat{\mathcal{E}^{SNIS}}$ converges to $\mathcal{E}$ if the support of $g$ includes both the support of $g^⋆ = \frac{\phi f}{\mathcal{E}}$ and the support of $f$. If $g = g^⋆$, this last condition is not always fulfilled: in particular, it does not hold when $\mathcal{E}$ is a probability, as in test cases 1, 2 and 4 of the article. When it is fulfilled, the computation of $\widehat{\mathcal{E}^{SNIS}}$ with $g = g^⋆$ gives
$$\widehat{\mathcal{E}^{SNIS}} = \frac{\sum \left(\frac{\phi f}{g^⋆}\right)({\bf X}_i)}{\sum \left(\frac{f}{g^⋆}\right)({\bf X}_i)} = \frac{\sum \left(\frac{\phi f \mathcal{E}}{\phi f}\right)({\bf X}_i)}{\sum \left(\frac{f \mathcal{E}}{\phi f}\right)({\bf X}_i)} = \frac{N \mathcal{E}}{\mathcal{E} \sum \left(\frac{1}{\phi}\right)({\bf X}_i)} = \frac{1}{\frac{1}{N} \sum \left(\frac{1}{\phi}\right)({\bf X}_i)},$$
with the ${\bf X}_i$'s i.i.d.\ $\sim g^⋆$. The accuracy of $\widehat{\mathcal{E}^{SNIS}}$ then likely depends on the behaviour of $\frac{1}{\phi}$ on the support of $g^⋆$. A perspective for this work is thus the study of the mean square error of the $\widehat{\mathcal{E}^{SNIS}}$ estimate; it could be interesting to compare it to the mean square error of the proposed estimates.
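To make the support issue concrete, the following sketch (our own illustration, not from the paper) takes $\phi$ to be an indicator, $\phi(x) = \mathbf{1}\{x > 2\}$, with $f$ a standard normal density. Since $\phi \equiv 1$ on the support of $g^⋆ \propto \phi f$, the SNIS estimate with $g = g^⋆$ degenerates to 1 whatever the true probability:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
phi = lambda x: (x > 2.0).astype(float)                # indicator: E = P(X > 2)

# Sample from g* ∝ phi * f (a normal truncated to x > 2) by rejection from f;
# SNIS ratios are unchanged when g* is known only up to its constant.
z = rng.normal(size=1_000_000)
x = z[z > 2.0]                                          # X_i i.i.d. ~ g*

# SNIS with g = g*: the weights f/g* ∝ 1/phi equal 1 on the support of g*,
# so numerator and denominator coincide term by term.
snis = np.sum(phi(x) * f(x) / (phi(x) * f(x))) / np.sum(f(x) / (phi(x) * f(x)))
print(snis)   # degenerates to 1.0, far from P(X > 2) ≈ 0.0228
```

This is exactly the failure of the support condition described above: $g^⋆$ does not cover the support of $f$, so the self-normalizing denominator can no longer estimate the normalizing constant.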
Second round of review
Thank you to the authors for their response. I’m in favor of accepting the paper.