diff --git a/collections/_posts/2025-01-18-GradientStepDenoiser.md b/collections/_posts/2025-01-18-GradientStepDenoiser.md
index daf69103..1e9eea81 100755
--- a/collections/_posts/2025-01-18-GradientStepDenoiser.md
+++ b/collections/_posts/2025-01-18-GradientStepDenoiser.md
@@ -124,7 +124,7 @@ What we really want is the best of both worlds:
 - good reconstructions using neural networks
 - some constraints with respect to the observations
 
-Plug-and-Play methods are exactly this compromise. They use the traditionnal variational formulation of inverse problems and replace the hand-crafted regularization $g$ by a Gaussian denoiser $$D_\sigma$$.
+Plug-and-Play methods are exactly this compromise. They use the traditional variational formulation of inverse problems and replace the hand-crafted regularization $$g$$ by a Gaussian denoiser $$D_\sigma$$.
 
-Let's take the forward-backward splitting: it's Plug-and-Play version now simply becomes:
+Let's take the forward-backward splitting: its Plug-and-Play version now simply becomes:
 $$x^{n+1} = D_\sigma \circ \left( \text{Id} - \tau \nabla f \right) (x^{n})$$
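
The Plug-and-Play forward-backward iteration in the hunk above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the post: the data fidelity is assumed to be $$f(x) = \tfrac{1}{2}\|Ax - y\|^2$$ (so $$\nabla f(x) = A^\top(Ax - y)$$), and soft-thresholding is used as a hypothetical stand-in for the learned Gaussian denoiser $$D_\sigma$$.

```python
import numpy as np

def pnp_forward_backward(y, A, denoiser, tau, n_iters=300):
    """Plug-and-Play forward-backward splitting:
    x^{n+1} = D_sigma((Id - tau * grad f)(x^n)),
    with f(x) = 0.5 * ||A x - y||^2, i.e. grad f(x) = A.T @ (A x - y)."""
    x = A.T @ y  # simple initialization from the observations
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)      # gradient (data-fidelity) step
        x = denoiser(x - tau * grad)  # denoising step replaces the prox of g
    return x

# Toy stand-in for a trained denoiser D_sigma: soft-thresholding
# (the prox of the l1 norm), which promotes sparse reconstructions.
def soft_threshold(x, lam=0.05):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Toy inverse problem: y = A x_true + noise, with a sparse x_true.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)) / np.sqrt(40)
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(40)

tau = 0.9 / np.linalg.norm(A, 2) ** 2  # step size below 1/L, L = ||A||_2^2
x_hat = pnp_forward_backward(y, A, soft_threshold, tau=tau)
```

With the soft-thresholding stand-in this reduces to ISTA; the point of the Plug-and-Play view is that `denoiser` can be swapped for any Gaussian denoiser, including a neural network.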