From 0552f4760e915643b4b1aa2cfa3567c3271d38f5 Mon Sep 17 00:00:00 2001
From: Thibaut Modrzyk <52449813+Tmodrzyk@users.noreply.github.com>
Date: Thu, 23 Jan 2025 11:33:46 +0100
Subject: [PATCH] Update collections/_posts/2025-01-18-GradientStepDenoiser.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Co-authored-by: Pierre Rougé <62425886+PierreRouge@users.noreply.github.com>
---
 collections/_posts/2025-01-18-GradientStepDenoiser.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/collections/_posts/2025-01-18-GradientStepDenoiser.md b/collections/_posts/2025-01-18-GradientStepDenoiser.md
index daf69103..1e9eea81 100755
--- a/collections/_posts/2025-01-18-GradientStepDenoiser.md
+++ b/collections/_posts/2025-01-18-GradientStepDenoiser.md
@@ -124,7 +124,7 @@ What we really want is the best of both worlds:
 - good reconstructions using neural networks
 - some constraints with respect to the observations
 
-Plug-and-Play methods are exactly this compromise. They use the traditionnal variational formulation of inverse problems and replace the hand-crafted regularization $g$ by a Gaussian denoiser $$D_\sigma$$.
+Plug-and-Play methods are exactly this compromise. They use the traditionnal variational formulation of inverse problems and replace the hand-crafted regularization $$g$$ by a Gaussian denoiser $$D_\sigma$$.
 
 Let's take the forward-backward splitting: it's Plug-and-Play version now simply becomes:
 $$x^{n+1} = D_\sigma \circ \left( \text{Id} - \tau \nabla f \right) (x^{n})$$
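
To illustrate the Plug-and-Play forward-backward iteration described in the patched post, here is a minimal NumPy sketch. Everything in it is an illustrative assumption rather than the post's actual code: the linear operator `A`, the step size `tau`, the toy `gaussian_denoiser`, and the function name `pnp_forward_backward` are all made up for this example; in practice $$D_\sigma$$ would be a learned denoiser.

```python
import numpy as np

def gaussian_denoiser(x, sigma):
    # Toy stand-in for D_sigma: shrinkage toward zero (MMSE denoiser for a
    # zero-mean unit-variance Gaussian prior). A real PnP scheme would use
    # a pretrained network here.
    return x / (1.0 + sigma ** 2)

def pnp_forward_backward(y, A, denoiser, tau, sigma, n_iters=100):
    """Plug-and-Play forward-backward splitting for f(x) = 0.5 * ||A x - y||^2.

    Each iteration applies the update from the post:
        x^{n+1} = D_sigma((Id - tau * grad f)(x^n)),
    i.e. a gradient step on the data-fidelity term followed by the denoiser,
    which stands in for the proximal operator of the regularizer g.
    """
    x = A.T @ y  # back-projection as a simple initialization
    for _ in range(n_iters):
        grad_f = A.T @ (A @ x - y)             # gradient of the data-fidelity term
        x = denoiser(x - tau * grad_f, sigma)  # gradient step, then plug-in denoiser
    return x

# Toy usage on a random linear inverse problem.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50)) / np.sqrt(30)
x_true = rng.normal(size=50)
y = A @ x_true + 0.01 * rng.normal(size=30)
tau = 1.0 / np.linalg.norm(A, 2) ** 2  # step size of the order of 1 / ||A||^2
x_hat = pnp_forward_backward(y, A, gaussian_denoiser, tau, sigma=0.05)
```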