From 330b2936f2738b0405aecc5af19503e044aad048 Mon Sep 17 00:00:00 2001
From: Thibaut Modrzyk <52449813+Tmodrzyk@users.noreply.github.com>
Date: Thu, 23 Jan 2025 23:09:13 +0100
Subject: [PATCH] Update collections/_posts/2025-01-18-GradientStepDenoiser.md

Co-authored-by: Nathan Painchaud <23144457+nathanpainchaud@users.noreply.github.com>
---
 collections/_posts/2025-01-18-GradientStepDenoiser.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/collections/_posts/2025-01-18-GradientStepDenoiser.md b/collections/_posts/2025-01-18-GradientStepDenoiser.md
index 1e9eea81..46ee9815 100755
--- a/collections/_posts/2025-01-18-GradientStepDenoiser.md
+++ b/collections/_posts/2025-01-18-GradientStepDenoiser.md
@@ -111,7 +111,7 @@ There are a lot of other algorithms using the proximal operator, but this is the
 
 # Plug-and-Play
 
-Let us consider an ill-posed linear inverse problem, for instance super-resolution, deblurring or inpainting. TV regularization was good looking 15 years ago, but now the reconstructions look cartoonish when compared to what end-to-end deep learning approach can achieve.
+Let us consider an ill-posed linear inverse problem, for instance super-resolution, deblurring or inpainting. TV regularization was good looking 15 years ago, but now the reconstructions look cartoonish when compared to what end-to-end deep learning approaches can achieve.
 
 ![Example with TV regularization](/collections/images/GradientStep/tv.jpg)