Lab 4. Image restoration

2. Algorithm Outline for Anisotropic Diffusion

\[ \frac{\partial u}{\partial t} = \mathrm{div}(D \nabla u) \]

Expand the divergence form in the equation above and make the appropriate simplification to obtain the trace-based diffusion equation.

Under the assumption that \(D\) varies slowly, it can be lifted outside the divergence operator. Lecture 14 slide 50 gives this simplification, and EDUPACK page 27 derives a scalar product for matrices based on the Frobenius norm and writes it in terms of a trace. This gives:

\[ \frac{\partial u}{\partial t} = \mathrm{div}(D \nabla u) \approx \langle D|\nabla \nabla^T \rangle u = \mathrm{tr}[D H] \]

where \(H\) is the Hessian of \(u\).
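As a sketch (not the lab implementation), one explicit Euler step of the trace-based update \(\partial u / \partial t = \mathrm{tr}[DH]\) could look like the following; the finite-difference stencils and the periodic boundary handling are my assumptions:

```python
import numpy as np

def trace_diffusion_step(u, D, dt):
    """One explicit Euler step of du/dt = tr(D H), H = Hessian of u.

    u  : 2D image, axis 0 = y, axis 1 = x
    D  : per-pixel diffusion tensor, shape (H, W, 2, 2), D[...,0,0] = Dxx
    dt : time step (assumed small enough for stability)
    """
    # Central second differences with periodic boundaries (an assumption).
    uxx = np.roll(u, -1, axis=1) - 2 * u + np.roll(u, 1, axis=1)
    uyy = np.roll(u, -1, axis=0) - 2 * u + np.roll(u, 1, axis=0)
    uxy = 0.25 * (np.roll(np.roll(u, -1, 0), -1, 1)
                  - np.roll(np.roll(u, -1, 0), 1, 1)
                  - np.roll(np.roll(u, 1, 0), -1, 1)
                  + np.roll(np.roll(u, 1, 0), 1, 1))
    # tr(D H) = Dxx*uxx + 2*Dxy*uxy + Dyy*uyy for symmetric D
    tr_DH = D[..., 0, 0] * uxx + 2 * D[..., 0, 1] * uxy + D[..., 1, 1] * uyy
    return u + dt * tr_DH
```

With \(D = I\) this reduces to the heat equation, and the periodic stencils conserve the image mean.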

What is the definition of the structure tensor and how do you obtain the diffusion tensor \(D\) from it?

Lecture 3 slide 5 and Lecture 13 slide 10 define the structure tensor as the (smoothed) outer product of the gradient with itself:

\[ T = (\nabla f) (\nabla f)^T = \begin{bmatrix} \left(\frac{\partial f}{\partial x}\right)^2 & \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \\ \frac{\partial f}{\partial y} \frac{\partial f}{\partial x} & \left(\frac{\partial f}{\partial y}\right)^2 \\ \end{bmatrix} \]

and the diffusion tensor as the matrix with the same eigenvectors but with eigenvalues changed according to

\[ \alpha_k = e^{-\lambda_k / m} \]

where \(m\) is a parameter.
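A sketch of this construction, where the 3×3 box filter is my stand-in for the Gaussian smoothing of \(T\):

```python
import numpy as np

def box_smooth(a):
    # 3x3 box filter with wrap-around; a stand-in for Gaussian smoothing of T.
    return sum(np.roll(np.roll(a, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

def diffusion_tensor(f, m):
    """Build D from the structure tensor: same eigenvectors,
    eigenvalues lambda_k remapped to alpha_k = exp(-lambda_k / m)."""
    fy, fx = np.gradient(f)                 # axis 0 = y, axis 1 = x
    Txx = box_smooth(fx * fx)
    Txy = box_smooth(fx * fy)
    Tyy = box_smooth(fy * fy)
    T = np.stack([np.stack([Txx, Txy], -1),
                  np.stack([Txy, Tyy], -1)], -2)
    lam, vec = np.linalg.eigh(T)            # per-pixel eigendecomposition
    alpha = np.exp(-lam / m)                # remapped eigenvalues
    # Recompose D = V diag(alpha) V^T
    return vec @ (alpha[..., None] * np.swapaxes(vec, -1, -2))
```

For a pure intensity ramp in x this gives \(D = \mathrm{diag}(e^{-1/m}, 1)\): diffusion is suppressed across the gradient and unchanged along it.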

What result do you expect if you apply the method to an image that does not contain any noise?

Ideally the original image would remain (which may be the case for \(m = 0\)), though small details will probably be lost. For \(m \neq 0\) and infinitely many iterations (i.e. the analytical solution, assuming convergence), the image would be completely determined by the boundary conditions and totally smoothed out.

2.1. Implementation

Which values appear to work best for your parameters? How sensitive are they to variations?

Ballpark reasonable values for SNR = 10:

  • m = 0.2
  • step = 0.01
  • iterations = 100
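To illustrate how the three parameters enter the loop, here is a simplified scalar (Perona–Malik-style) variant with conductivity \(g = e^{-|\nabla u|^2/m}\) in place of the full tensor \(D\); it is only a sketch under that substitution, not the lab implementation:

```python
import numpy as np

def denoise(u, m=0.2, step=0.01, iterations=100):
    """Simplified scalar diffusion: du/dt = div(g grad u), g = exp(-|grad u|^2/m)."""
    u = u.astype(float).copy()
    for _ in range(iterations):
        uy, ux = np.gradient(u)
        g = np.exp(-(ux**2 + uy**2) / m)   # small g at edges: less smoothing there
        # div(g grad u) via nested central differences (crude but adequate here)
        u += step * (np.gradient(g * ux, axis=1) + np.gradient(g * uy, axis=0))
    return u
```

Here m controls how strong a gradient must be before it blocks diffusion, while step and iterations together set the total diffusion time.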

2.2. Parameter tuning

Which parameters are not specified by the equations?

The only parameter that appears in the equations is \(m\).

Parameters that do not appear in the equations are the ones related to numerically solving the differential equation, i.e. step and iterations.

2.3. Evaluation

Apply the method to real images with additive noise at 3 different SNRs. How many iterations are needed to get an acceptable result for different noise levels?

With m = 0.1 and step = 0.1:

SNR   Iterations
10    50
15    18
20    10
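Noise of a prescribed SNR can be generated as below; the dB definition \(\mathrm{SNR} = 20\log_{10}(\sigma_f/\sigma_n)\) is my assumption, as the lab may define SNR differently:

```python
import numpy as np

def add_noise(f, snr_db, seed=None):
    """Add white Gaussian noise so that 20*log10(std(f)/std(noise)) = snr_db."""
    rng = np.random.default_rng(seed)
    sigma = f.std() / 10 ** (snr_db / 20)
    return f + rng.normal(0.0, sigma, f.shape)
```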

What happens if we use many more iterations than necessary? Does the result converge to the noise free image? Explain the result.

No, it converges to a constant image. With circular (periodic) boundary handling the diffusion conserves the total intensity, so the constant is the mean of the entire image; with zero padding, intensity leaks out through the borders and the result drifts towards zero.
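This can be checked numerically: isotropic diffusion (the special case \(D = I\)) with periodic boundaries conserves the total intensity, so iterating to convergence leaves only the mean:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random((16, 16))
mean0 = u.mean()
for _ in range(3000):
    # 5-point Laplacian with circular (periodic) boundary conditions
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    u += 0.2 * lap               # explicit step, stable for dt <= 0.25
# u is now (numerically) the constant image equal to the original mean
```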

Apply the method onto an image with multiplicative noise. What is the main difference in the result compared to additive noise? Explain the result.

The noise is much more disruptive and harder to remove. Because the amplitude of multiplicative noise scales with the image intensity, bright regions are corrupted far more than dark ones, and the strong noise there is easily mistaken for image structure by the diffusion tensor, so it is preserved rather than smoothed away.
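A tiny numerical illustration of why: for multiplicative noise \(f(1+n)\) the effective additive perturbation is \(f\,n\), so its standard deviation grows with the intensity (the 10x intensity ratio below is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(1)
n = rng.normal(0.0, 0.1, (100, 100))       # 10 % multiplicative noise field
dark = np.full((100, 100), 10.0)
bright = np.full((100, 100), 100.0)
noise_in_dark = (dark * (1 + n) - dark).std()        # effective noise ~ 1
noise_in_bright = (bright * (1 + n) - bright).std()  # effective noise ~ 10
```

A common remedy is to take the logarithm of the image first, which turns the multiplicative noise into (approximately) additive noise.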

3. Inpainting via Total Variation

What do the parameters \(\alpha\) and \(\lambda\) represent and what are suitable values? What number of iterations is sufficient to obtain the desired result?

Parameter     Meaning                                                               Good value
\(\alpha\)    Step size                                                             0.01
\(\lambda\)   Weight between fidelity to the original and variation in the result   2
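A sketch of one gradient-descent step for the TV functional \(\sum|\nabla u| + \tfrac{\lambda}{2}\sum \mathcal{X}(u-f)^2\); the \(\varepsilon\)-regularization of \(|\nabla u|\) and the discretization are my assumptions:

```python
import numpy as np

def tv_inpaint_step(u, f, mask, alpha=0.01, lam=2.0, eps=1e-3):
    """One descent step for TV inpainting.

    mask : 1 on observed pixels, 0 on the region to inpaint
    eps  : regularizes |grad u| to avoid division by zero (an assumption)
    """
    uy, ux = np.gradient(u)
    mag = np.sqrt(ux**2 + uy**2 + eps**2)
    # Curvature term div(grad u / |grad u|): drives down the total variation
    curv = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
    # Data term pulls u towards f, but only on the observed pixels
    return u + alpha * (curv - lam * mask * (u - f))
```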

What is the expected behavior of the algorithm in the two regions \(\Gamma\) and \(\Omega \setminus \Gamma\)?

  • \(\Gamma\): Result (more or less) constant, i.e. close to nearby input image pixels.
  • \(\Omega \setminus \Gamma\): Result close to the corresponding input image pixel.

(EXTRA) The operator \(\mathcal{X}_{\Omega \setminus \Gamma}\) selects the observed pixels. It can be interpreted in terms of a projection-like operator \(\mathcal{P}\) such that \(\mathcal{X}_{\Omega \setminus \Gamma} = \mathcal{P}^* \mathcal{P}\), where \(\mathcal{P}^*\) is the adjoint of \(\mathcal{P}\). Can you give an example of such an operator?
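One standard example (a general fact about such operators, not taken from the lab material): let \(\mathcal{P}\) keep the rows of the identity matrix indexed by the observed pixels; then \(\mathcal{P}^* = \mathcal{P}^T\) and \(\mathcal{P}^*\mathcal{P}\) is exactly the diagonal 0/1 mask \(\mathcal{X}_{\Omega\setminus\Gamma}\):

```python
import numpy as np

observed = np.array([0, 2, 3])   # example indices of Omega \ Gamma
n = 5                            # example total number of pixels
P = np.eye(n)[observed]          # row-selection matrix, shape (3, 5)
X = P.T @ P                      # diagonal 0/1 indicator of observed pixels
# P @ P.T is the identity on the observed subspace: P is a partial isometry
```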