elsa functionals

LinearResidual

template<typename data_t = real_t>
class elsa::LinearResidual

Class representing a linear residual, i.e. Ax - b with operator A and vectors x, b.

A linear residual is a vector-valued mapping $ \mathbb{R}^n\to\mathbb{R}^m $, namely $ x \mapsto Ax - b $, where A is a LinearOperator, b a constant data vector (DataContainer) and x a variable (DataContainer). This linear residual can be used as input to a Functional.
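
As an illustrative, self-contained sketch (using raw std::vector instead of elsa's LinearOperator and DataContainer, so all names here are hypothetical), the evaluation $ x \mapsto Ax - b $ for a small dense row-major matrix might look like:

```cpp
#include <vector>
#include <cstddef>

// Illustrative stand-in for LinearResidual::evaluate: computes Ax - b for a
// small dense matrix A (row-major, m x n). The real elsa class dispatches
// through LinearOperator and DataContainer instead of raw vectors.
std::vector<double> evaluateLinearResidual(const std::vector<double>& A,
                                           std::size_t m, std::size_t n,
                                           const std::vector<double>& x,
                                           const std::vector<double>& b)
{
    std::vector<double> r(m, 0.0);
    for (std::size_t i = 0; i < m; ++i) {
        for (std::size_t j = 0; j < n; ++j)
            r[i] += A[i * n + j] * x[j]; // accumulate (Ax)_i
        r[i] -= b[i];                    // subtract b: Ax - b
    }
    return r;
}
```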

Author

  • Matthias Wieczorek - initial code

  • Tobias Lasser - modularization, modernization

Template Parameters
  • data_t: data type for the domain and range of the operator, defaulting to real_t

Public Functions

LinearResidual(const DataDescriptor &descriptor)

Constructor for a trivial residual $ x \mapsto x $.

Parameters
  • [in] descriptor: describing the domain = range of the residual

LinearResidual(const DataContainer<data_t> &b)

Constructor for a simple residual $ x \mapsto x - b $.

Parameters
  • [in] b: a vector (DataContainer) that will be subtracted from x

LinearResidual(const LinearOperator<data_t> &A)

Constructor for a residual $ x \mapsto Ax $.

Parameters
  • [in] A: the LinearOperator A of the residual

LinearResidual(const LinearOperator<data_t> &A, const DataContainer<data_t> &b)

Constructor for a residual $ x \mapsto Ax - b $.

Parameters
  • [in] A: the LinearOperator A of the residual

  • [in] b: a vector (DataContainer) that will be subtracted from Ax

~LinearResidual() = default

default destructor

const DataDescriptor &getDomainDescriptor() const

return the domain descriptor of the residual

const DataDescriptor &getRangeDescriptor() const

return the range descriptor of the residual

bool hasOperator() const

return true if the residual has an operator A

bool hasDataVector() const

return true if the residual has a data vector b

const LinearOperator<data_t> &getOperator() const

return the operator A (throws if the residual has none)

const DataContainer<data_t> &getDataVector() const

return the data vector b (throws if the residual has none)

DataContainer<data_t> evaluate(const DataContainer<data_t> &x) const

evaluate the residual at x and return the result

Return

result DataContainer (in the range of the residual) containing the result of the evaluation of the residual at x

Parameters
  • [in] x: input DataContainer (in the domain of the residual)

void evaluate(const DataContainer<data_t> &x, DataContainer<data_t> &result) const

evaluate the residual at x and store in result

Parameters
  • [in] x: input DataContainer (in the domain of the residual)

  • [out] result: output DataContainer (in the range of the residual)

LinearOperator<data_t> getJacobian(const DataContainer<data_t> &x)

return the Jacobian (first derivative) of the linear residual at x. If A is set, then the Jacobian is A and this returns a copy of A. If A is not set, then an Identity operator is returned.

Return

a LinearOperator (the Jacobian)

Parameters
  • [in] x: input DataContainer (in the domain of the residual)

Private Members

std::unique_ptr<DataDescriptor> domainDesc_

Descriptor of domain.

std::unique_ptr<DataDescriptor> rangeDesc_

Descriptor of range.

std::unique_ptr<LinearOperator<data_t>> _operator = {}

the operator A, nullptr implies no operator present

std::optional<DataContainer<data_t>> _dataVector = {}

optional data vector b

Functional

template<typename data_t = real_t>
class elsa::Functional : public elsa::Cloneable<Functional<data_t>>

Abstract base class representing a functional, i.e. a mapping from vectors to scalars.

A functional maps a vector to a scalar value (e.g. mapping the output of a Residual to a scalar). Typical examples of functionals are norms or semi-norms, such as the L2 or L1 norms.

Using LinearOperators, Residuals (e.g. LinearResidual) and a Functional (e.g. LeastSquares) enables the formulation of typical terms in an OptimizationProblem.

Author

  • Matthias Wieczorek - initial code

  • Maximilian Hornung - modularization

  • Tobias Lasser - rewrite

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t

Subclassed by elsa::ConditionalRicianLikelihood< data_t >, elsa::FunctionalScalarMul< data_t >, elsa::FunctionalSum< data_t >, elsa::IndicatorBox< data_t >, elsa::IndicatorNonNegativity< data_t >, elsa::OrthogonalComposition< data_t >, elsa::SeparableSum< data_t >, elsa::SphericalPositivity< data_t >, MockFunctional1< data_t >, MockFunctional2< data_t >, MockFunctional3< data_t >

Public Functions

Functional(const DataDescriptor &domainDescriptor)

Constructor for the functional, mapping a domain vector to a scalar (without a residual)

Parameters
  • [in] domainDescriptor: describing the domain of the functional

~Functional() override = default

default destructor

const DataDescriptor &getDomainDescriptor() const

return the domain descriptor

bool isDifferentiable() const

Indicate if a functional is differentiable. The default implementation returns false. Functionals which are at least once differentiable should override this function.

bool isProxFriendly() const

Indicate if the functional has a simple-to-compute proximal.

bool hasProxDual() const

Indicate if the functional can compute the proximal of the dual.

data_t evaluate(const DataContainer<data_t> &x) const

evaluate the functional at x and return the result

Please note: after evaluating the residual at x, this method calls the method evaluateImpl that has to be overridden in derived classes to compute the functional’s value.

Return

result the scalar of the functional evaluated at x

Parameters
  • [in] x: input DataContainer (in the domain of the functional)

DataContainer<data_t> getGradient(const DataContainer<data_t> &x) const

compute the gradient of the functional at x and return the result

Please note: this method uses getGradient(x, result) to perform the actual operation.

Return

result DataContainer (in the domain of the functional) containing the result of the gradient at x.

Parameters
  • [in] x: input DataContainer (in the domain of the functional)

data_t convexConjugate(const DataContainer<data_t> &x) const

Compute the convex conjugate of the functional.

Parameters
  • [in] x: input DataContainer (in the domain of the functional)

void getGradient(const DataContainer<data_t> &x, DataContainer<data_t> &result) const

compute the gradient of the functional at x and store in result

Parameters
  • [in] x: input DataContainer (in the domain of the functional)

  • [out] result: output DataContainer (in the domain of the functional)

LinearOperator<data_t> getHessian(const DataContainer<data_t> &x) const

return the Hessian of the functional at x

Note: some derived classes might decide to use only the diagonal of the Hessian as a fast approximation!

Return

a LinearOperator (the Hessian)

Parameters
  • [in] x: input DataContainer (in the domain of the functional)

Please note: after evaluating the residual at x, this method calls the method getHessianImpl that has to be overridden in derived classes to compute the functional’s Hessian, and after that the chain rule for the residual is applied (if necessary).

DataContainer<data_t> proximal(const DataContainer<data_t> &v, SelfType_t<data_t> tau) const

compute the proximal of the given functional

Parameters
  • [in] v: input DataContainer (in the domain of the functional)

  • [in] tau: threshold/scaling parameter for proximal

void proximal(const DataContainer<data_t> &v, SelfType_t<data_t> t, DataContainer<data_t> &out) const

compute the proximal of the given functional and write the result to the output DataContainer

Parameters
  • [in] v: input DataContainer (in the domain of the functional)

  • [in] t: threshold/scaling parameter for proximal

  • [out] out: output DataContainer (in the domain of the functional)

DataContainer<data_t> proxdual(const DataContainer<data_t> &x, SelfType_t<data_t> tau) const

compute the proximal of the convex conjugate of the functional. This method can either be overridden, or by default it computes the proximal of the convex conjugate using Moreau's identity, which is given as:

\[ \operatorname{prox}_{\tau f^*}(x) = x - \tau \operatorname{prox}_{\tau^{-1}f}(\tau^{-1} x) \]
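
As a numerical illustration of this identity (standalone code, not the elsa API): for $ f = |\cdot| $, whose proximal is soft-thresholding and whose conjugate $ f^* $ is the indicator of $ [-1, 1] $, the right-hand side of Moreau's identity must reduce to the projection onto $ [-1, 1] $:

```cpp
#include <algorithm>
#include <cmath>

// prox of f(x) = |x| with step s: soft-thresholding
double proxAbs(double x, double s)
{
    return std::copysign(std::max(std::abs(x) - s, 0.0), x);
}

// Moreau's identity: prox_{tau f*}(x) = x - tau * prox_{f/tau}(x/tau).
// For f = |.|, the conjugate f* is the indicator of [-1, 1], so this
// must equal the projection (clamp) onto [-1, 1].
double proxDualViaMoreau(double x, double tau)
{
    return x - tau * proxAbs(x / tau, 1.0 / tau);
}
```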

void proxdual(const DataContainer<data_t> &x, SelfType_t<data_t> tau, DataContainer<data_t> &out) const

compute the proximal of the convex conjugate of the functional

Protected Functions

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

data_t evaluateImpl(const DataContainer<data_t> &Rx) const = 0

the evaluateImpl method that has to be overridden in derived classes

Please note: the evaluation of the residual is already performed in evaluate, so this method only has to compute the functional’s value itself.

Return

the evaluated functional

Parameters
  • [in] Rx: the residual evaluated at x

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const = 0

the getGradientImpl method that has to be overridden in derived classes

Please note: the evaluation of the residual is already performed in getGradient, as well as the application of the chain rule. This method here only has to compute the gradient of the functional itself, in an in-place manner (to avoid unnecessary DataContainers).

Parameters
  • [in] Rx: the residual evaluated at x

  • [inout] out: the evaluated gradient of the functional

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const = 0

the getHessianImpl method that has to be overridden in derived classes

Please note: the evaluation of the residual is already performed in getHessian, as well as the application of the chain rule. This method here only has to compute the Hessian of the functional itself.

Return

the LinearOperator representing the Hessian of the functional

Parameters
  • [in] Rx: the residual evaluated at x

Protected Attributes

std::unique_ptr<DataDescriptor> _domainDescriptor

the data descriptor of the domain of the functional

Composition of Functionals

template<class data_t>
class elsa::FunctionalSum : public elsa::Functional<data_t>

Class representing a sum of two functionals.

\[ f(x) = h(x) + g(x) \]

The gradient at $x$ is given as:

\[ \nabla f(x) = \nabla h(x) + \nabla g(x) \]

and finally the hessian is given by:

\[ \nabla^2 f(x) = \nabla^2 h(x) + \nabla^2 g(x) \]

The gradient and Hessian are only valid if the functionals are (twice) differentiable. The operator+ is overloaded for functionals, to conveniently create this class. It should not be necessary to create it explicitly.
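
The term-by-term rules above can be sketched in a few lines of standalone C++ (a hypothetical 1D illustration, not the elsa implementation):

```cpp
#include <functional>

// Illustrative 1D sketch of a functional sum: f = h + g evaluates and
// differentiates term by term.
struct Sum1D {
    std::function<double(double)> h, g;   // the two functionals
    std::function<double(double)> dh, dg; // their derivatives
    double evaluate(double x) const { return h(x) + g(x); }
    double gradient(double x) const { return dh(x) + dg(x); }
};

// Example instance: h(x) = x^2, g(x) = 3x.
Sum1D makeExampleSum()
{
    return Sum1D{
        [](double x) { return x * x; },
        [](double x) { return 3 * x; },
        [](double x) { return 2 * x; },
        [](double) { return 3.0; },
    };
}
```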

Public Functions

FunctionalSum(const Functional<data_t> &lhs, const Functional<data_t> &rhs)

Construct from two functionals.

FunctionalSum(const FunctionalSum<data_t>&) = delete

Make deletion of copy constructor explicit.

FunctionalSum(FunctionalSum<data_t> &&other)

Default Move constructor.

FunctionalSum &operator=(const FunctionalSum<data_t>&) = delete

Make deletion of copy assignment explicit.

FunctionalSum &operator=(FunctionalSum<data_t> &&other) noexcept

Default Move assignment.

Private Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

evaluate the functional as $g(x) + h(x)$

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

evaluate the gradient as: $\nabla g(x) + \nabla h(x)$

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

construct the hessian as: $\nabla^2 g(x) + \nabla^2 h(x)$

FunctionalSum<data_t> *cloneImpl() const override

Implement polymorphic clone.

bool isEqual(const Functional<data_t> &other) const override

Implement polymorphic equality.

Private Members

std::unique_ptr<Functional<data_t>> lhs_ = {}

Store the left hand side functional.

std::unique_ptr<Functional<data_t>> rhs_ = {}

Store the right hand side functional.

template<class data_t>
class elsa::FunctionalScalarMul : public elsa::Functional<data_t>

Class representing a functional with a scalar multiplication:

\[ f(x) = \lambda * g(x) \]

The gradient at $x$ is given as:

\[ \nabla f(x) = \lambda \nabla g(x) \]

and finally the hessian is given by:

\[ \nabla^2 f(x) = \lambda \nabla^2 g(x) \]

The gradient and Hessian are only valid if the functional is (twice) differentiable. The operator* is overloaded for scalar values with functionals, to conveniently create this class. It should not be necessary to create it explicitly.

Public Functions

FunctionalScalarMul(const Functional<data_t> &fn, SelfType_t<data_t> scalar)

Construct functional from other functional and scalar.

FunctionalScalarMul(const FunctionalScalarMul<data_t>&) = delete

Make deletion of copy constructor explicit.

FunctionalScalarMul(FunctionalScalarMul<data_t> &&other)

Implement the move constructor.

FunctionalScalarMul &operator=(const FunctionalScalarMul<data_t>&) = delete

Make deletion of copy assignment explicit.

FunctionalScalarMul &operator=(FunctionalScalarMul<data_t> &&other) noexcept

Implement the move assignment operator.

~FunctionalScalarMul() override = default

Default destructor.

bool isProxFriendly() const override

Indicate if the functional has a simple-to-compute proximal.

data_t convexConjugate(const DataContainer<data_t> &x) const override

The convex conjugate of a scaled function $f(x) = \lambda g(x)$ is given as:

\[ f^*(x) = \lambda g^*(\frac{x}{\lambda}) \]
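
This scaling rule can be spot-checked numerically (a standalone sketch, not the elsa implementation) by brute-forcing the supremum in the definition $ f^*(y) = \sup_x \, xy - f(x) $ for $ g(x) = 0.5 x^2 $, which is its own conjugate:

```cpp
#include <algorithm>
#include <cmath>

// Brute-force the conjugate of f = lambda * g with g(x) = 0.5 x^2 over a
// coarse grid; a crude sketch, accurate only to the grid resolution.
double conjugateBruteForce(double y, double lambda)
{
    double best = -1e300;
    for (double x = -100.0; x <= 100.0; x += 1e-3)
        best = std::max(best, x * y - lambda * 0.5 * x * x);
    return best;
}

// The scaling rule: f*(y) = lambda * g*(y / lambda), with g* = g here.
double conjugateByScalingRule(double y, double lambda)
{
    auto gConj = [](double z) { return 0.5 * z * z; };
    return lambda * gConj(y / lambda);
}
```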

DataContainer<data_t> proximal(const DataContainer<data_t> &v, SelfType_t<data_t> t) const override

compute the proximal of the given functional

Parameters
  • [in] v: input DataContainer (in the domain of the functional)

  • [in] t: threshold/scaling parameter for proximal

void proximal(const DataContainer<data_t> &v, SelfType_t<data_t> t, DataContainer<data_t> &out) const override

compute the proximal of the given functional and write the result to the output DataContainer

Parameters
  • [in] v: input DataContainer (in the domain of the functional)

  • [in] t: threshold/scaling parameter for proximal

  • [out] out: output DataContainer (in the domain of the functional)

Private Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

Evaluate as $\lambda * g(x)$.

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

Evaluate gradient as: $\lambda * \nabla g(x)$.

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

Construct hessian as: $\lambda * \nabla^2 g(x)$.

FunctionalScalarMul<data_t> *cloneImpl() const override

Implementation of polymorphic clone.

bool isEqual(const Functional<data_t> &other) const override

Implementation of polymorphic equality.

Private Members

std::unique_ptr<Functional<data_t>> fn_ = {}

Store other functional $g$.

data_t scalar_

The scalar.

Loss functionals

Loss functionals are often used as data fidelity terms. They are specific versions of certain functionals, but are important enough to receive special attention.

LeastSquares

template<typename data_t = real_t>
class elsa::LeastSquares : public elsa::Functional<data_t>

The least squares functional / loss functional.

The least squares loss is given by:

\[ \frac{1}{2} \| A(x) - b \|_2^2 \]

i.e. the squared $\ell^2$ norm of the linear residual.
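
A standalone dense sketch of this loss and its gradient $ A^T(Ax - b) $ (illustrative only; the elsa class works on LinearOperator and DataContainer, and all names here are hypothetical):

```cpp
#include <vector>
#include <cstddef>

struct LsResult {
    double value;              // 0.5 * ||Ax - b||_2^2
    std::vector<double> grad;  // A^T (Ax - b)
};

// Standalone sketch of the least squares loss and its gradient for a small
// dense row-major m x n matrix A.
LsResult leastSquares(const std::vector<double>& A, std::size_t m, std::size_t n,
                      const std::vector<double>& x, const std::vector<double>& b)
{
    std::vector<double> r(m, 0.0); // residual Ax - b
    for (std::size_t i = 0; i < m; ++i) {
        for (std::size_t j = 0; j < n; ++j)
            r[i] += A[i * n + j] * x[j];
        r[i] -= b[i];
    }
    double value = 0.0;
    std::vector<double> grad(n, 0.0);
    for (std::size_t i = 0; i < m; ++i) {
        value += 0.5 * r[i] * r[i];
        for (std::size_t j = 0; j < n; ++j)
            grad[j] += A[i * n + j] * r[i]; // accumulate A^T r
    }
    return {value, grad};
}
```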

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t

Public Functions

LeastSquares(const LinearOperator<data_t> &A, const DataContainer<data_t> &b)

Constructor for the l2 norm (squared) functional with a LinearResidual.

Parameters
  • [in] A: LinearOperator to use in the residual

  • [in] b: data to use in the linear residual

LeastSquares(const LeastSquares<data_t>&) = delete

make copy constructor deletion explicit

~LeastSquares() override = default

default destructor

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the l2 norm (squared)

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

LeastSquares<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

WeightedLeastSquares

template<typename data_t = real_t>
class elsa::WeightedLeastSquares : public elsa::Functional<data_t>

The weighted least squares functional / loss functional.

The weighted least squares loss is given by:

\[ \frac{1}{2} \| A(x) - b \|_W^2 \]

i.e. the squared $\ell^2$ norm of the linear residual, weighted component-wise by the given weights.

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t

Public Functions

WeightedLeastSquares(const LinearOperator<data_t> &A, const DataContainer<data_t> &b, const DataContainer<data_t> &weights)

Constructor for the weighted l2 norm (squared) functional with a LinearResidual.

Parameters
  • [in] A: LinearOperator to use in the residual

  • [in] b: data to use in the linear residual

  • [in] weights: per-component weights for the residual

WeightedLeastSquares(const WeightedLeastSquares<data_t>&) = delete

make copy constructor deletion explicit

~WeightedLeastSquares() override = default

default destructor

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the l2 norm (squared)

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

WeightedLeastSquares<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

EmissionLogLikelihood

template<typename data_t = real_t>
class elsa::EmissionLogLikelihood : public elsa::Functional<data_t>

Class representing a negative log-likelihood functional for emission tomography.

The EmissionLogLikelihood functional evaluates as $ \sum_{i=1}^n (x_i + r_i) - y_i\log(x_i + r_i) $, with $ y=(y_i) $ denoting the measurements, $ r=(r_i) $ denoting the mean number of background events, and $ x=(x_i) $.

Typically, $ x $ is wrapped in a LinearResidual without a data vector, i.e. $ x \mapsto Ax $.
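
A direct, standalone evaluation of this sum (with x standing for the already-applied residual, e.g. $ Ax $; illustrative only, not the elsa implementation):

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

// Direct evaluation of the emission log-likelihood functional
// sum_i (x_i + r_i) - y_i * log(x_i + r_i).
double emissionLogLikelihood(const std::vector<double>& x,
                             const std::vector<double>& y,
                             const std::vector<double>& r)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        sum += (x[i] + r[i]) - y[i] * std::log(x[i] + r[i]);
    return sum;
}
```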

Author

  • Matthias Wieczorek - initial code

  • Maximilian Hornung - modularization

  • Tobias Lasser - rewrite

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t

Public Functions

EmissionLogLikelihood(const LinearOperator<data_t> &A, const DataContainer<data_t> &y)

Constructor for emission log-likelihood, using the linear operator A and the measurements y.

Parameters
  • [in] A: the linear operator to apply to x

  • [in] y: the measurement data vector

EmissionLogLikelihood(const LinearOperator<data_t> &A, const DataContainer<data_t> &y, const DataContainer<data_t> &r)

Constructor for emission log-likelihood, using the linear operator A, the measurements y, and the background events r.

Parameters
  • [in] A: the linear operator to apply to x

  • [in] y: the measurement data vector

  • [in] r: the background event data vector

EmissionLogLikelihood(const EmissionLogLikelihood<data_t>&) = delete

make copy constructor deletion explicit

~EmissionLogLikelihood() override = default

default destructor

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the emission log-likelihood

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

EmissionLogLikelihood<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

Private Members

std::unique_ptr<LinearOperator<data_t>> A_ = {}

optional linear operator to apply to x

DataContainer<data_t> y_

the measurement data vector y

std::optional<DataContainer<data_t>> r_ = {}

the background event data vector r

TransmissionLogLikelihood

template<typename data_t = real_t>
class elsa::TransmissionLogLikelihood : public elsa::Functional<data_t>

Class representing a negative log-likelihood functional for transmission tomography.

The TransmissionLogLikelihood functional evaluates as $ \sum_{i=1}^n (b_i \exp(-x_i) + r_i) - y_i\log(b_i \exp(-x_i) + r_i) $, with $ b=(b_i) $ denoting the mean number of photons per detector (blank scan), $ y=(y_i) $ denoting the measurements, $ r=(r_i) $ denoting the mean number of background events, and $ x=(x_i) $.

Author

  • Matthias Wieczorek - initial code

  • Maximilian Hornung - modularization

  • Tobias Lasser - rewrite

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t

Typically, $ x $ is wrapped in a LinearResidual without a data vector, i.e. $ x \mapsto Ax $.
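
A direct, standalone evaluation of this sum (illustrative only, not the elsa implementation):

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

// Direct evaluation of the transmission log-likelihood functional
// sum_i (b_i * exp(-x_i) + r_i) - y_i * log(b_i * exp(-x_i) + r_i).
double transmissionLogLikelihood(const std::vector<double>& x,
                                 const std::vector<double>& y,
                                 const std::vector<double>& b,
                                 const std::vector<double>& r)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double mean = b[i] * std::exp(-x[i]) + r[i];
        sum += mean - y[i] * std::log(mean);
    }
    return sum;
}
```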

Public Functions

TransmissionLogLikelihood(const LinearOperator<data_t> &A, const DataContainer<data_t> &y, const DataContainer<data_t> &b)

Constructor for transmission log-likelihood, using y and b, and a residual as input.

Parameters
  • [in] A: linear operator to apply to x

  • [in] y: the measurement data vector

  • [in] b: the blank scan data vector

TransmissionLogLikelihood(const LinearOperator<data_t> &A, const DataContainer<data_t> &y, const DataContainer<data_t> &b, const DataContainer<data_t> &r)

Constructor for transmission log-likelihood, using y, b, and r, and a residual as input.

Parameters
  • [in] A: linear operator to apply to x

  • [in] y: the measurement data vector

  • [in] b: the blank scan data vector

  • [in] r: the background event data vector

TransmissionLogLikelihood(const TransmissionLogLikelihood<data_t>&) = delete

make copy constructor deletion explicit

~TransmissionLogLikelihood() override = default

default destructor

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the transmission log-likelihood

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

TransmissionLogLikelihood<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

Private Members

std::unique_ptr<LinearOperator<data_t>> A_ = {}

optional linear operator to apply to x

DataContainer<data_t> y_

the measurement data vector y

DataContainer<data_t> b_

the blank scan data vector b

std::optional<DataContainer<data_t>> r_ = {}

the background event data vector r

L1Loss

template<typename data_t = real_t>
class elsa::L1Loss : public elsa::Functional<data_t>

L1 loss functional, modeling a Laplacian noise distribution.

The L1 loss is given by:

\[ \| A(x) - b \|_1 \]

i.e. the $\ell^1$ norm of the linear residual / error.

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t

Public Functions

L1Loss(const LinearOperator<data_t> &A, const DataContainer<data_t> &b)

Constructor for the l1 loss functional.

Parameters
  • [in] A: LinearOperator to use in the residual

  • [in] b: data to use in the linear residual

L1Loss(const L1Loss<data_t>&) = delete

make copy constructor deletion explicit

~L1Loss() override = default

default destructor

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the l1 loss

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

L1Loss<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

Norms

Norms are another subclass of functionals. A norm is a functional $f : X \to \mathbb{R}$ with the following additional properties, for all $x, y \in X$ and scalars $s \in \mathbb{R}$:

  • the triangle inequality holds, i.e. $f(x + y) \leq f(x) + f(y)$

  • absolute homogeneity: $f(sx) = |s| f(x)$

  • definiteness: $f(x) = 0$ if and only if $x = 0$

From these it also follows that the result of a norm is always non-negative.
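
These properties can be spot-checked numerically, e.g. for the l1 norm (a standalone sketch):

```cpp
#include <vector>
#include <cmath>

// l1 norm: sum of absolute values.
double l1Norm(const std::vector<double>& x)
{
    double sum = 0.0;
    for (double xi : x)
        sum += std::abs(xi);
    return sum;
}
```

For example, with $x = (1, -2)$ and $y = (3, 4)$, absolute homogeneity with $s = -3$ and the triangle inequality for $x + y = (4, 2)$ both hold.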

L1Norm

template<typename data_t = real_t>
class elsa::L1Norm : public elsa::Functional<data_t>

Class representing the l1 norm functional.

The l1 norm functional evaluates to $ \sum_{i=1}^n |x_i| $ for $ x=(x_i)_{i=1}^n $. Please note that it is not differentiable, hence getGradient and getHessian will throw exceptions.

Author

  • Matthias Wieczorek - initial code

  • Maximilian Hornung - modularization

  • Tobias Lasser - modernization

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t

Public Functions

L1Norm(const DataDescriptor &domainDescriptor)

Constructor for the l1 norm functional, mapping domain vector to a scalar (without a residual)

Parameters
  • [in] domainDescriptor: describing the domain of the functional

L1Norm(const L1Norm<data_t>&) = delete

make copy constructor deletion explicit

~L1Norm() override = default

default destructor

data_t convexConjugate(const DataContainer<data_t> &x) const override

The convex conjugate of the l1 norm is the indicator function of the $\ell^\infty$ unit ball:

\[ \mathbb{I}_{\{\|\cdot\|_{\infty} \leq 1\}}(x) = \begin{cases} 0, & \text{if } \|x\|_{\infty} \leq 1 \\ \infty, & \text{otherwise} \end{cases} \]

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the l1 norm

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

L1Norm<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

WeightedL1Norm

template<typename data_t = real_t>
class elsa::WeightedL1Norm : public elsa::Functional<data_t>

Class representing a weighted l1 norm functional.

The weighted l1 norm functional evaluates to $ \| x \|_{w,1} = \sum_{i=1}^n w_{i} |x_{i}| $, where $ w_{i} \geq 0 $.

Author

  • Andi Braimllari - initial code

Template Parameters
  • data_t: data type for the domain of the functional, defaulting to real_t

Public Functions

WeightedL1Norm(const DataContainer<data_t> &weightingOp)

Constructor for the weighted l1 norm, mapping domain vector to a scalar (without a residual)

Parameters
  • [in] weightingOp: container of the weights

WeightedL1Norm(const WeightedL1Norm<data_t>&) = delete

make copy constructor deletion explicit

~WeightedL1Norm() override = default

default destructor

const DataContainer<data_t> &getWeightingOperator() const

returns the weighting operator

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the weighted l1 norm

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

WeightedL1Norm<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

Private Members

DataContainer<data_t> _weightingOp

the weighting operator

L2Squared

template<typename data_t = real_t>
class elsa::L2Squared : public elsa::Functional<data_t>

Class representing the squared l2 norm functional.

The l2 norm (squared) functional evaluates to $ 0.5 * \sum_{i=1}^n x_i^2 $ for $ x=(x_i)_{i=1}^n $.

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t

Public Functions

L2Squared(const DataDescriptor &domainDescriptor)

Constructor for the l2 norm (squared) functional, mapping domain vector to a scalar (without a residual)

Parameters
  • [in] domainDescriptor: describing the domain of the functional

L2Squared(const DataContainer<data_t> &b)

Constructor for the l2 norm (squared) functional with a LinearResidual.

Parameters
  • [in] b: data to use in the linear residual

L2Squared(const L2Squared<data_t>&) = delete

make copy constructor deletion explicit

~L2Squared() override = default

default destructor

DataContainer<data_t> proximal(const DataContainer<data_t> &x, SelfType_t<data_t> tau) const override

The proximal of the squared L2 norm is given as:

\[ \operatorname{prox}_{\tau f}(x) = \frac{x + 2 \tau b}{1 + 2 \tau} \]

data_t convexConjugate(const DataContainer<data_t> &x) const override

The convex conjugate for the squared l2 norm is given as:

  • $ f^*(x) = \frac{1}{4} ||x||_2^2 $, if no translation is present

  • $ f^*(x) = \frac{1}{4} ||x||_2^2 + \langle x, b \rangle $, if a translation is present

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the l2 norm (squared)

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

L2Squared<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

L2Reg

template<typename data_t = real_t>
class elsa::L2Reg : public elsa::Functional<data_t>

Class representing a L2 regularization term with an optional linear operator.

This functional evaluates to $ 0.5 * || A(x) ||_2^2 $. L2Reg should be used when L2Squared is not sufficient, i.e. when a linear operator $ A $ is necessary.

Note that the proximal operator is not analytically computable in this case.

See

L2Squared

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t

Public Functions

L2Reg(const DataDescriptor &domain)

Constructor for the l2 regularization functional without data and without a linear operator, i.e. $ A = I $ and $ b = 0 $.

Parameters
  • [in] domain: describing the domain of the functional

L2Reg(const LinearOperator<data_t> &A)

Constructor for the l2 regularization functional with a linear operator.

Parameters
  • [in] A: linear operator to be used

L2Reg(const L2Reg<data_t>&) = delete

make copy constructor deletion explicit

~L2Reg() override = default

default destructor

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the l2 norm (squared)

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

L2Reg<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

LInfNorm

template<typename data_t = real_t>
class elsa::LInfNorm : public elsa::Functional<data_t>

Class representing the maximum norm functional (l infinity).

The linf / max norm functional evaluates to $ \max_{i=1,\ldots,n} |x_i| $ for $ x=(x_i)_{i=1}^n $. Please note that it is not differentiable, hence getGradient and getHessian will throw exceptions.

Author

  • Matthias Wieczorek - initial code

  • Maximilian Hornung - modularization

  • Tobias Lasser - modernization

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t
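The evaluation $ \max_{i=1,\ldots,n} |x_i| $ can be sketched standalone (plain C++, not the elsa API; `linf_norm` is an illustrative name):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Evaluate the maximum norm: the largest absolute entry of x.
double linf_norm(const std::vector<double>& x)
{
    double m = 0.0;
    for (double xi : x)
        m = std::max(m, std::abs(xi));
    return m;
}
```

For example, `linf_norm({1.0, -5.0, 3.0})` yields `5.0`.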

Public Functions

LInfNorm(const DataDescriptor &domainDescriptor)

Constructor for the linf norm functional, mapping domain vector to scalar (without a residual)

Parameters
  • [in] domainDescriptor: describing the domain of the functional

LInfNorm(const LInfNorm<data_t>&) = delete

make copy constructor deletion explicit

~LInfNorm() override = default

default destructor

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the linf norm

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

LInfNorm<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

L0PseudoNorm

template<typename data_t = real_t>
class elsa::L0PseudoNorm : public elsa::Functional<real_t>

Class representing the l0 pseudo-norm functional.

The l0 pseudo-norm functional evaluates to $ \sum_{i=1}^n 1_{x_{i} \neq 0} $ for $ x=(x_i)_{i=1}^n $. Please note that it is not differentiable, hence getGradient and getHessian will throw exceptions.

Author

Andi Braimllari - initial code

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t
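The evaluation $ \sum_{i=1}^n 1_{x_{i} \neq 0} $ simply counts the non-zero entries, which can be sketched standalone (plain C++, not the elsa API; `l0_pseudo_norm` is an illustrative name):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Evaluate the l0 pseudo-norm: the number of non-zero entries of x.
std::size_t l0_pseudo_norm(const std::vector<double>& x)
{
    std::size_t count = 0;
    for (double xi : x)
        if (xi != 0.0)
            ++count;
    return count;
}
```

For example, `l0_pseudo_norm({0.0, 2.0, 0.0, -1.0})` yields `2`.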

Public Functions

L0PseudoNorm(const DataDescriptor &domainDescriptor)

Constructor for the l0 pseudo-norm functional, mapping domain vector to a scalar (without a residual)

Parameters
  • [in] domainDescriptor: describing the domain of the functional

L0PseudoNorm(const L0PseudoNorm<data_t>&) = delete

make copy constructor deletion explicit

~L0PseudoNorm() override = default

default destructor

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the l0 pseudo-norm

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t>&) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

auto cloneImpl() const -> L0PseudoNorm<data_t>* override

implement the polymorphic clone operation

auto isEqual(const Functional<data_t> &other) const -> bool override

implement the polymorphic comparison operation

Other functionals

Huber

template<typename data_t = real_t>
class elsa::Huber : public elsa::Functional<real_t>

Class representing the Huber loss.

The Huber loss evaluates to $ \sum_{i=1}^n \begin{cases} \frac{1}{2} x_i^2 & \text{for } |x_i| \leq \delta \\ \delta\left(|x_i| - \frac{1}{2}\delta\right) & \text{else} \end{cases} $ for $ x=(x_i)_{i=1}^n $ and a cut-off parameter $ \delta $.

Reference: https://doi.org/10.1214%2Faoms%2F1177703732

Author

  • Matthias Wieczorek - initial code

  • Maximilian Hornung - modularization

  • Tobias Lasser - modernization

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t
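The piecewise definition above (quadratic near zero, linear beyond the cut-off $ \delta $) can be sketched standalone (plain C++, not the elsa API; `huber` is an illustrative name):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Evaluate the Huber loss with cut-off parameter delta:
// quadratic for |x_i| <= delta, linear otherwise.
double huber(const std::vector<double>& x, double delta)
{
    double sum = 0.0;
    for (double xi : x) {
        double a = std::abs(xi);
        sum += (a <= delta) ? 0.5 * xi * xi
                            : delta * (a - 0.5 * delta);
    }
    return sum;
}
```

For example, with $ \delta = 1 $ the entry 0.5 contributes $ 0.5 \cdot 0.25 = 0.125 $ and the entry 3 contributes $ 1 \cdot (3 - 0.5) = 2.5 $.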

Public Functions

Huber(const DataDescriptor &domainDescriptor, real_t delta = static_cast<real_t>(1e-6))

Constructor for the Huber functional, mapping domain vector to scalar (without a residual)

Parameters
  • [in] domainDescriptor: describing the domain of the functional

  • [in] delta: parameter for linear/square cutoff (defaults to 1e-6)

Huber(const Huber<data_t>&) = delete

make copy constructor deletion explicit

~Huber() override = default

default destructor

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the Huber loss

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

Huber<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

Private Members

data_t delta_

the cut-off delta

PseudoHuber

template<typename data_t = real_t>
class elsa::PseudoHuber : public elsa::Functional<real_t>

Class representing the Pseudohuber norm.

The Pseudohuber norm evaluates to $ \sum_{i=1}^n \delta \left( \sqrt{1 + (x_i / \delta)^2} - 1 \right) $ for $ x=(x_i)_{i=1}^n $ and a slope parameter $ \delta $.

Author

  • Matthias Wieczorek - initial code

  • Maximilian Hornung - modularization

  • Tobias Lasser - modernization, fixes

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t

Reference: https://doi.org/10.1109%2F83.551699
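The formula above, a smooth (everywhere differentiable) approximation of the Huber loss, can be sketched standalone (plain C++, not the elsa API; `pseudo_huber` is an illustrative name):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Evaluate the Pseudohuber norm with slope parameter delta,
// following the documented formula sum delta * (sqrt(1 + (x_i/delta)^2) - 1).
double pseudo_huber(const std::vector<double>& x, double delta)
{
    double sum = 0.0;
    for (double xi : x) {
        double r = xi / delta;
        sum += delta * (std::sqrt(1.0 + r * r) - 1.0);
    }
    return sum;
}
```

For example, with $ \delta = 1 $ a single entry 3 evaluates to $ \sqrt{10} - 1 $.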

Public Functions

PseudoHuber(const DataDescriptor &domainDescriptor, real_t delta = static_cast<real_t>(1))

Constructor for the Pseudohuber functional, mapping domain vector to scalar (without a residual)

Parameters
  • [in] domainDescriptor: describing the domain of the functional

  • [in] delta: parameter for linear slope (defaults to 1)

PseudoHuber(const PseudoHuber<data_t>&) = delete

make copy constructor deletion explicit

~PseudoHuber() override = default

default destructor

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the Pseudohuber norm

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

PseudoHuber<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

Private Members

data_t delta_

the slope delta

Quadric

template<typename data_t = real_t>
class elsa::Quadric : public elsa::Functional<real_t>

Class representing a quadric functional.

The Quadric functional evaluates to $ \frac{1}{2} x^tAx - x^tb $ for a symmetric positive definite operator A and a vector b.

Please note: contrary to other functionals, Quadric does not allow wrapping an explicit residual.

Author

  • Matthias Wieczorek - initial code

  • Maximilian Hornung - modularization

  • Tobias Lasser - modernization

  • Nikola Dinev - add functionality

Template Parameters
  • data_t: data type for the domain of the residual of the functional, defaulting to real_t
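The evaluation $ \frac{1}{2} x^tAx - x^tb $ can be sketched standalone, assuming a dense row-major symmetric positive definite matrix in place of the LinearOperator (plain C++, not the elsa API; `quadric` is an illustrative name):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Evaluate 0.5 * x^T A x - x^T b for a dense row-major n x n matrix A
// (assumed symmetric positive definite) and vectors x, b of length n.
double quadric(const std::vector<double>& A, std::size_t n,
               const std::vector<double>& x, const std::vector<double>& b)
{
    double val = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        // compute the i-th entry of A x
        double Ax_i = 0.0;
        for (std::size_t j = 0; j < n; ++j)
            Ax_i += A[i * n + j] * x[j];
        val += 0.5 * x[i] * Ax_i - x[i] * b[i];
    }
    return val;
}
```

The gradient of this expression is the linear residual $ Ax - b $, which is what getGradientExpression exposes.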

Public Functions

Quadric(const LinearOperator<data_t> &A, const DataContainer<data_t> &b)

Constructor for the Quadric functional, using operator A and vector b (no residual).

Parameters
  • [in] A: the operator (has to be symmetric positive definite)

  • [in] b: the data vector

Quadric(const LinearOperator<data_t> &A)

Constructor for the Quadric functional $ \frac{1}{2} x^tAx $ (trivial data vector)

Parameters
  • [in] A: the operator (has to be symmetric positive definite)

Quadric(const DataContainer<data_t> &b)

Constructor for the Quadric functional $ \frac{1}{2} x^tx - x^tb $ (trivial operator)

Parameters
  • [in] b: the data vector

Quadric(const DataDescriptor &domainDescriptor)

Constructor for the Quadric functional $ \frac{1}{2} x^tx $ (trivial operator and data vector)

Parameters
  • [in] domainDescriptor: the descriptor of x

Quadric(const Quadric<data_t>&) = delete

make copy constructor deletion explicit

~Quadric() override = default

default destructor

const LinearResidual<data_t> &getGradientExpression() const

returns the residual $ Ax - b $, which also corresponds to the gradient of the functional

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

the evaluation of the Quadric functional

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

the computation of the gradient (in place)

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

the computation of the Hessian

Quadric<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

Private Members

LinearResidual<data_t> linResidual_

storing A,b in a linear residual

ConstantFunctional

template<typename data_t = real_t>
class elsa::ConstantFunctional : public elsa::Functional<real_t>

Constant functional. This functional maps all input values to a constant scalar value.

Public Functions

ConstantFunctional(const DataDescriptor &descriptor, SelfType_t<data_t> constant)

Constructor for the constant functional, mapping domain vector to a scalar (without a residual)

ConstantFunctional(const ConstantFunctional<data_t>&) = delete

make copy constructor deletion explicit

~ConstantFunctional() override = default

default destructor

data_t getConstant() const

Return the constant of the functional.

data_t convexConjugate(const DataContainer<data_t> &x) const override

The convex conjugate for the constant or zero functional is

\[ f^*(x) = \begin{cases} -c, & \text{if } x = 0 \\ \infty, & \text{otherwise} \end{cases} \]
However, in algorithms like PDHG, this usually results in inf values, which is not desirable. Hence, the following penalisation is used:
\[ f^*(x) = \sum \max(x, 0) \]

DataContainer<data_t> proximal(const DataContainer<data_t> &v, [[maybe_unused]] SelfType_t<data_t> t) const override

The proximal for any constant function is simply the identity.

void proximal(const DataContainer<data_t> &v, [[maybe_unused]] SelfType_t<data_t> t, DataContainer<data_t> &out) const override

The proximal for any constant function is simply the identity.

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

Return the constant value.

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

The gradient operator is the ZeroOperator, hence out is set to 0.

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

There does not exist a hessian, this will throw if called.

ConstantFunctional<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

template<typename data_t = real_t>
class elsa::ZeroFunctional : public elsa::Functional<real_t>

Zero functional. This functional maps all input values to zero.

Public Functions

ZeroFunctional(const DataDescriptor &descriptor)

Constructor for the zero functional, mapping domain vector to a scalar (without a residual)

ZeroFunctional(const ConstantFunctional<data_t>&) = delete

make copy constructor deletion explicit

~ZeroFunctional() override = default

default destructor

data_t convexConjugate(const DataContainer<data_t> &x) const override

The convex conjugate for the constant or zero functional is

\[ f^*(x) = \begin{cases} -c, & \text{if } x = 0 \\ \infty, & \text{otherwise} \end{cases} \]
However, in algorithms like PDHG, this usually results in inf values, which is not desirable. Hence, the following penalisation is used:
\[ f^*(x) = \sum \max(x, 0) \]

DataContainer<data_t> proximal(const DataContainer<data_t> &v, [[maybe_unused]] SelfType_t<data_t> t) const override

The proximal for any constant function is simply the identity.

void proximal(const DataContainer<data_t> &v, [[maybe_unused]] SelfType_t<data_t> t, DataContainer<data_t> &out) const override

The proximal for any constant function is simply the identity.

Protected Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

Return the constant value (zero).

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

The gradient operator is the ZeroOperator, hence out is set to 0.

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

There does not exist a hessian, this will throw if called.

ZeroFunctional<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Functional<data_t> &other) const override

implement the polymorphic comparison operation

Indicator Functionals

template<class data_t>
class elsa::IndicatorBox : public elsa::Functional<data_t>

Indicator function for some box shaped set.

The indicator function with the lower bound $a$ and the upper bound $b$ is given by:

\[ f(x) = \begin{cases} 0 & \text{if } a \leq x \leq b \text{ everywhere}, \\ \infty & \text{else} \end{cases} \]
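The evaluation can be sketched standalone; since the proximal of a box indicator is the Euclidean projection onto the box, it reduces to element-wise clamping, independent of the step size (plain C++, not the elsa API; the function names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Evaluate the box indicator: 0 if every entry lies in [lower, upper],
// infinity otherwise.
double indicator_box(const std::vector<double>& x, double lower, double upper)
{
    for (double xi : x)
        if (xi < lower || xi > upper)
            return std::numeric_limits<double>::infinity();
    return 0.0;
}

// The proximal of the box indicator is the projection onto the box,
// i.e. element-wise clamping (the step size plays no role).
std::vector<double> indicator_box_prox(std::vector<double> x,
                                       double lower, double upper)
{
    for (double& xi : x)
        xi = std::clamp(xi, lower, upper);
    return x;
}
```

For example, `indicator_box_prox({-1.0, 2.0}, 0.0, 1.0)` yields `{0.0, 1.0}`.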

Public Functions

IndicatorBox(const DataDescriptor &desc)

Construct indicator function with $-\infty$ and $\infty$ bounds.

IndicatorBox(const DataDescriptor &desc, SelfType_t<data_t> lower, SelfType_t<data_t> upper)

Construct indicator function with given bounds.

bool isProxFriendly() const override

Indicate whether the functional has an easily computable proximal.

data_t convexConjugate(const DataContainer<data_t> &x) const override

Compute the convex conjugate of the functional.

Private Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

Evaluate the functional.

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t>&) const override

The gradient function throws; the indicator function has no gradient.

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

The Hessian function throws; the indicator function has no Hessian.

IndicatorBox<data_t> *cloneImpl() const override

Implementation of polymorphic clone.

bool isEqual(const Functional<data_t> &other) const override

Implementation of polymorphic equality.

Private Members

data_t lower_ = -std::numeric_limits<data_t>::infinity()

Lower bound.

data_t upper_ = std::numeric_limits<data_t>::infinity()

Upper bound.

template<class data_t>
class elsa::IndicatorNonNegativity : public elsa::Functional<data_t>

Indicator function for the set of non-negative numbers.

The nonnegativity indicator for the set of non-negative numbers is defined as:

\[ f(x) = \begin{cases} 0 & \text{if } 0 \leq x \text{ everywhere}, \\ \infty & \text{else} \end{cases} \]

Public Functions

IndicatorNonNegativity(const DataDescriptor &desc)

Construct non-negativity indicator functional.

bool isProxFriendly() const override

Indicate whether the functional has an easily computable proximal.

data_t convexConjugate(const DataContainer<data_t> &x) const override

Compute the convex conjugate of the functional.

Private Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

Evaluate the functional.

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t>&) const override

The gradient function throws; the indicator function has no gradient.

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

The Hessian function throws; the indicator function has no Hessian.

IndicatorNonNegativity<data_t> *cloneImpl() const override

Implementation of polymorphic clone.

bool isEqual(const Functional<data_t> &other) const override

Implementation of polymorphic equality.

SeparableSum

template<class data_t>
class elsa::SeparableSum : public elsa::Functional<data_t>

Class representing a separable sum of functionals. Given a sequence of $k$ functions $ ( f_i )_{i=1}^k $, where $f_{i}: X_{i} \rightarrow (-\infty, \infty]$, the separable sum $F$ is defined as:

\[ F:X_{1}\times X_{2}\times\cdots\times X_{k} \rightarrow (-\infty, \infty] \\ F(x_{1}, x_{2}, \cdots, x_{k}) = \sum_{i=1}^k f_{i}(x_{i}) \]

The great benefit of the separable sum is that its proximal is easily derived.

See

CombinedProximal
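The block-wise evaluation $ F(x_{1}, \ldots, x_{k}) = \sum_{i=1}^k f_{i}(x_{i}) $ can be sketched standalone, with each functional represented by a plain callable on its own block (plain C++, not the elsa API; `separable_sum` is an illustrative name):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Evaluate F(x_1, ..., x_k) = sum_i f_i(x_i), where each functional f_i
// acts only on its own block x_i of the input.
double separable_sum(
    const std::vector<std::function<double(const std::vector<double>&)>>& fns,
    const std::vector<std::vector<double>>& blocks)
{
    assert(fns.size() == blocks.size()); // one functional per block
    double sum = 0.0;
    for (std::size_t i = 0; i < fns.size(); ++i)
        sum += fns[i](blocks[i]);
    return sum;
}
```

The proximal works the same way: it is applied per functional on the matching block, which is why it is easy to derive.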

Public Functions

SeparableSum(std::vector<std::unique_ptr<Functional<data_t>>> fns)

Create a separable sum from a vector of unique_ptrs to functionals.

SeparableSum(const Functional<data_t> &fn)

Create a separable sum from a single functional.

SeparableSum(const Functional<data_t> &fn1, const Functional<data_t> &fn2)

Create a separable sum from two functionals.

SeparableSum(const Functional<data_t> &fn1, const Functional<data_t> &fn2, const Functional<data_t> &fn3)

Create a separable sum from three functionals.

template<class ...Args>
SeparableSum(const Functional<data_t> &fn1, const Functional<data_t> &fn2, const Functional<data_t> &fn3, const Functional<data_t> &fn4, Args&&... fns)

Create a separable sum from variadic number of functionals.

bool isProxFriendly() const override

Indicate whether the functional has an easily computable proximal.

DataContainer<data_t> proximal(const DataContainer<data_t> &v, SelfType_t<data_t> t) const override

compute the proximal of the given functional

Parameters
  • [in] v: input DataContainer (in the domain of the functional)

  • [in] t: threshold/scaling parameter for proximal

void proximal(const DataContainer<data_t> &v, SelfType_t<data_t> t, DataContainer<data_t> &out) const override

compute the proximal of the given functional and write the result to the output DataContainer

Parameters
  • [in] v: input DataContainer (in the domain of the functional)

  • [in] t: threshold/scaling parameter for proximal

  • [out] out: output DataContainer (in the domain of the functional)

data_t convexConjugate(const DataContainer<data_t> &x) const override

The convex conjugate of a separable sum is given as:

\[ f^*(x) = \sum_{i=1}^k f_i^*(x_i) \]

Private Functions

data_t evaluateImpl(const DataContainer<data_t> &Rx) const override

Evaluate the functional. Requires Rx to be a blocked DataContainer (i.e. its descriptor is of type BlockDescriptor); the function throws if this is not met.

void getGradientImpl(const DataContainer<data_t> &Rx, DataContainer<data_t> &out) const override

The derivative of the sum of functions is the sum of the derivatives.

LinearOperator<data_t> getHessianImpl(const DataContainer<data_t> &Rx) const override

Not yet implemented.

SeparableSum<data_t> *cloneImpl() const override

Polymorphic clone implementation.

bool isEqual(const Functional<data_t> &other) const override

Polymorphic equality implementation.