
pydvl.influence.torch.preconditioner

Preconditioner

Bases: ABC

Abstract base class for pre-conditioners that improve the convergence of CG for systems of the form

\[ ( A + \lambda \operatorname{I})x = \operatorname{rhs} \]

i.e. a matrix \(M\) such that \(M^{-1}(A + \lambda \operatorname{I})\) has a better condition number than \(A + \lambda \operatorname{I}\).
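
To see the effect concretely, the following standalone sketch (not pyDVL code; all names are local to the example) compares the condition number of \(A + \lambda \operatorname{I}\) with that of \(M^{-1}(A + \lambda \operatorname{I})\) for a Jacobi-style diagonal \(M\):

import torch

torch.manual_seed(0)

d, lam = 200, 1e-2
B = 0.01 * torch.randn(d, d)
# A diagonally dominant SPD matrix with a widely spread diagonal
A = B @ B.T + torch.diag(torch.logspace(-3, 3, d))
H = A + lam * torch.eye(d)                   # system matrix (A + lambda I)

# Jacobi-style diagonal preconditioner M = diag(H)
M_inv = torch.diag(1.0 / torch.diagonal(H))

print(f"cond(A + lambda I)        = {torch.linalg.cond(H).item():.2e}")
print(f"cond(M^-1 (A + lambda I)) = {torch.linalg.cond(M_inv @ H).item():.2e}")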

fit

fit(operator: 'TensorOperator', regularization: Optional[float] = None)

Fit the pre-conditioner to the matrix represented by the operator.

PARAMETERS DESCRIPTION

operator: The operator to which the preconditioner is fitted.

regularization: Regularization parameter \(\lambda\) in the equation \((A + \lambda \operatorname{I})x = \operatorname{rhs}\).

RETURNS DESCRIPTION

self

Source code in src/pydvl/influence/torch/preconditioner.py
def fit(
    self,
    operator: "TensorOperator",
    regularization: Optional[float] = None,
):
    r"""
    Implement this to fit the pre-conditioner to the matrix represented by the
    mat_mat_prod
    Args:
        operator: The preconditioner is fitted to this operator
        regularization: regularization parameter $\lambda$ in the equation
            $ ( A + \lambda \operatorname{I})x = \operatorname{rhs} $
    Returns:
        self
    """
    self._validate_regularization(regularization)
    return self._fit(operator, regularization)
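
A typical call is sketched below; JacobiPreconditioner is the concrete subclass documented further down, and operator stands for a pyDVL TensorOperator (e.g. a Hessian operator) constructed elsewhere, not here:

# Usage sketch: `operator` is assumed to be a TensorOperator obtained elsewhere
preconditioner = JacobiPreconditioner(num_samples_estimator=10)
preconditioner = preconditioner.fit(operator, regularization=1e-3)  # fit returns self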

solve

solve(rhs: Tensor)

Solve the equation \(M Z = \operatorname{rhs}\).

PARAMETERS DESCRIPTION

rhs: Right-hand side of the equation; corresponds to the residual vector (or matrix) in the conjugate gradient method.

RETURNS DESCRIPTION

solution \(M^{-1}\operatorname{rhs}\)

Source code in src/pydvl/influence/torch/preconditioner.py
def solve(self, rhs: torch.Tensor):
    r"""
    Solve the equation $M@Z = \operatorname{rhs}$
    Args:
        rhs: right hand side of the equation, corresponds to the residuum vector
            (or matrix) in the conjugate gradient method

    Returns:
        solution $M^{-1}\operatorname{rhs}$

    """
    if not self.is_fitted:
        raise NotFittedException(type(self))

    return self._solve(rhs)
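
Inside a preconditioned CG iteration, solve supplies the step \(z = M^{-1}r\). The following sketch is a generic (textbook) preconditioned CG loop, not pyDVL's implementation; matvec stands for the product with \(A + \lambda \operatorname{I}\) and preconditioner for an already fitted Preconditioner:

import torch

def preconditioned_cg(matvec, rhs, preconditioner, max_iter=100, rtol=1e-6):
    # Textbook preconditioned conjugate gradient for a symmetric positive definite system
    x = torch.zeros_like(rhs)
    r = rhs - matvec(x)
    z = preconditioner.solve(r)          # z = M^{-1} r
    p = z.clone()
    rz = torch.dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / torch.dot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if torch.linalg.vector_norm(r) <= rtol * torch.linalg.vector_norm(rhs):
            break
        z = preconditioner.solve(r)      # precondition the new residual
        rz_new = torch.dot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x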

to abstractmethod

to(device: device) -> Preconditioner

Implement this to move the (potentially fitted) preconditioner to a specific device

Source code in src/pydvl/influence/torch/preconditioner.py
@abstractmethod
def to(self, device: torch.device) -> Preconditioner:
    """Implement this to move the (potentially fitted) preconditioner to a
    specific device"""
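
A concrete subclass would typically move whatever tensors it stored during fit. The attribute name below is invented for illustration and is not the one used by pyDVL's preconditioners:

# Hypothetical sketch of a concrete implementation
def to(self, device: torch.device) -> Preconditioner:
    if self._fitted_state is not None:   # `_fitted_state` is an invented attribute
        self._fitted_state = self._fitted_state.to(device)
    return self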

JacobiPreconditioner

JacobiPreconditioner(num_samples_estimator: int = 1)

Bases: Preconditioner

Pre-conditioner for improving the convergence of CG for systems of the form

\[ ( A + \lambda \operatorname{I})x = \operatorname{rhs} \]

The JacobiPreconditioner uses the diagonal of the matrix \(A\). The diagonal elements are not computed directly but estimated via Hutchinson's estimator.

\[ M = \frac{1}{m} \sum_{i=1}^m u_i \odot Au_i + \lambda \operatorname{I} \]

where \(u_i\) are i.i.d. Gaussian random vectors. This works well when the matrix \(A + \lambda \operatorname{I}\) is diagonally dominant. For more information, see the documentation of Conjugate Gradient.

PARAMETERS DESCRIPTION

num_samples_estimator: Number of samples to use in the computation of Hutchinson's estimator.

Source code in src/pydvl/influence/torch/preconditioner.py
def __init__(self, num_samples_estimator: int = 1):
    self.num_samples_estimator = num_samples_estimator
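
The diagonal estimate behind the formula above can be reproduced in a few lines of plain torch. This is a standalone sketch of Hutchinson's diagonal estimator, not the code pyDVL runs internally:

import torch

torch.manual_seed(0)
d, m, lam = 50, 1000, 1e-2               # dimension, number of samples, regularization
A = torch.randn(d, d)
A = A @ A.T                              # symmetric matrix whose diagonal we estimate

u = torch.randn(m, d)                    # i.i.d. Gaussian probe vectors u_i
# Rows of u @ A are A u_i because A is symmetric, so this is (1/m) sum_i u_i ⊙ (A u_i)
diag_estimate = (u * (u @ A)).mean(dim=0)
M_diag = diag_estimate + lam             # diagonal of the preconditioner M

rel_err = (diag_estimate - torch.diagonal(A)).abs().max() / torch.diagonal(A).abs().max()
print(f"max relative error of the diagonal estimate: {rel_err:.3f}")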

fit

fit(operator: 'TensorOperator', regularization: Optional[float] = None)

Fit the pre-conditioner to the matrix represented by the operator.

PARAMETERS DESCRIPTION

operator: The operator to which the preconditioner is fitted.

regularization: Regularization parameter \(\lambda\) in the equation \((A + \lambda \operatorname{I})x = \operatorname{rhs}\).

RETURNS DESCRIPTION

self

Source code in src/pydvl/influence/torch/preconditioner.py
def fit(
    self,
    operator: "TensorOperator",
    regularization: Optional[float] = None,
):
    r"""
    Implement this to fit the pre-conditioner to the matrix represented by the
    mat_mat_prod
    Args:
        operator: The preconditioner is fitted to this operator
        regularization: regularization parameter $\lambda$ in the equation
            $ ( A + \lambda \operatorname{I})x = \operatorname{rhs} $
    Returns:
        self
    """
    self._validate_regularization(regularization)
    return self._fit(operator, regularization)

solve

solve(rhs: Tensor)

Solve the equation \(M Z = \operatorname{rhs}\).

PARAMETERS DESCRIPTION

rhs: Right-hand side of the equation; corresponds to the residual vector (or matrix) in the conjugate gradient method.

RETURNS DESCRIPTION

solution \(M^{-1}\operatorname{rhs}\)

Source code in src/pydvl/influence/torch/preconditioner.py
def solve(self, rhs: torch.Tensor):
    r"""
    Solve the equation $M@Z = \operatorname{rhs}$
    Args:
        rhs: right hand side of the equation, corresponds to the residuum vector
            (or matrix) in the conjugate gradient method

    Returns:
        solution $M^{-1}\operatorname{rhs}$

    """
    if not self.is_fitted:
        raise NotFittedException(type(self))

    return self._solve(rhs)

NystroemPreconditioner

NystroemPreconditioner(rank: int)

Bases: Preconditioner

Pre-conditioner for improving the convergence of CG for systems of the form

\[ (A + \lambda \operatorname{I})x = \operatorname{rhs} \]

The NystroemPreconditioner computes a low-rank approximation

\[ A_{\text{nys}} = (A \Omega)(\Omega^T A \Omega)^{\dagger}(A \Omega)^T = U \Sigma U^T, \]

where \((\cdot)^{\dagger}\) denotes the Moore-Penrose inverse, and uses the matrix

\[ M^{-1} = (\lambda + \sigma_{\text{rank}})U(\Sigma+ \lambda \operatorname{I})^{-1}U^T+(\operatorname{I} - UU^T) \]

for pre-conditioning, where \( \sigma_{\text{rank}} \) is the smallest eigenvalue of the low-rank approximation.

Source code in src/pydvl/influence/torch/preconditioner.py
def __init__(self, rank: int):
    self._rank = rank
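
The factors of the Nyström approximation can be built directly in torch. The following standalone sketch (not pyDVL's implementation) forms \(A_{\text{nys}}\) from a random test matrix and applies \(M^{-1}\) as defined above:

import torch

torch.manual_seed(0)
d, rank, lam = 100, 10, 1e-2
B = torch.randn(d, d)
A = B @ B.T                                        # symmetric PSD matrix to approximate

Omega = torch.randn(d, rank)                       # random test matrix
Y = A @ Omega                                      # A Omega
A_nys = Y @ torch.linalg.pinv(Omega.T @ Y) @ Y.T   # (A Omega)(Omega^T A Omega)^+ (A Omega)^T

# Keep the top `rank` eigenpairs: A_nys = U Sigma U^T
sigma, U = torch.linalg.eigh(A_nys)                # eigenvalues in ascending order
sigma, U = sigma[-rank:], U[:, -rank:]
sigma_r = sigma.min()                              # smallest retained eigenvalue

def apply_preconditioner_inverse(rhs: torch.Tensor) -> torch.Tensor:
    # M^{-1} rhs = (lam + sigma_r) U (Sigma + lam I)^{-1} U^T rhs + (I - U U^T) rhs
    proj = U.T @ rhs
    return (lam + sigma_r) * (U @ (proj / (sigma + lam))) + (rhs - U @ proj)

z = apply_preconditioner_inverse(torch.randn(d))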

fit

fit(operator: 'TensorOperator', regularization: Optional[float] = None)

Fit the pre-conditioner to the matrix represented by the operator.

PARAMETERS DESCRIPTION

operator: The operator to which the preconditioner is fitted.

regularization: Regularization parameter \(\lambda\) in the equation \((A + \lambda \operatorname{I})x = \operatorname{rhs}\).

RETURNS DESCRIPTION

self

Source code in src/pydvl/influence/torch/preconditioner.py
def fit(
    self,
    operator: "TensorOperator",
    regularization: Optional[float] = None,
):
    r"""
    Implement this to fit the pre-conditioner to the matrix represented by the
    mat_mat_prod
    Args:
        operator: The preconditioner is fitted to this operator
        regularization: regularization parameter $\lambda$ in the equation
            $ ( A + \lambda \operatorname{I})x = \operatorname{rhs} $
    Returns:
        self
    """
    self._validate_regularization(regularization)
    return self._fit(operator, regularization)

solve

solve(rhs: Tensor)

Solve the equation \(M Z = \operatorname{rhs}\).

PARAMETERS DESCRIPTION

rhs: Right-hand side of the equation; corresponds to the residual vector (or matrix) in the conjugate gradient method.

RETURNS DESCRIPTION

solution \(M^{-1}\operatorname{rhs}\)

Source code in src/pydvl/influence/torch/preconditioner.py
def solve(self, rhs: torch.Tensor):
    r"""
    Solve the equation $M@Z = \operatorname{rhs}$
    Args:
        rhs: right hand side of the equation, corresponds to the residuum vector
            (or matrix) in the conjugate gradient method

    Returns:
        solution $M^{-1}\operatorname{rhs}$

    """
    if not self.is_fitted:
        raise NotFittedException(type(self))

    return self._solve(rhs)