pydvl.influence.torch.util

TorchTensorContainerType
module-attribute

Type for a PyTorch tensor or a container thereof.
TorchNumpyConverter

Bases: NumpyConverter[Tensor]

Helper class for converting between torch.Tensor and numpy.ndarray.

PARAMETER | DESCRIPTION
---|---
device | Optional device parameter to move the resulting torch tensors to the specified device.
Source code in src/pydvl/influence/torch/util.py
to_numpy

Convert a detached torch.Tensor to a numpy.ndarray.
from_numpy

Convert a numpy.ndarray to a torch.Tensor and optionally move it to a provided device.
Source code in src/pydvl/influence/torch/util.py
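A minimal usage sketch; the direct instantiation with the device keyword is an assumption based on the parameter listed above, and in practice the converter is mostly passed to components that bridge torch and numpy:

```python
import numpy as np
import torch

from pydvl.influence.torch.util import TorchNumpyConverter

# Assumed constructor usage: the optional device applies to tensors created
# by from_numpy.
converter = TorchNumpyConverter(device=torch.device("cpu"))

t = torch.randn(3, 4)
arr = converter.to_numpy(t.detach())  # numpy.ndarray with shape (3, 4)
back = converter.from_numpy(arr)      # torch.Tensor, moved to the given device

assert isinstance(arr, np.ndarray)
assert torch.allclose(t, back)
```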
TorchCatAggregator

Bases: SequenceAggregator[Tensor]

An aggregator that concatenates tensors using PyTorch's torch.cat function. Concatenation is done along the first dimension of the chunks.
__call__

__call__(tensor_sequence: LazyChunkSequence[Tensor])

Aggregates tensors from a single-level generator into a single tensor by concatenating them. This method is a straightforward way to combine a sequence of tensors into one larger tensor.

PARAMETER | DESCRIPTION
---|---
tensor_sequence | Object wrapping a generator that yields torch.Tensor objects. TYPE: LazyChunkSequence[Tensor]

RETURNS | DESCRIPTION
---|---
A single tensor formed by concatenating all tensors from the generator. The concatenation is performed along the default dimension (0).
Source code in src/pydvl/influence/torch/util.py
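A sketch of the aggregation semantics. The import path and constructor of LazyChunkSequence are assumptions about the surrounding API; the torch.cat call shows the equivalent eager operation:

```python
import torch

from pydvl.influence.torch.util import TorchCatAggregator

# Chunks that a computation might yield lazily, all sharing trailing dimensions.
chunks = [torch.randn(2, 5), torch.randn(3, 5), torch.randn(1, 5)]

# Eager equivalent of what the aggregator computes: concatenation along dim 0.
expected = torch.cat(chunks, dim=0)  # shape (6, 5)

# Assumed interface: LazyChunkSequence wraps a factory returning a generator
# of tensors (module path and constructor signature are assumptions).
from pydvl.influence.array import LazyChunkSequence

lazy_chunks = LazyChunkSequence(lambda: (c for c in chunks))
result = TorchCatAggregator()(lazy_chunks)

assert torch.equal(result, expected)
```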
NestedTorchCatAggregator

Bases: NestedSequenceAggregator[Tensor]

An aggregator that concatenates tensors using PyTorch's torch.cat function. Concatenation is done along the first two dimensions of the chunks.
__call__

__call__(nested_sequence_of_tensors: NestedLazyChunkSequence[Tensor])

Aggregates tensors from a nested generator structure into a single tensor by concatenation. Each inner generator is first concatenated along dimension 1 into a tensor, and these tensors are then concatenated along dimension 0 to form the final tensor.

PARAMETER | DESCRIPTION
---|---
nested_sequence_of_tensors | Object wrapping a generator of generators, where each inner generator yields torch.Tensor objects. TYPE: NestedLazyChunkSequence[Tensor]

RETURNS | DESCRIPTION
---|---
A single tensor formed by concatenating all tensors from the nested generators.
Source code in src/pydvl/influence/torch/util.py
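The nested case can be pictured as assembling a block matrix. The sketch below reproduces the documented concatenation order with plain torch operations (the block shapes are illustrative):

```python
import torch

# A 2 x 3 grid of blocks, e.g. values for (outer-chunk, inner-chunk) pairs.
blocks = [[torch.full((4, 2), float(10 * i + j)) for j in range(3)] for i in range(2)]

# Documented behaviour: each inner generator is concatenated along dim 1,
# and the resulting tensors are concatenated along dim 0.
rows = [torch.cat(inner, dim=1) for inner in blocks]  # each of shape (4, 6)
result = torch.cat(rows, dim=0)                       # final shape (8, 6)

assert result.shape == (2 * 4, 3 * 2)
```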
EkfacRepresentation
dataclass

EkfacRepresentation(
    layer_names: Iterable[str],
    layers_module: Iterable[Module],
    evecs_a: Iterable[Tensor],
    evecs_g: Iterable[Tensor],
    diags: Iterable[Tensor],
)

Container class for the EKFAC representation of the Hessian. It can be iterated over to get the layer names and their corresponding modules, eigenvectors and diagonal elements of the factorized Hessian matrix.

PARAMETER | DESCRIPTION
---|---
layer_names | Names of the layers.
layers_module | The layers.
evecs_a | The a eigenvectors of the EKFAC representation.
evecs_g | The g eigenvectors of the EKFAC representation.
diags | The diagonal elements of the factorized Hessian matrix.
get_layer_evecs

Returns two dictionaries, one for the a eigenvectors and one for the g eigenvectors, with the layer names as keys. The eigenvectors are in the same order as the layers in the model.
Source code in src/pydvl/influence/torch/util.py
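A minimal sketch of constructing and querying the container with dummy factors (the eigenvector and diagonal shapes are illustrative assumptions, not what the EKFAC fitting code produces, and the (a, g) return order of get_layer_evecs is assumed from the description):

```python
import torch
from torch import nn

from pydvl.influence.torch.util import EkfacRepresentation

layer = nn.Linear(3, 2, bias=False)

# Dummy factors: identity "eigenvectors" and a flat diagonal.
ekfac = EkfacRepresentation(
    layer_names=["fc"],
    layers_module=[layer],
    evecs_a=[torch.eye(3)],
    evecs_g=[torch.eye(2)],
    diags=[torch.ones(2 * 3)],
)

evecs_a_by_layer, evecs_g_by_layer = ekfac.get_layer_evecs()
assert set(evecs_a_by_layer) == {"fc"}
assert evecs_g_by_layer["fc"].shape == (2, 2)
```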
TorchLinalgEighException

TorchLinalgEighException(original_exception: RuntimeError)

Bases: Exception

Exception to wrap a RuntimeError raised by torch.linalg.eigh when used with large matrices; see https://github.com/pytorch/pytorch/issues/92141.
Source code in src/pydvl/influence/torch/util.py
to_model_device

Returns the tensor x moved to the device of the model, if the device of the model is set.

PARAMETER | DESCRIPTION
---|---
x | The tensor to be moved to the device of the model. TYPE: Tensor
model | The model whose device will be used to move the tensor. TYPE: Module

RETURNS | DESCRIPTION
---|---
Tensor | The tensor x, moved to the model's device if it is set.
Source code in src/pydvl/influence/torch/util.py
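A short usage sketch (the CUDA check only keeps the snippet runnable on CPU-only machines):

```python
import torch
from torch import nn

from pydvl.influence.torch.util import to_model_device

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(4, 2).to(device)

x = torch.randn(8, 4)          # created on the CPU
x = to_model_device(x, model)  # moved to the model's device, if one is set

assert x.device == next(model.parameters()).device
```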
reshape_vector_to_tensors

reshape_vector_to_tensors(
    input_vector: Tensor, target_shapes: Iterable[Tuple[int, ...]]
) -> Tuple[Tensor, ...]

Reshape a 1D tensor into multiple tensors with specified shapes.

This function takes a 1D tensor (input_vector) and reshapes it into a series of tensors with shapes given by target_shapes. The reshaped tensors are returned as a tuple in the same order as their corresponding shapes.

Note
The total number of elements in input_vector must be equal to the sum of the products of the shapes in target_shapes.

PARAMETER | DESCRIPTION
---|---
input_vector | The 1D tensor to be reshaped. Must be 1D. TYPE: Tensor
target_shapes | An iterable of tuples. Each tuple defines the shape of a tensor to be reshaped from the input_vector.

RETURNS | DESCRIPTION
---|---
Tuple[Tensor, ...] | A tuple of reshaped tensors.

RAISES | DESCRIPTION
---|---
ValueError | If input_vector is not a 1D tensor or if the total number of elements in input_vector does not match the sum of the products of the shapes in target_shapes.
Source code in src/pydvl/influence/torch/util.py
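For instance, a flat vector of six elements can be split back into a (2, 2) matrix and a length-2 vector (assuming, as the description implies, that elements are consumed in order):

```python
import torch

from pydvl.influence.torch.util import reshape_vector_to_tensors

flat = torch.arange(6.0)  # tensor([0., 1., 2., 3., 4., 5.])
mat, vec = reshape_vector_to_tensors(flat, [(2, 2), (2,)])

assert mat.shape == (2, 2) and vec.shape == (2,)
assert torch.equal(vec, torch.tensor([4.0, 5.0]))
```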
align_structure

align_structure(
    source: Mapping[str, Tensor], target: TorchTensorContainerType
) -> Dict[str, Tensor]

Transforms target to have the same structure as source, i.e. the result is a dictionary with the same keys as source, where each corresponding value has the same shape as the value in source.

PARAMETER | DESCRIPTION
---|---
source | The reference dictionary containing PyTorch tensors.
target | The input to be harmonized. It can be a dictionary, tuple, or tensor. TYPE: TorchTensorContainerType

RETURNS | DESCRIPTION
---|---
Dict[str, Tensor] | The harmonized version of target.

RAISES | DESCRIPTION
---|---
ValueError | If target cannot be harmonized to match the structure of source.
Source code in src/pydvl/influence/torch/util.py
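A sketch assuming that a flat tensor with a matching number of elements is split and reshaped to the reference structure (dictionary and tuple targets are harmonized analogously):

```python
import torch

from pydvl.influence.torch.util import align_structure

source = {"weight": torch.zeros(2, 3), "bias": torch.zeros(2)}

# A flat tensor with 2*3 + 2 = 8 elements, harmonized into the same
# key/shape layout as `source` (splitting in order is an assumption).
target = torch.arange(8.0)
aligned = align_structure(source, target)

assert list(aligned) == ["weight", "bias"]
assert aligned["weight"].shape == (2, 3) and aligned["bias"].shape == (2,)
```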
align_with_model

align_with_model(x: TorchTensorContainerType, model: Module)

Aligns an input to the model's parameter structure, i.e. transforms it into a dict with the same keys as model.named_parameters() and matching tensor shapes.

PARAMETER | DESCRIPTION
---|---
x | The input to be aligned. It can be a dictionary, tuple, or tensor. TYPE: TorchTensorContainerType
model | The model to use for alignment. TYPE: Module

RETURNS | DESCRIPTION
---|---
The aligned version of x.

RAISES | DESCRIPTION
---|---
ValueError | If x cannot be aligned to the model's parameter structure.
Source code in src/pydvl/influence/torch/util.py
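A sketch that aligns a tuple of tensors with shapes matching the model's parameters (the accepted input forms follow the description above):

```python
import torch
from torch import nn

from pydvl.influence.torch.util import align_with_model

model = nn.Linear(3, 2)  # parameters: "weight" with shape (2, 3), "bias" with shape (2,)

# A tuple of tensors whose shapes match model.named_parameters().
x = (torch.randn(2, 3), torch.randn(2))
aligned = align_with_model(x, model)

assert set(aligned) == {name for name, _ in model.named_parameters()}
assert aligned["weight"].shape == (2, 3)
```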
flatten_dimensions

flatten_dimensions(
    tensors: Iterable[Tensor],
    shape: Optional[Tuple[int, ...]] = None,
    concat_at: int = -1,
) -> Tensor

Flattens the dimensions of each tensor in the given iterable and concatenates them along a specified dimension.

This function takes an iterable of PyTorch tensors and flattens each tensor. Optionally, each tensor can be reshaped to a specified shape before concatenation. The concatenation is performed along the dimension specified by concat_at.

PARAMETER | DESCRIPTION
---|---
tensors | An iterable containing PyTorch tensors to be flattened and concatenated.
shape | A tuple representing the desired shape to which each tensor is reshaped before concatenation. If None, tensors are flattened to 1D.
concat_at | The dimension along which to concatenate the tensors. TYPE: int

RETURNS | DESCRIPTION
---|---
Tensor | A single tensor resulting from the concatenation of the input tensors, each either flattened or reshaped as specified.
Example
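A minimal sketch of the documented behaviour (shapes chosen purely for illustration):

```python
import torch

from pydvl.influence.torch.util import flatten_dimensions

# With shape=None each tensor is flattened to 1D and the results are
# concatenated: 2*3 + 4 = 10 elements.
flat = flatten_dimensions([torch.ones(2, 3), torch.zeros(4)])
assert flat.shape == (10,)

# With an explicit shape, each tensor is first reshaped to (2, 3) and the
# results are concatenated along the last dimension (concat_at=-1).
stacked = flatten_dimensions([torch.ones(6), torch.zeros(2, 3)], shape=(2, 3))
assert stacked.shape == (2, 6)
```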
Source code in src/pydvl/influence/torch/util.py
torch_dataset_to_dask_array

torch_dataset_to_dask_array(
    dataset: Dataset,
    chunk_size: int,
    total_size: Optional[int] = None,
    resulting_dtype: Type[number] = np.float32,
) -> Tuple[Array, ...]

Construct a tuple of Dask arrays from a PyTorch dataset, using dask.delayed.

PARAMETER | DESCRIPTION
---|---
dataset | A PyTorch dataset. TYPE: Dataset
chunk_size | The size of the chunks for the resulting Dask arrays. TYPE: int
total_size | If the dataset does not implement len, provide the length via this parameter. If None, the length of the dataset is inferred by accessing it once.
resulting_dtype | The dtype of the resulting dask.array.Array.
Example
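A minimal sketch using an in-memory TensorDataset (dataset contents and chunk size are illustrative):

```python
import torch
from torch.utils.data import TensorDataset

from pydvl.influence.torch.util import torch_dataset_to_dask_array

x = torch.randn(100, 8)
y = torch.randint(0, 2, (100,))
dataset = TensorDataset(x, y)

# One Dask array per tensor in the dataset, chunked along the first dimension.
dask_x, dask_y = torch_dataset_to_dask_array(dataset, chunk_size=32)

assert dask_x.shape == (100, 8)
assert dask_y.compute().shape == (100,)
```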
RETURNS | DESCRIPTION
---|---
Tuple[Array, ...] | Tuple of Dask arrays corresponding to each tensor in the dataset.
Source code in src/pydvl/influence/torch/util.py
empirical_cross_entropy_loss_fn

Computes the empirical cross entropy loss of the model output, i.e. the cross entropy loss of the model output without the labels. The function takes all the usual arguments and keyword arguments of the cross entropy loss function, so that it is compatible with the PyTorch cross entropy loss function. However, it ignores everything except the first argument, which is the model output.

PARAMETER | DESCRIPTION
---|---
model_output | The output of the model. TYPE: Tensor
Source code in src/pydvl/influence/torch/util.py
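A usage sketch showing the drop-in compatibility with the usual loss signature (the logits and labels are dummy values; the labels are accepted but ignored):

```python
import torch

from pydvl.influence.torch.util import empirical_cross_entropy_loss_fn

logits = torch.randn(16, 10)                     # model output for a batch
dummy_labels = torch.zeros(16, dtype=torch.long)

# Same call signature as torch.nn.functional.cross_entropy, but only the
# first argument (the model output) is used.
loss = empirical_cross_entropy_loss_fn(logits, dummy_labels)
print(loss)  # a scalar loss tensor
```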
safe_torch_linalg_eigh

A wrapper around torch.linalg.eigh that safely handles potential runtime errors by raising a custom TorchLinalgEighException with more context, especially related to the issues reported in https://github.com/pytorch/pytorch/issues/92141.

PARAMETER | DESCRIPTION
---|---
*args | Positional arguments passed to torch.linalg.eigh. DEFAULT: ()
**kwargs | Keyword arguments passed to torch.linalg.eigh. DEFAULT: {}

RETURNS | DESCRIPTION
---|---
The result of calling torch.linalg.eigh on the provided arguments.

RAISES | DESCRIPTION
---|---
TorchLinalgEighException | If a RuntimeError is raised by torch.linalg.eigh.
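A short sketch of the intended usage; the matrix here is small, so the wrapped call simply succeeds, and the except branch shows how the wrapped error would be handled:

```python
import torch

from pydvl.influence.torch.util import (
    TorchLinalgEighException,
    safe_torch_linalg_eigh,
)

# torch.linalg.eigh expects a symmetric (or Hermitian) matrix.
a = torch.randn(10, 10)
sym = a + a.T

try:
    eigenvalues, eigenvectors = safe_torch_linalg_eigh(sym)
except TorchLinalgEighException as e:
    # For very large matrices torch.linalg.eigh may fail with a RuntimeError
    # (see the linked PyTorch issue); the wrapper re-raises it with context.
    print(f"Eigendecomposition failed: {e}")
```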