Glossary¶
This glossary is meant to provide only brief explanations of each term, helping to clarify the concepts and techniques used in the library. For more detailed information, please refer to the relevant literature or resources.
Warning
This glossary is still a work in progress. Pull requests are welcome!
Terms in data valuation and influence functions:
Arnoldi Method¶
The Arnoldi method computes approximate eigenvalue/eigenvector pairs of a symmetric matrix. For influence functions, it is used to approximate the iHVP. Introduced in the context of influence functions by (Schioppa et al., 2022)^{1}.
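As a rough illustration, the dominant eigenpairs of a symmetric matrix can be obtained with SciPy's ARPACK wrapper, which implements a closely related Krylov subspace iteration (the Lanczos variant of Arnoldi for symmetric matrices). The matrix `H` below is only a toy stand-in for a Hessian:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
H = A @ A.T  # symmetric matrix standing in for a Hessian

# Top-5 eigenpairs via ARPACK's implicitly restarted Lanczos/Arnoldi iteration
eigvals, eigvecs = eigsh(H, k=5, which="LM")

# The dominant eigenpairs yield a cheap low-rank surrogate of H
H_lowrank = eigvecs @ np.diag(eigvals) @ eigvecs.T
```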
Block Conjugate Gradient¶
A blocked version of CG, which solves several linear systems simultaneously. For Influence Functions, it is used to approximate the iHVP.
Classwise Shapley¶
Classwise Shapley is a Shapley valuation method which introduces a utility function that balances in-class and out-of-class accuracy, with the goal of favoring points that improve the model's performance on the class they belong to. It is estimated to be particularly useful in imbalanced datasets, but more research is needed to confirm this. Introduced by (Schoch et al., 2022)^{2}.
Conjugate Gradient¶
CG is an algorithm for solving linear systems with a symmetric and positive-definite coefficient matrix. For Influence Functions, it is used to approximate the iHVP.
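A minimal sketch with SciPy, using a random symmetric positive-definite matrix as a stand-in for the coefficient matrix:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A @ A.T + 50 * np.eye(50)  # symmetric positive-definite
b = rng.standard_normal(50)

# Iteratively solve H x = b; info == 0 signals convergence
x, info = cg(H, b)
```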
Data Utility Learning¶
Data Utility Learning is a method that uses an ML model to learn the utility function. Essentially, it learns to predict the performance of a model when trained on a given set of indices from the dataset. The cost of training this model is quickly amortized by avoiding costly reevaluations of the original utility. Introduced by (Wang et al., 2022)^{3}.
Eigenvalue-corrected Kronecker-Factored Approximate Curvature¶
EKFAC builds on KFAC by correcting for the approximation errors in the eigenvalues of the blocks of the Kronecker-factored approximate curvature matrix. This correction aims to refine the accuracy of natural gradient approximations, thus potentially offering better training efficiency and stability in neural networks.
Group Testing¶
Group Testing is a strategy for identifying characteristics within groups of items efficiently, by testing groups rather than individuals to quickly narrow down the search for items with specific properties. Introduced into data valuation by (Jia et al., 2019)^{4}.
Influence Function¶
The Influence Function measures the impact of a single data point on a statistical estimator. In machine learning, it's used to understand how much a particular data point affects the model's prediction. Introduced into data valuation by (Koh and Liang, 2017)^{5}.
Inverse Hessian-vector product¶
iHVP is the operation of calculating the product of the inverse Hessian of a function and a vector, without explicitly constructing or inverting the full Hessian matrix first. This is essential for influence function computation.
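The key point, that only Hessian-vector products are required, can be sketched with SciPy's `LinearOperator`. The explicit matrix `H` below is only a toy stand-in; in practice the matvec would come from automatic differentiation rather than a stored matrix:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40))
H = A @ A.T + 40 * np.eye(40)  # toy stand-in for the Hessian
v = rng.standard_normal(40)

# Only Hessian-vector products are exposed to the solver
hvp = LinearOperator((40, 40), matvec=lambda x: H @ x)

# Approximates H^{-1} v without ever forming H^{-1}
ihvp, info = cg(hvp, v)
```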
Kronecker-Factored Approximate Curvature¶
KFAC is an optimization technique that approximates the Fisher Information matrix's inverse efficiently. It uses the Kronecker product to factor the matrix, significantly speeding up the computation of natural gradient updates and potentially improving training efficiency.
Least Core¶
The Least Core is a solution concept in cooperative game theory, referring to the smallest set of payoffs to players that cannot be improved upon by any coalition, ensuring stability in the allocation of value. In data valuation, it implies solving a linear and a quadratic system whose constraints are determined by the evaluations of the utility function on every subset of the training data. Introduced as a data valuation method by (Yan and Procaccia, 2021)^{6}.
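For intuition, the core of the computation can be written as a small linear program: minimize a subsidy `e` such that every coalition receives at least its utility. The additive toy utility and all variable names below are illustrative only, not the library's API:

```python
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

scores = np.array([1.0, 2.0, 3.0])  # additive toy utility: u(S) = sum of scores

def utility(S):
    return scores[list(S)].sum()

n = len(scores)
# Variables: payoffs x_0..x_{n-1} and the subsidy e; minimize e subject to
# x(S) + e >= u(S) for every proper non-empty coalition S, and x(N) = u(N).
c = np.zeros(n + 1)
c[-1] = 1.0
A_ub, b_ub = [], []
for size in range(1, n):
    for S in combinations(range(n), size):
        row = np.zeros(n + 1)
        row[list(S)] = -1.0  # -(x(S) + e) <= -u(S)
        row[-1] = -1.0
        A_ub.append(row)
        b_ub.append(-utility(S))
A_eq = np.ones((1, n + 1))
A_eq[0, -1] = 0.0  # efficiency constraint: sum of payoffs equals u(N)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
              b_eq=[utility(range(n))], bounds=[(None, None)] * (n + 1))
least_core_values, subsidy = res.x[:n], res.x[-1]
```

For an additive game the core is non-empty, so the optimal subsidy is zero.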
Linear-time Stochastic Second-order Algorithm¶
LiSSA is an efficient algorithm for approximating the inverse Hessian-vector product, enabling faster computations in large-scale machine learning problems, particularly for second-order optimization. For Influence Functions, it is used to approximate the iHVP. Introduced by (Agarwal et al., 2017)^{7}.
Leave-One-Out¶
LOO in the context of data valuation refers to the process of evaluating the impact of removing individual data points on the model's performance. The value of a training point is defined as the marginal change in the model's performance when that point is removed from the training set.
Maximum Sample Reuse¶
MSR is a sampling method for data valuation that reuses each sampled subset to update the value estimates of every data point, which can lead to much faster convergence. Introduced by (Wang and Jia, 2023)^{8}.
Monte Carlo Least Core¶
MCLC is a variation of the Least Core that uses a reduced amount of constraints, sampled randomly from the powerset of the training data. Introduced by (Yan and Procaccia, 2021)^{6}.
Monte Carlo Shapley¶
MCS estimates the Shapley Value using a Monte Carlo approximation to the sum over subsets of the training set. This reduces computation to polynomial time at the cost of accuracy, but this loss is typically irrelevant for downstream applications in ML. Introduced into data valuation by (Ghorbani and Zou, 2019)^{9}.
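A sketch of the permutation-based estimator, using a toy additive utility whose exact Shapley values are known in closed form (they equal `scores[i]`); the sampling loop is the essential idea, not the library's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy additive utility: the exact Shapley value of point i is scores[i]
scores = np.array([1.0, 2.0, 3.0, 4.0])

def utility(coalition):
    return scores[list(coalition)].sum()

n = len(scores)
values = np.zeros(n)
n_permutations = 500
for _ in range(n_permutations):
    perm = rng.permutation(n)
    prev = 0.0
    for k, i in enumerate(perm):
        curr = utility(perm[: k + 1])
        values[i] += curr - prev  # marginal contribution of point i
        prev = curr
values /= n_permutations  # average marginal contribution per point
```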
Nyström LowRank Approximation¶
The Nyström approximation computes a low-rank approximation to a symmetric positive-definite matrix via random projections. For influence functions, it is used to approximate the iHVP. Introduced as a sketch-and-solve algorithm in (Hataya and Yamada, 2023)^{10}, and as a preconditioner for PCG in (Frangella et al., 2023)^{11}.
- Implementation: Sketch-and-Solve (torch)
- Documentation: Sketch-and-Solve (torch)
- Implementation: Preconditioner (torch)
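For intuition, a minimal NumPy sketch of the randomized Nyström approximation; the exactly low-rank `H` is an artificial choice so that a sketch size larger than the rank recovers the matrix exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((100, 10))
H = B @ B.T  # symmetric positive semi-definite with exact rank 10

k = 20  # sketch size; since k exceeds the rank, recovery is exact here
Omega = rng.standard_normal((100, k))  # Gaussian test matrix
Y = H @ Omega                          # random projection (the "sketch")
core = Omega.T @ Y                     # Omega^T H Omega
# Nyström approximation: H ~ Y (Omega^T H Omega)^+ Y^T
H_nystrom = Y @ np.linalg.pinv(core, rcond=1e-10) @ Y.T
```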
Point removal task¶
A task in data valuation where the quality of a valuation method is measured through the impact on the model's performance of incrementally removing data points, in order of their value.
Preconditioned Block Conjugate Gradient¶
A blocked version of PCG, which solves several linear systems simultaneously. For Influence Functions, it is used to approximate the iHVP.
Preconditioned Conjugate Gradient¶
A preconditioned version of CG for improved convergence, depending on the characteristics of the matrix and the preconditioner. For Influence Functions, it is used to approximate the iHVP.
Shapley Value¶
Shapley Value is a concept from cooperative game theory that allocates payouts to players based on their contribution to the total payoff. In data valuation, players are data points. The method assigns a value to each data point based on a weighted average of its marginal contributions to the model's performance when trained on each subset of the training set. This requires \(\mathcal{O}(2^{n-1})\) retrainings of the model, which is infeasible for even trivial data set sizes, so one resorts to approximations like TMCS. Introduced into data valuation by (Ghorbani and Zou, 2019)^{9}.
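The weighted average of marginal contributions can be written out directly for a toy game. The additive utility here is an illustrative assumption that makes the exact Shapley value of point `i` equal to `scores[i]`:

```python
from itertools import combinations
from math import comb

scores = {0: 1.0, 1: 2.0, 2: 3.0}  # additive toy utility

def utility(S):
    return sum(scores[i] for i in S)

n = len(scores)

def shapley_value(i):
    others = [j for j in scores if j != i]
    value = 0.0
    for size in range(n):
        for S in combinations(others, size):
            # The weight |S|!(n-|S|-1)!/n! equals 1 / (n * C(n-1, |S|))
            weight = 1.0 / (n * comb(n - 1, size))
            value += weight * (utility(S + (i,)) - utility(S))
    return value
```

Note the exponential number of subsets visited per point, which is exactly why Monte Carlo approximations are needed in practice.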
Truncated Monte Carlo Shapley¶
TMCS is an efficient approach to estimating the Shapley Value using a truncated version of the Monte Carlo method, reducing computation time while maintaining accuracy in large datasets. Introduced by (Ghorbani and Zou, 2019)^{9}.
Weighted Accuracy Drop¶
WAD is a metric to evaluate the impact of sequentially removing data points on the performance of a machine learning model, weighted by their rank, i.e. by the time at which they were removed. Introduced by (Schoch et al., 2022)^{2}.
Other terms¶
Coefficient of Variation¶
CV is a statistical measure of the dispersion of data points in a data series around the mean, expressed as a percentage. It's used to compare the degree of variation from one data series to another, even if the means are drastically different.
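For example, with NumPy (using the sample standard deviation, `ddof=1`):

```python
import numpy as np

data = np.array([10.0, 12.0, 8.0, 11.0, 9.0])
# Sample standard deviation relative to the mean, as a percentage
cv = 100 * data.std(ddof=1) / data.mean()
```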
Constraint Satisfaction Problem¶
A CSP involves finding values for variables within specified constraints or conditions, commonly used in scheduling, planning, and design problems where solutions must satisfy a set of restrictions.
OutofBag¶
OOB refers to data samples in an ensemble learning context (like random forests) that are not selected for training a specific model within the ensemble. These OOB samples are used as a validation set to estimate the model's accuracy, providing a convenient internal cross-validation mechanism.
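For example, scikit-learn's random forest exposes the OOB estimate when fit with `oob_score=True`:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)
# Each tree sees a bootstrap sample; the points it did not see (out-of-bag)
# act as a free validation set for that tree.
rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
rf.fit(X, y)
oob_accuracy = rf.oob_score_
```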
Machine Learning Reproducibility Challenge¶
The MLRC is an initiative that encourages the verification and replication of machine learning research findings, promoting transparency and reliability in the field. Papers are published in Transactions on Machine Learning Research (TMLR).

1. Schioppa, A., Zablotskaia, P., Vilar, D., Sokolov, A., 2022. Scaling Up Influence Functions. Proc. AAAI Conf. Artif. Intell. 36, 8179–8186. https://doi.org/10.1609/aaai.v36i8.20791
2. Schoch, S., Xu, H., Ji, Y., 2022. CS-Shapley: Class-wise Shapley Values for Data Valuation in Classification, in: Proc. of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS). Presented at Advances in Neural Information Processing Systems (NeurIPS 2022).
3. Wang, T., Yang, Y., Jia, R., 2022. Improving Cooperative Game Theory-based Data Valuation via Data Utility Learning. Presented at the International Conference on Learning Representations (ICLR 2022) Workshop on Socially Responsible Machine Learning, arXiv. https://doi.org/10.48550/arXiv.2107.06336
4. Jia, R., Dao, D., Wang, B., Hubis, F.A., Gurel, N.M., Li, B., Zhang, C., Spanos, C., Song, D., 2019. Efficient task-specific data valuation for nearest neighbor algorithms. Proc. VLDB Endow. 12, 1610–1623. https://doi.org/10.14778/3342263.3342637
5. Koh, P.W., Liang, P., 2017. Understanding Black-box Predictions via Influence Functions, in: Proceedings of the 34th International Conference on Machine Learning. Presented at the International Conference on Machine Learning, PMLR, pp. 1885–1894.
6. Yan, T., Procaccia, A.D., 2021. If You Like Shapley Then You’ll Love the Core, in: Proceedings of the 35th AAAI Conference on Artificial Intelligence. Presented at the AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence, pp. 5751–5759. https://doi.org/10.1609/aaai.v35i6.16721
7. Agarwal, N., Bullins, B., Hazan, E., 2017. Second-Order Stochastic Optimization for Machine Learning in Linear Time. JMLR 18, 1–40.
8. Wang, J.T., Jia, R., 2023. Data Banzhaf: A Robust Data Valuation Framework for Machine Learning, in: Proceedings of The 26th International Conference on Artificial Intelligence and Statistics. Presented at the International Conference on Artificial Intelligence and Statistics, PMLR, pp. 6388–6421.
9. Ghorbani, A., Zou, J., 2019. Data Shapley: Equitable Valuation of Data for Machine Learning, in: Proceedings of the 36th International Conference on Machine Learning. Presented at the International Conference on Machine Learning (ICML 2019), PMLR, pp. 2242–2251.
10. Hataya, R., Yamada, M., 2023. Nyström Method for Accurate and Scalable Implicit Differentiation, in: Proceedings of The 26th International Conference on Artificial Intelligence and Statistics. Presented at the International Conference on Artificial Intelligence and Statistics, PMLR, pp. 4643–4654.
11. Frangella, Z., Tropp, J.A., Udell, M., 2023. Randomized Nyström Preconditioning. SIAM J. Matrix Anal. Appl. 44, 718–752. https://doi.org/10.1137/21M1466244