AdversarialTensors.btensor package#

Submodules#

AdversarialTensors.btensor.svd module#

AdversarialTensors.btensor.svd.randomized_range_finder(A, n_dims, n_iter=2, random_state=0)[source]#

Computes an orthonormal matrix (Q) whose range approximates the range of A, i.e., Q Q^H A ≈ A

Parameters:
  • A (3D-array) --

  • n_dims (int, dimension of the returned subspace) --

  • n_iter (int, number of power iterations to conduct (default = 2)) --

  • random_state ({None, int, np.random.RandomState}) --

Returns:

Q -- of shape (batch, A.shape[0], min(n_dims, A.shape[0], A.shape[1]))

Return type:

3D-array

Notes

This function is implemented based on Algorithm 4.4 in Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions - Halko et al. (2009)
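
A minimal usage sketch, assuming batched torch inputs with a leading batch dimension (the shapes below are illustrative, not prescribed by the docstring):

>>> import torch
>>> from AdversarialTensors.btensor.svd import randomized_range_finder
>>> A = torch.randn(8, 64, 10) @ torch.randn(8, 10, 32)  # batch of 8 rank-10 matrices
>>> Q = randomized_range_finder(A, n_dims=10, n_iter=2, random_state=0)
>>> approx = Q @ Q.transpose(-2, -1) @ A  # Q Q^H A should be close to A here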

AdversarialTensors.btensor.svd.randomized_svd(matrix, n_eigenvecs=None, n_oversamples=5, n_iter=2, random_state=0, **kwargs)[source]#

Computes a truncated randomized SVD.

If n_eigenvecs is specified, the decomposition is truncated to the leading n_eigenvecs singular vectors and values, computed via randomized range finding (see Notes).

Parameters:
  • matrix (tensor) -- A 2D tensor.

  • n_eigenvecs (int, optional, default is None) -- If specified, number of eigen[vectors-values] to return.

  • n_oversamples (int, optional, default = 5) -- rank oversampling value used to find a subspace with better alignment

  • n_iter (int, optional, default = 2) -- number of power iterations for the randomized_range_finder subroutine

  • random_state ({None, int, np.random.RandomState}) --

  • **kwargs (optional) -- kwargs are used to absorb the difference of parameters among the other SVD functions

Returns:

  • U (2-D tensor, shape (matrix.shape[0], n_eigenvecs)) -- Contains the left singular vectors

  • S (1-D tensor, shape (n_eigenvecs, )) -- Contains the singular values of matrix

  • V (2-D tensor, shape (n_eigenvecs, matrix.shape[1])) -- Contains the right singular vectors

Notes

This function is implemented based on Algorithm 5.1 in Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions - Halko et al. (2009)
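
A minimal usage sketch on a single 2D matrix, as documented above (assuming a torch input):

>>> import torch
>>> from AdversarialTensors.btensor.svd import randomized_svd
>>> X = torch.randn(128, 16) @ torch.randn(16, 64)  # a rank-16 matrix
>>> U, S, V = randomized_svd(X, n_eigenvecs=16, n_oversamples=5, n_iter=2, random_state=0)
>>> X_hat = U @ torch.diag(S) @ V  # rank-16 reconstruction, close to X here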

AdversarialTensors.btensor.svd.svd_checks(matrix, n_eigenvecs=None)[source]#

Runs checks common to all of the SVD methods.

Parameters:
  • matrix (3D-array) --

  • n_eigenvecs (int, optional, default is None) -- if specified, number of eigen[vectors-values] to return

Returns:

  • n_eigenvecs (int) -- the number of eigenvectors to solve for

  • min_dim (int) -- the minimum dimension of matrix

  • max_dim (int) -- the maximum dimension of matrix

AdversarialTensors.btensor.svd.svd_flip(U, V, u_based_decision=True)[source]#

Sign correction to ensure deterministic output from SVD. Adjusts the columns of U and the rows of V such that the loadings in the columns of U that are largest in absolute value are always positive. This function is borrowed from scikit-learn/utils/extmath.py.

Parameters:
  • U (ndarray) -- u and v are the output of SVD

  • V (ndarray) -- u and v are the output of SVD

  • u_based_decision (boolean, (default=True)) -- If True, use the columns of u as the basis for sign flipping. Otherwise, use the rows of v. The choice of which variable to base the decision on is generally algorithm dependent.

Returns:

u_adjusted, v_adjusted

Return type:

arrays with the same dimensions as the input.
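
A minimal usage sketch, assuming torch inputs laid out as U and V^T (rows of V), as produced by the SVD routines above:

>>> import torch
>>> from AdversarialTensors.btensor.svd import svd_flip
>>> X = torch.randn(20, 10)
>>> U, S, Vh = torch.linalg.svd(X, full_matrices=False)
>>> U_adj, V_adj = svd_flip(U, Vh, u_based_decision=True)  # deterministic signs
>>> ok = torch.allclose(U_adj @ torch.diag(S) @ V_adj, X, atol=1e-5)  # product unchanged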

AdversarialTensors.btensor.svd.svd_interface(matrix, method='truncated_svd', n_eigenvecs=None, flip_sign=True, u_based_flip_sign=True, non_negative=None, mask=None, n_iter_mask_imputation=5, **kwargs)[source]#

Dispatching function to various SVD algorithms, alongside additional properties such as resolving sign invariance, imputation, and non-negativity.

Parameters:
  • matrix (tensor) -- A 2D tensor.

  • method (str, default is 'truncated_svd') -- Function to use to compute the SVD, acceptable values in tensorly.SVD_FUNS or a callable.

  • n_eigenvecs (int, optional, default is None) -- If specified, number of eigen[vectors-values] to return.

  • flip_sign (bool, optional, default is True) -- Whether to resolve the sign indeterminacy of SVD.

  • u_based_flip_sign (bool, optional, default is True) -- Whether the sign indeterminacy should be resolved using U (vs. V).

  • non_negative (bool, optional, default is False) -- Whether to make the SVD results non-negative.

  • nn_type (str, default is 'nndsvd') -- Algorithm to use for converting U to be non-negative.

  • mask (tensor, default is None) -- Array of booleans with the same shape as matrix. Should be 0 where the values are missing and 1 everywhere else. None if nothing is missing.

  • n_iter_mask_imputation (int, default is 5) -- Number of repetitions to apply in missing value imputation.

  • **kwargs (optional) -- Arguments passed along to individual SVD algorithms.

Returns:

  • U (2-D tensor, shape (matrix.shape[0], n_eigenvecs)) -- Contains the left singular vectors of matrix

  • S (1-D tensor, shape (n_eigenvecs, )) -- Contains the singular values of matrix

  • V (2-D tensor, shape (n_eigenvecs, matrix.shape[1])) -- Contains the right singular vectors of matrix
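
A minimal usage sketch of the dispatcher (assuming a torch input):

>>> import torch
>>> from AdversarialTensors.btensor.svd import svd_interface
>>> X = torch.randn(100, 50)
>>> U, S, V = svd_interface(X, method='truncated_svd', n_eigenvecs=10, flip_sign=True)
>>> shapes = (U.shape, S.shape, V.shape)  # ((100, 10), (10,), (10, 50))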

AdversarialTensors.btensor.svd.truncated_svd(matrix, n_eigenvecs=None, **kwargs)[source]#

Computes a truncated SVD on matrix using the backend's standard SVD.

Parameters:
  • matrix (2D-array) --

  • n_eigenvecs (int, optional, default is None) -- if specified, number of eigen[vectors-values] to return

Returns:

  • U (2D-array) -- of shape (matrix.shape[0], n_eigenvecs) contains the left singular vectors

  • S (1D-array) -- of shape (n_eigenvecs, ) contains the singular values of matrix

  • V (2D-array) -- of shape (n_eigenvecs, matrix.shape[1]) contains the right singular vectors
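
A minimal usage sketch (assuming a torch input):

>>> import torch
>>> from AdversarialTensors.btensor.svd import truncated_svd
>>> X = torch.randn(60, 40)
>>> U, S, V = truncated_svd(X, n_eigenvecs=5)  # keep the leading 5 components
>>> shapes = (U.shape, S.shape, V.shape)  # ((60, 5), (5,), (5, 40))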

AdversarialTensors.btensor.tenalg module#

AdversarialTensors.btensor.tenalg.batch_mode_dot(X, v, mode, transpose=False)[source]#

Perform batched mode-dot product along a specific mode.

Parameters:
  • X -- The input tensor

  • v -- The matrix to be multiplied

  • mode -- The mode along which to multiply

  • transpose -- Whether to transpose the matrix v before multiplication

Returns:

The result of the mode-dot operation
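
A minimal usage sketch, assuming a leading batch dimension on both X and v and that mode indexing counts the batch dimension (neither detail is stated above):

>>> import torch
>>> from AdversarialTensors.btensor.tenalg import batch_mode_dot
>>> X = torch.randn(8, 4, 5, 6)       # batch of 4 x 5 x 6 tensors
>>> v = torch.randn(8, 3, 5)          # one 3 x 5 matrix per batch entry, acting on mode 2
>>> Y = batch_mode_dot(X, v, mode=2)  # contracts the size-5 mode; expected shape (8, 4, 3, 6)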

AdversarialTensors.btensor.tenalg.batch_multi_mode_dot(X, v, modes=None, transpose=False, skip=None)[source]#

Perform batched multi-mode dot product.

Parameters:
  • X -- The input tensor

  • v -- The list of matrices to be multiplied

  • modes -- The modes along which to multiply each matrix

  • transpose -- Whether to transpose the matrices before multiplication

  • skip -- If specified, the mode to skip during multiplication

Returns:

The result of the multi-mode dot operation
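
A sketch of multiplying one matrix per non-batch mode, under the same layout assumptions as above; the exact semantics of skip (mode number vs. list index) are assumed here:

>>> import torch
>>> from AdversarialTensors.btensor.tenalg import batch_multi_mode_dot
>>> X = torch.randn(8, 4, 5, 6)
>>> mats = [torch.randn(8, 2, 4), torch.randn(8, 3, 5), torch.randn(8, 2, 6)]
>>> Y = batch_multi_mode_dot(X, mats, modes=[1, 2, 3])          # contract every non-batch mode
>>> Z = batch_multi_mode_dot(X, mats, modes=[1, 2, 3], skip=1)  # leave mode 1 uncontracted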

AdversarialTensors.btensor.tenalg.batch_ttm(X, v, mode, transpose=False)[source]#

Perform batched tensor-times-matrix (TTM) multiplication along given mode(s).

Parameters:
  • X -- The input tensor

  • v -- The matrix (or list of matrices) to be multiplied

  • mode -- The mode(s) along which to multiply

  • transpose -- Whether to transpose the matrix before multiplication

Returns:

The result of the TTM operation
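
A sketch showing both single-mode and multi-mode use, under the same layout assumptions as the mode-dot examples above:

>>> import torch
>>> from AdversarialTensors.btensor.tenalg import batch_ttm
>>> X = torch.randn(8, 4, 5, 6)
>>> Y = batch_ttm(X, torch.randn(8, 3, 5), mode=2)                               # one matrix, one mode
>>> Z = batch_ttm(X, [torch.randn(8, 2, 4), torch.randn(8, 3, 5)], mode=[1, 2])  # list of matrices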

AdversarialTensors.btensor.tenalg.batched_naive_rank_estimator(eigsumthresh, eigvals)[source]#

Batched estimation of tensor rank based on a given eigenvalue sum threshold.

Parameters:
  • eigsumthresh -- The eigenvalue sum threshold

  • eigvals -- The eigenvalues

Returns:

Estimated ranks for each sample in the batch
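
A sketch of the rank estimator; the exact truncation criterion (discarding the trailing eigenvalues whose sum stays below the threshold) is an assumption based on the description above:

>>> import torch
>>> from AdversarialTensors.btensor.tenalg import batched_naive_rank_estimator
>>> eigvals = torch.tensor([[4.0, 2.0, 0.5, 0.1],
...                         [3.0, 0.2, 0.1, 0.05]])  # one spectrum per batch entry
>>> ranks = batched_naive_rank_estimator(eigsumthresh=0.5, eigvals=eigvals)  # one rank per batch entry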

AdversarialTensors.btensor.tenalg.batched_reconstruct(data, factors)[source]#

Perform batched reconstruction of tensor from its factors.

Parameters:
  • data -- The input tensor

  • factors -- The factor matrices

Returns:

The reconstructed tensor

AdversarialTensors.btensor.tenalg.batched_regress_core(data, factors)[source]#

Perform batched regression on the core tensor.

Parameters:
  • data -- The input tensor

  • factors -- The factor matrices

Returns:

The core tensor after regression
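
A sketch combining batched_regress_core with batched_reconstruct above: the core is obtained by regressing the data onto the factors, and the reconstruction maps it back to the data space. The factor layout (batch, mode size, rank) and the assumption that batched_reconstruct accepts the core as its data argument are not stated above and are assumptions here:

>>> import torch
>>> from AdversarialTensors.btensor.tenalg import batched_regress_core, batched_reconstruct
>>> X = torch.randn(8, 4, 5, 6)
>>> factors = [torch.randn(8, 4, 2), torch.randn(8, 5, 3), torch.randn(8, 6, 2)]
>>> core = batched_regress_core(X, factors)     # project the data onto the factors
>>> X_hat = batched_reconstruct(core, factors)  # map the core back to the data space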

AdversarialTensors.btensor.tucker module#

AdversarialTensors.btensor.tucker.initialize_tucker(tensor, rank, modes, random_state, init='svd', svd='truncated_svd', non_negative=False)[source]#

Initializes the core and factors used in the Tucker decomposition. The type of initialization is set using the init parameter. If init is set to 'random', factor matrices are initialized using random_state. If init is set to 'svd', the m-th factor matrix is initialized using the first rank left singular vectors of the m-th unfolding of the input tensor.

Parameters:
  • tensor (torch.Tensor) -- Input tensor to be decomposed.

  • rank (int) -- Number of components.

  • modes (list of int) -- List of modes for Tucker decomposition.

  • random_state (int) -- Random seed for initialization.

  • init ({'svd', 'random'}, optional) -- Type of initialization ('svd' or 'random').

  • svd (str, default is 'truncated_svd') -- Function to use for SVD, acceptable values in tensorly.SVD_FUNS.

  • non_negative (bool, default is False) -- If True, non-negative factors are returned. Has no effect for now.

Returns:

A tuple containing:

  • core (torch.Tensor) -- Initialized core tensor.

  • factors (list) -- List of initialized factor matrices.

Return type:

tuple
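
A minimal usage sketch with SVD initialization, assuming a leading batch dimension (the shapes are illustrative):

>>> import torch
>>> from AdversarialTensors.btensor.tucker import initialize_tucker
>>> X = torch.randn(8, 4, 5, 6)
>>> core, factors = initialize_tucker(X, rank=3, modes=[1, 2, 3],
...                                   random_state=0, init='svd', svd='truncated_svd')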

AdversarialTensors.btensor.tucker.partial_tucker(tensor, rank, modes=None, n_iter_max=100, init='svd', tol=0.0001, svd='truncated_svd', random_state=0, verbose=False, mask=None, svd_mask_repeats=5)[source]#

Partial Tucker decomposition via Higher Order Orthogonal Iteration (HOI)

Decomposes tensor into a Tucker decomposition exclusively along the provided modes.

Parameters:
  • tensor (ndarray) --

  • rank (None, int or int list) -- size of the core tensor (len(rank) == tensor.ndim); if int, the same rank is used for all modes

  • modes (None, int list) -- list of the modes on which to perform the decomposition

  • n_iter_max (int) -- maximum number of iterations

  • init ({'svd', 'random'} or TuckerTensor, optional) -- if a TuckerTensor is provided, it is used for initialization

  • svd (str, default is 'truncated_svd') -- function to use to compute the SVD, acceptable values in tensorly.tenalg.svd.SVD_FUNS

  • tol (float, optional) -- tolerance: the algorithm stops when the variation in the reconstruction error is less than the tolerance

  • random_state ({None, int, np.random.RandomState}) --

  • verbose (int, optional) -- level of verbosity

  • mask (ndarray) -- array of booleans with the same shape as tensor; should be 0 where the values are missing and 1 everywhere else. Note: if tensor is sparse, then mask should also be sparse with a fill value of 1 (or True).

Returns:

  • core (ndarray) -- core tensor of the Tucker decomposition

  • factors (ndarray list) -- list of factors of the Tucker decomposition, with factors[i] of shape (tensor.shape[i], ranks[i]) for i in modes
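
A minimal usage sketch decomposing only along a subset of modes, assuming a leading batch dimension:

>>> import torch
>>> from AdversarialTensors.btensor.tucker import partial_tucker
>>> X = torch.randn(8, 4, 5, 6)
>>> core, factors = partial_tucker(X, rank=2, modes=[2, 3],
...                                n_iter_max=100, init='svd', tol=1e-4)
>>> # factors holds one matrix per mode in modes; the other modes are left untouched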

AdversarialTensors.btensor.tucker.tucker(tensor, rank, fixed_factors=None, n_iter_max=100, init='svd', return_errors=False, svd='truncated_svd', tol=0.0001, random_state=0, mask=None, verbose=False)[source]#

Tucker decomposition via Higher Order Orthogonal Iteration (HOI)

Decomposes tensor into a Tucker decomposition: tensor = [| core; factors[0], ..., factors[-1] |] [1]

Parameters:
  • tensor (torch.tensor) --

  • rank (None, int or int list) -- size of the core tensor (len(rank) == tensor.ndim - 1); if int, the same rank is used for all modes

  • fixed_factors (int list or None, default is None) -- if not None, list of modes for which to keep the factors fixed. Only valid if a Tucker tensor is provided as init.

  • n_iter_max (int) -- maximum number of iterations

  • init ({'svd', 'random'}, optional) --

  • return_errors (boolean) -- Indicates whether the algorithm should return the reconstruction errors and computation time of each iteration. Default: False

  • svd (str, default is 'truncated_svd') -- function to use to compute the SVD, acceptable values in tensorly.SVD_FUNS

  • tol (float, optional) -- tolerance: the algorithm stops when the variation in the reconstruction error is less than the tolerance

  • random_state ({None, int, np.random.RandomState}) --

  • mask (ndarray) -- array of booleans with the same shape as tensor; should be 0 where the values are missing and 1 everywhere else. Note: if tensor is sparse, then mask should also be sparse with a fill value of 1 (or True).

  • verbose (int, optional) -- level of verbosity

Returns:

  • core (torch.tensor of size ranks) -- core tensor of the Tucker decomposition

  • factors (torch.tensor list) -- list of factors of the Tucker decomposition. Its i-th element is of shape (tensor.shape[i], ranks[i])
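
A minimal usage sketch, assuming a leading batch dimension and torch inputs:

>>> import torch
>>> from AdversarialTensors.btensor.tucker import tucker
>>> X = torch.randn(8, 4, 5, 6)
>>> core, factors = tucker(X, rank=[2, 3, 2], n_iter_max=100, init='svd', tol=1e-4)
>>> # one factor per non-batch mode, with factors[i] of shape (tensor.shape[i], ranks[i]) as described above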

References

Module contents#