Utility
- class gdeep.utility.ActivationFunction(value)
The activation function.
- class gdeep.utility.AttentionType(value)
The type of attention.
- class gdeep.utility.KnownWarningSilencer
Silence all warnings within a with statement with this class.
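A minimal usage sketch, assuming the class is instantiated directly as a context manager (the warning below is purely illustrative):

import warnings
from gdeep.utility import KnownWarningSilencer

# any warning raised inside the with block is silenced
with KnownWarningSilencer():
    warnings.warn("this known warning will not be shown")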
- class gdeep.utility.LayerNormStyle(value)
The style of layer normalization.
- class gdeep.utility.PoolerType(value)
An enumeration.
- gdeep.utility.autoreload_if_notebook() → None
Autoreload the modules if the environment is a notebook
- Returns:
None
- gdeep.utility.ensemble_wrapper(clss: Type)
Function to wrap the ensemble estimators of the torchensemble library. The only argument is the estimator class. Then you can initialise the output of this function as you would normally do for the original torchensemble class.
- Args:
- clss:
the class of the estimator, like VotingClassifier for example
- Returns:
- type:
the initialised ensemble estimator class that is compatible with giotto-deep
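A minimal usage sketch, assuming torchensemble is installed; MyEstimator is a hypothetical toy module defined here only for illustration:

import torch
from torch import nn
from torchensemble import VotingClassifier
from gdeep.utility import ensemble_wrapper

class MyEstimator(nn.Module):
    # hypothetical tiny estimator used only for illustration
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(10, 2)

    def forward(self, x):
        return self.net(x)

# wrap the torchensemble class so it becomes compatible with giotto-deep
WrappedVotingClassifier = ensemble_wrapper(VotingClassifier)

# initialise it exactly as you would the original torchensemble class
ensemble = WrappedVotingClassifier(
    estimator=MyEstimator,
    n_estimators=5,
    cuda=False,
)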
- gdeep.utility.get_parameter_types(func: Callable) → List[Any]
Returns a list of the types of the parameters of a function.
- Args:
func: The function to get the types of the parameters of.
- Returns:
A list of the types of the parameters of the function.
- gdeep.utility.get_return_type(func: Callable) → Any
Returns the type of the return value of a function.
- Args:
func: The function to get the type of the return value of.
- Returns:
The type of the return value of the function.
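A small sketch of both introspection helpers on a hypothetical annotated function (the exact objects returned depend on the type annotations):

from gdeep.utility import get_parameter_types, get_return_type

# hypothetical annotated function used only for illustration
def scale(x: float, factor: int = 2) -> float:
    return x * factor

get_parameter_types(scale)  # expected: [<class 'float'>, <class 'int'>]
get_return_type(scale)      # expected: <class 'float'>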
- gdeep.utility.is_notebook() → bool
Check if the current environment is a notebook
- Returns:
- bool:
True if the environment is a notebook, False otherwise
- gdeep.utility.save_model_and_optimizer(model: Module, model_name: None | str = None, trial_id: None | str = None, optimizer: Optimizer | None = None, store_pickle: bool = False)
Save the model and the optimizer state_dict
- Args:
- model (nn.Module):
the model to be saved
- model_name (str):
model name
- trial_id (str):
trial id to add to the name
- optimizer (torch.optim):
the optimizer to save
- store_pickle (bool, default False):
whether to store a pickle file of the model instead of the state_dict; by default the state_dicts are stored
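A minimal sketch with a toy model and optimizer (how and where the files are written is handled by the library):

import torch
from torch import nn
from gdeep.utility import save_model_and_optimizer

model = nn.Linear(10, 2)  # hypothetical toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# store the state_dicts of both the model and the optimizer
save_model_and_optimizer(
    model,
    model_name="linear_classifier",
    trial_id="trial_0",
    optimizer=optimizer,
)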
- gdeep.utility.torch_transform(transform: Callable[[Tensor], Tensor] | Callable[[ndarray], ndarray]) → Callable[[Tensor], Tensor]
Transforms a numpy array transform to a torch transform.
- Args:
- transform: Either a callable that takes a numpy array and returns a
numpy array or a callable that takes a tensor and returns a tensor.
- Returns:
The torch transform.
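A short sketch turning a hypothetical numpy transform into one that accepts and returns torch tensors:

import numpy as np
import torch
from gdeep.utility import torch_transform

# numpy-based transform used only for illustration
def normalise(x: np.ndarray) -> np.ndarray:
    return (x - x.mean()) / (x.std() + 1e-8)

t_normalise = torch_transform(normalise)
out = t_normalise(torch.rand(3, 4))  # out is a torch.Tensor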
Persistence Gradient
- exception gdeep.utility.optimization.MissingClosureError
- class gdeep.utility.optimization.PersistenceGradient(homology_dimensions: List[int] | None, zeta: float = 0.5, collapse_edges: bool = False, max_edge_length: float = inf, approx_digits: int = 6, metric: str = 'euclidean', directed: bool = False)
This class computes the gradient of the persistence diagram with respect to a point cloud. The algorithm was first developed in https://arxiv.org/abs/2010.08356 .
Disclaimer: this algorithm works well for generic point clouds. In case your point cloud has many simplices with the same filtration values, the matching of the points to the persistent features may fail to disambiguate.
- Args:
- zeta :
the relative weight of the regularisation part of the persistence_function
- homology_dimensions :
tuple of homology dimensions
- collapse_edges :
whether to use Collapse or not. Not implemented yet.
- max_edge_length :
the maximum edge length to be considered not infinity
- approx_digits :
digits after which to truncate floats for list comparison
- metric :
either "euclidean" or "precomputed". The second one is in case of x being the pairwise-distance matrix or the adjacency matrix of a graph.
- directed :
whether the input graph is a directed graph or not. Relevant only if metric = "precomputed"
Examples:
import torch
from gdeep.utility.optimization import PersistenceGradient

# prepare the datum
X = torch.tensor([[1, 0.], [0, 1.], [2, 2], [2, 1]])
# select the homology dimensions
hom_dim = [0, 1]
# initialise the class
pg = PersistenceGradient(homology_dimensions=hom_dim,
                         zeta=0.1,
                         max_edge_length=3,
                         collapse_edges=False)
# run the optimisation for four epochs!
pg.sgd(X, n_epochs=4, lr=0.4)
- persistence_function(xx: Tensor) → Tensor
This is the Loss function to optimise. \(L=-\sum_i^p |\epsilon_{i2}-\epsilon_{i1}|+ \lambda \sum_{x \in X} ||x||_2^2\) It is composed of a regularisation term and a function on the filtration values that is (p,q)-permutation invariant.
- Args:
- xx:
this is the persistence function argument, a tensor
- Returns:
- Tensor:
the function value at xx
- phi(x: Tensor) → Tensor
This function is from \((R^d)^n\) to \(R^{|K|}\), where K is the top simplicial complex of the VR filtration. It is defined as: \(\Phi_{\sigma}(X)=max_{i,j \in \sigma}||x_i-x_j||.\)
- Args:
- x:
the argument of \(\Phi\), a tensor
- Returns:
- Tensor:
the value of \(\Phi\) at x
- static powerset(iterable, max_length)
Compute the powerset of iterable, up to subsets of length max_length: powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3).
- sgd(xx: Tensor, lr: float = 0.01, n_epochs: int = 5) → Tuple[Figure, Figure, List[float]]
This function is the core function of this class and uses the SGD method to move points around in order to optimise persistence_function.
- Args:
- xx:
2d tensor representing the point cloud; the first dimension is n_points while the second is n_features
- lr:
learning rate for the SGD
- n_epochs:
the number of gradient iterations
- Returns:
- fig, fig3d, loss_val:
respectively the plotly quiver_plot, the plotly cone_plot, and the list of loss function values over the epochs
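Continuing the example above, the three return values can be unpacked directly (showing the figures assumes an environment that can display plotly figures):

fig, fig3d, loss_val = pg.sgd(X, lr=0.4, n_epochs=4)
fig.show()       # plotly quiver plot of the point displacements
fig3d.show()     # plotly cone plot
print(loss_val)  # loss values over the epochs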
- class gdeep.utility.optimization.SAM(params, base_optimizer=None, rho=0.05, adaptive=False, **kwargs)
- step(closure=None)
Performs a single optimization step (parameter update).
- Args:
- closure (callable): A closure that reevaluates the model and
returns the loss. Optional for most optimizers.
Note
Unless otherwise specified, this function should not modify the .grad field of the parameters.
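A minimal sketch of a SAM update driven by a closure; the model, data and loss below are hypothetical, and passing base-optimizer keyword arguments such as lr through **kwargs is an assumption:

import torch
from torch import nn
from gdeep.utility.optimization import SAM

model = nn.Linear(4, 2)  # hypothetical toy model
x, y = torch.rand(8, 4), torch.randint(0, 2, (8,))
criterion = nn.CrossEntropyLoss()

optimizer = SAM(model.parameters(), base_optimizer=torch.optim.SGD, lr=0.1)

def closure():
    # re-evaluates the model and returns the loss, as required by step()
    loss = criterion(model(x), y)
    loss.backward()
    return loss

criterion(model(x), y).backward()  # first forward/backward pass
optimizer.step(closure)            # omitting the closure raises MissingClosureError
optimizer.zero_grad()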
- class gdeep.utility.optimization.SAMOptimizer(optimizer)
This is the class to be used in the pipeline.
First you need to initialise this class with the torch optimiser that you would like to apply SAM to.
Then, at the following call, the instance behaves exactly like the SAM optimiser.
- Args:
- optimizer (torch.optim):
the optimiser class (not the instance)
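A plausible sketch, assuming the wrapped class is then instantiated with the model parameters like any torch optimizer:

import torch
from torch import nn
from gdeep.utility.optimization import SAMOptimizer

model = nn.Linear(4, 2)  # hypothetical toy model

# pass the optimizer class (not an instance)
sam_sgd = SAMOptimizer(torch.optim.SGD)

# the resulting instance then behaves like the SAM optimiser
optimizer = sam_sgd(model.parameters(), lr=0.01, momentum=0.9)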
Extended Persistence
- gdeep.utility.extended_persistence.graph_extended_persistence_hks(adj_mat: ndarray, diffusion_parameter: float = 1.0) → OneHotEncodedPersistenceDiagram
Compute the extended persistence of a graph.
- Args:
- adj_mat:
The adjacency matrix of the graph.
- diffusion_parameter:
The diffusion parameter of the heat kernel.
- Returns:
- OneHotEncodedPersistenceDiagram:
The extended persistence of the graph.
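A short usage sketch on a tiny graph; the adjacency matrix is illustrative:

import numpy as np
from gdeep.utility.extended_persistence import graph_extended_persistence_hks

# adjacency matrix of a small triangle graph
adj_mat = np.array([
    [0., 1., 1.],
    [1., 0., 1.],
    [1., 1., 0.],
])

diagram = graph_extended_persistence_hks(adj_mat, diffusion_parameter=1.0)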