Analysis

Interpretability

class gdeep.analysis.interpretability.AttributionFactory

Attribution factory class for the Captum integration. This factory contains the attribution techniques of captum.

build(key: str, *args, **kwargs) Any

This method returns the attribution builder corresponding to the input key.

Args:
key:

the name of the attribution technique

register_builder(key: str, builder: Any)

This method adds new attribution builders to the internal builders dictionary.
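A minimal usage sketch, assuming (as is usual for this factory pattern, though not stated here) that build() forwards its extra arguments to the registered builder:

    import torch.nn as nn
    from captum.attr import Saliency

    from gdeep.analysis.interpretability import AttributionFactory

    factory = AttributionFactory()
    factory.register_builder("Saliency", Saliency)

    # assumed: build() forwards *args/**kwargs to the stored builder
    model = nn.Linear(4, 2)  # any torch model
    saliency = factory.build("Saliency", model)  # -> captum.attr.Saliency instance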

class gdeep.analysis.interpretability.Interpreter(model: Module, method: str = 'IntegratedGradients')

Class to visualise activation maps, attribution maps and saliency maps using different techniques.

Args:
model:

a standard PyTorch model

method:

the interpretability method. Find more info at https://captum.ai/tutorials/
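For example, a minimal construction following the documented signature (the model below is only a stand-in):

    import torch.nn as nn

    from gdeep.analysis.interpretability import Interpreter

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    inter = Interpreter(model, method="IntegratedGradients")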

feature_importance(x: Tensor, y: Tensor, **kwargs: Any) Tuple[Tensor, List[Any]]

This method creates a tabular data interpreter. It is based on captum.

Args:
x:

the datum

y:

the target label

kwargs:

kwargs for the attributions

Returns:
(Tensor, List[Any]):

returns x and the list of its attributions
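A self-contained usage sketch; the toy model is a stand-in, and kwargs such as n_steps (a captum IntegratedGradients keyword) are assumed to be forwarded to the underlying attribution call:

    import torch
    import torch.nn as nn

    from gdeep.analysis.interpretability import Interpreter

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    inter = Interpreter(model, method="IntegratedGradients")

    x = torch.rand(16, 4)           # a batch of tabular data
    y = torch.randint(0, 2, (16,))  # integer class targets

    # assumed: **kwargs are forwarded to captum's attribute() call
    x_out, attributions = inter.feature_importance(x, y, n_steps=50)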

interpret(x: Tensor, y: Any | None = None, layer: Module | None = None, **kwargs: Any) Tuple[Tensor, Tensor]

This method creates a datum interpreter. It is based on captum.

Args:
x:

the tensor corresponding to the input datum. If the datum is an image, for example, it is expected to have shape (b, c, h, w)

y:

the label we want to check the interpretability of.

layer (nn.Module, optional):

some methods require specifying a layer of self.model

Returns:
(torch.Tensor, torch.Tensor):

the input datum and the attribution image respectively.
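A sketch for an image input; the toy convolutional classifier is only an assumption used to make the example self-contained:

    import torch
    import torch.nn as nn

    from gdeep.analysis.interpretability import Interpreter

    # toy image classifier accepting (b, c, h, w) input
    model = nn.Sequential(
        nn.Conv2d(3, 4, kernel_size=3), nn.ReLU(),
        nn.Flatten(), nn.Linear(4 * 30 * 30, 2),
    )
    inter = Interpreter(model, method="IntegratedGradients")

    img = torch.rand(1, 3, 32, 32)  # one RGB image, shape (b, c, h, w)
    datum, attribution = inter.interpret(img, y=0)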

interpret_text(sentence: str, label: Any, vocab: Dict[str, int], tokenizer: Callable[[str], List[str]], layer: Module | None = None, min_len: int = 7, **kwargs) Tuple[str, Tensor]

This method creates a text interpreter. It is based on captum.

Args:
sentence:

the input sentence

label:

the label we want to check the interpretability of.

vocab:

a gdeep.data.preprocessors vocabulary. Can be extracted via the vocabulary attribute.

tokenizer:

a gdeep.data.preprocessors tokenizer. Can be extracted via the tokenizer attribute.

layer:

a torch module corresponding to a layer of self.model

min_len:

minimum length of the text. Shorter texts are padded

kwargs:

additional arguments for the attribution
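A heavily hedged sketch: it assumes inter wraps an embedding-based text classifier (construction omitted), the vocab and whitespace_tokenizer below are hypothetical stand-ins for the gdeep.data.preprocessors objects, and depending on the chosen method the layer argument may also be required:

    # hypothetical stand-ins for gdeep.data.preprocessors vocab/tokenizer
    vocab = {"<pad>": 0, "the": 1, "movie": 2, "was": 3, "great": 4}

    def whitespace_tokenizer(text):
        return text.lower().split()

    sentence, attribution = inter.interpret_text(
        "the movie was great",
        label=1,
        vocab=vocab,
        tokenizer=whitespace_tokenizer,
        min_len=7,  # shorter sentences are padded up to this length
    )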

gdeep.analysis.interpretability.get_attr(key: str, *args, **kwargs) Any

Get an attribution technique from the factory

Args:
key:

The name of the attribution technique, corresponding to the key in the list of builders

**kwargs:

The keyword arguments to pass to the attribution builder

Returns:
captum.attr:

The captum interpretability tool
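For instance, assuming "IntegratedGradients" is a registered key (the Interpreter default suggests so) and that positional arguments are forwarded to the captum constructor:

    import torch.nn as nn

    from gdeep.analysis.interpretability import get_attr

    model = nn.Linear(4, 2)
    ig = get_attr("IntegratedGradients", model)  # -> captum.attr.IntegratedGradients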

Decision Boundary

class gdeep.analysis.decision_boundary.DecisionBoundaryCalculator

Abstract class for calculating the decision boundary of a neural network

abstract get_decision_boundary() Tensor

Returns the current state without making a step

Returns:
torch.Tensor:

current estimate of the decision boundary

abstract step(number_of_steps=1)

Performs the indicated number of steps towards the decision boundary

class gdeep.analysis.decision_boundary.GradientFlowDecisionBoundaryCalculator(model: Callable[[Tensor], Tensor], initial_points: Tensor, optimizer: Callable[[list], Optimizer])

Computes the decision boundary using the gradient flow method

Args:
model (Callable[[torch.Tensor], torch.Tensor]):

Function that maps a torch.Tensor of shape (N, D_in) to a tensor either of shape (N,) with values in [0, 1], or of shape (N, D_out) with values in [0, 1] such that the last axis sums to 1.

initial_points (torch.Tensor):

torch.Tensor of shape (N, D_in)

optimizer (Callable[[list], torch.optim.Optimizer]):

Function returning an optimizer for the parameters given as an argument.

get_decision_boundary() Tensor

Returns the current state without making a step

Returns:
torch.Tensor:

current estimate of the decision boundary

step(number_of_steps=1)

Performs the indicated number of steps towards the decision boundary
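A self-contained sketch matching the documented signature; the toy network, learning rate and step count are illustrative only:

    import torch
    import torch.nn as nn

    from gdeep.analysis.decision_boundary import GradientFlowDecisionBoundaryCalculator

    # binary classifier mapping (N, 2) -> (N,) with values in [0, 1]
    net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

    calc = GradientFlowDecisionBoundaryCalculator(
        model=lambda x: net(x).squeeze(-1),
        initial_points=torch.rand(100, 2),
        optimizer=lambda params: torch.optim.Adam(params, lr=1e-2),
    )
    calc.step(number_of_steps=50)
    boundary = calc.get_decision_boundary()  # current (N, 2) estimate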

class gdeep.analysis.decision_boundary.QuasihyperbolicDecisionBoundaryCalculator(model: Callable[[Tensor], Tensor], initial_points: Tensor, initial_vectors: Tensor, integrator=None)

Computes the decision boundary by emanating quasihyperbolic geodesics

get_decision_boundary() Tensor

Returns the current state without making a step

Returns:
torch.Tensor:

current estimate of the decision boundary

step(number_of_steps=1)

Performs the indicated number of steps towards the decision boundary
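A hedged sketch; the expected shape of initial_vectors is not documented here, so the example assumes one direction vector per initial point (same shape as initial_points) and leaves integrator at its default:

    import torch
    import torch.nn as nn

    from gdeep.analysis.decision_boundary import QuasihyperbolicDecisionBoundaryCalculator

    net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

    qh_calc = QuasihyperbolicDecisionBoundaryCalculator(
        model=lambda x: net(x).squeeze(-1),
        initial_points=torch.rand(100, 2),
        initial_vectors=torch.randn(100, 2),  # assumed: one direction per point
    )
    qh_calc.step(number_of_steps=20)
    boundary = qh_calc.get_decision_boundary()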

class gdeep.analysis.decision_boundary.UniformlySampledPoint(tuple_list: list, n_samples: int = 1000)

Generates n_samples uniformly distributed random points in the box specified by tuple_list

Args:
tuple_list (list):

list of dimensionwise upper and lower bounds of the box

n_samples (int, optional):

number of sample points. Defaults to 1000.

get_dim()

Returns the dimension of the sampled point cloud

Returns:
int:

the dimension of the point cloud
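For instance, sampling 500 points from the box [0, 1] x [-1, 1]; only get_dim() is documented here, so retrieval of the raw samples is omitted:

    from gdeep.analysis.decision_boundary import UniformlySampledPoint

    sampler = UniformlySampledPoint([(0, 1), (-1, 1)], n_samples=500)
    print(sampler.get_dim())  # -> 2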