RadialFiltration

class gtda.images.RadialFiltration(center=None, radius=inf, metric='euclidean', metric_params=None, n_jobs=None)[source]

Filtrations of 2D/3D binary images based on distances to a reference pixel.

The radial filtration assigns to each pixel of a binary image a greyscale value computed as follows in terms of a reference pixel, called the “center”, and of a “radius”: if the binary pixel is active and lies within the ball defined by this center and this radius, then the assigned value equals the distance between the pixel and the center. In all other cases, the assigned value equals the maximum distance between any pixel of the image and the center pixel, plus one.
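
As an illustration of this rule, the value assigned to each pixel of a single 2D image can be sketched in plain NumPy as follows (a simplified sketch only, not the implementation used by this class; the image, center and radius below are arbitrary):

>>> import numpy as np
>>> image = np.random.rand(8, 8) > 0.5                 # a random 2D binary image
>>> center, radius = np.array([3, 3]), 4.0             # arbitrary reference pixel and radius
>>> coords = np.stack(np.mgrid[0:8, 0:8], axis=-1)     # coordinates of every pixel
>>> dist = np.linalg.norm(coords - center, axis=-1)    # Euclidean distance of each pixel to the center
>>> fill_value = dist.max() + 1                        # value assigned to all other pixels
>>> filtration = np.where(image & (dist <= radius), dist, fill_value)
>>> filtration.shape
(8, 8)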

Parameters
  • center (ndarray of shape (n_dimensions_,) or None, optional, default: None) – Coordinates of the center pixel, where n_dimensions_ is the dimension of the images in the collection (2 or 3). None is equivalent to passing np.zeros(n_dimensions_).

  • radius (float, optional, default: numpy.inf) – The radius of the ball, centered at center, within which activated pixels are included in the filtration.

  • metric (string or callable, optional, default: 'euclidean') – If set to 'precomputed', each entry in X along axis 0 is interpreted to be a distance matrix. Otherwise, entries are interpreted as feature arrays, and metric determines a rule with which to calculate distances between pairs of instances (i.e. rows) in these arrays. If metric is a string, it must be one of the options allowed by scipy.spatial.distance.pdist for its metric parameter, or a metric listed in sklearn.metrics.pairwise.PAIRWISE_DISTANCE_FUNCTIONS, including “euclidean”, “manhattan” or “cosine”. If metric is a callable, it is called on each pair of instances and the resulting value recorded. The callable should take two arrays from the entry in X as input, and return a value indicating the distance between them.

  • metric_params (dict or None, optional, default: None) – Additional keyword arguments for the metric function.

  • n_jobs (int or None, optional, default: None) – The number of jobs to use for the computation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors.
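
For example (a usage sketch only; the center coordinates below are arbitrary), a transformer using the Manhattan metric and a custom center pixel could be constructed as:

>>> import numpy as np
>>> from gtda.images import RadialFiltration
>>> # Center the filtration on pixel (13, 6) and measure distances with the Manhattan metric
>>> radial = RadialFiltration(center=np.array([13, 6]), metric='manhattan')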

n_dimensions_

Dimension of the images. Set in fit.

Type

2 or 3

center_

Effective center of the radial filtration. Set in fit.

Type

ndarray of shape (n_dimensions_,)

effective_metric_params_

Dictionary containing all information present in metric_params. If metric_params is None, it is set to the empty dictionary.

Type

dict

mesh_

Greyscale image corresponding to the radial filtration of a binary image in which every pixel is activated. Set in fit.

Type

ndarray of shape (n_pixels_x, n_pixels_y [, n_pixels_z])

max_value_

Maximum pixel value among all pixels in all images of the collection. Set in fit.

Type

float

References

[1] A. Garin and G. Tauzin, “A topological reading lesson: Classification of MNIST using TDA”; 19th International IEEE Conference on Machine Learning and Applications (ICMLA 2020), 2019; arXiv:1910.08345.

__init__(center=None, radius=inf, metric='euclidean', metric_params=None, n_jobs=None)[source]

Initialize self. See help(type(self)) for accurate signature.

fit(X, y=None)[source]

Calculate center_, effective_metric_params_, n_dimensions_, mesh_ and max_value_ from a collection of binary images. Then, return the estimator.

This method is here to implement the usual scikit-learn API and hence work in pipelines.

Parameters
  • X (ndarray of shape (n_samples, n_pixels_x, n_pixels_y [, n_pixels_z])) – Input data. Each entry along axis 0 is interpreted as a 2D or 3D binary image.

  • y (None) – There is no need for a target in a transformer, yet the pipeline API requires this parameter.

Returns

self

Return type

object

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
  • X (ndarray of shape (n_samples, n_pixels_x, n_pixels_y [, n_pixels_z])) – Input data. Each entry along axis 0 is interpreted as a 2D or 3D binary image.

  • y (None) – There is no need for a target in a transformer, yet the pipeline API requires this parameter.

Returns

Xt – Transformed collection of images. Each entry along axis 0 is a 2D or 3D greyscale image.

Return type

ndarray of shape (n_samples, n_pixels_x, n_pixels_y [, n_pixels_z])
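
A minimal usage sketch of fit_transform on a small collection of random binary images (the shapes and center below are arbitrary):

>>> import numpy as np
>>> from gtda.images import RadialFiltration
>>> X = np.random.rand(10, 28, 28) > 0.5        # ten random 2D binary images
>>> radial = RadialFiltration(center=np.array([14, 14]))
>>> Xt = radial.fit_transform(X)                # one greyscale image per input image
>>> Xt.shape
(10, 28, 28)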

fit_transform_plot(X, y=None, sample=0, **plot_params)

Fit to data, then apply transform_plot.

Parameters
  • X (ndarray of shape (n_samples, ..)) – Input data.

  • y (ndarray of shape (n_samples,) or None) – Target values for supervised problems.

  • sample (int) – Sample to be plotted.

  • **plot_params – Optional plotting parameters.

Returns

Xt – Transformed one-sample slice from the input.

Return type

ndarray of shape (1, ..)

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

mapping of string to any

static plot(Xt, sample=0, colorscale='greys', origin='upper')[source]

Plot a sample from a collection of 2D greyscale images.

Parameters
  • Xt (ndarray of shape (n_samples, n_pixels_x, n_pixels_y)) – Collection of 2D greyscale images, such as returned by transform.

  • sample (int, optional, default: 0) – Index of the sample in Xt to be plotted.

  • colorscale (str, optional, default: 'greys') – Color scale to be used in the heat map. Can be anything allowed by plotly.graph_objects.Heatmap.

  • origin ('upper' | 'lower', optional, default: 'upper') – Position of the [0, 0] pixel of data, in the upper left or lower left corner. The convention 'upper' is typically used for matrices and images.
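
For instance, assuming Xt is a collection of 2D greyscale images produced by transform, the first sample could be plotted with the call below (whether the resulting plotly figure is displayed directly or returned may depend on the giotto-tda version):

>>> RadialFiltration.plot(Xt, sample=0)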

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

object
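
As a sketch of the nested syntax, a RadialFiltration step inside a scikit-learn Pipeline can be updated via the <component>__<parameter> form (the step name 'filtration' is an arbitrary choice):

>>> from sklearn.pipeline import Pipeline
>>> from gtda.images import RadialFiltration
>>> pipe = Pipeline([('filtration', RadialFiltration())])
>>> # Update the radius of the nested RadialFiltration step
>>> pipe = pipe.set_params(filtration__radius=10.0)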

transform(X, y=None)[source]

For each binary image in the collection X, calculate a corresponding greyscale image based on the distance of its pixels to the center. Return the collection of greyscale images.

Parameters
  • X (ndarray of shape (n_samples, n_pixels_x, n_pixels_y [, n_pixels_z])) – Input data. Each entry along axis 0 is interpreted as a 2D or 3D binary image.

  • y (None) – There is no need for a target in a transformer, yet the pipeline API requires this parameter.

Returns

Xt – Transformed collection of images. Each entry along axis 0 is a 2D or 3D greyscale image.

Return type

ndarray of shape (n_samples, n_pixels_x, n_pixels_y [, n_pixels_z])
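
Since mesh_ and the other fitted attributes are computed in fit, a fitted transformer can be reused to transform new collections of images with the same shape (a sketch with random data and an arbitrary center):

>>> import numpy as np
>>> from gtda.images import RadialFiltration
>>> X_train = np.random.rand(20, 28, 28) > 0.5   # binary images used for fitting
>>> X_new = np.random.rand(5, 28, 28) > 0.5      # new binary images of the same shape
>>> radial = RadialFiltration(center=np.array([14, 14])).fit(X_train)
>>> radial.transform(X_new).shape
(5, 28, 28)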

transform_plot(X, sample=0, **plot_params)

Take a one-sample slice from the input collection and transform it. Before returning the transformed object, plot the transformed sample.

Parameters
  • X (ndarray of shape (n_samples, ..)) – Input data.

  • sample (int) – Sample to be plotted.

  • plot_params (dict) – Optional plotting parameters.

Returns

Xt – Transformed one-sample slice from the input.

Return type

ndarray of shape (1, ..)