ImageToPointCloud

class gtda.images.ImageToPointCloud(n_jobs=None)[source]

Represent active pixels in 2D/3D binary images as points in 2D/3D space.

The coordinates of each point are calculated as follows. Each activated pixel is assigned coordinates equal to its index in the image, after flipping the rows and then swapping rows and columns.

This transformer is meant to transform a collection of images to a collection of point clouds so that persistent homology calculations can be performed.
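
As a minimal usage sketch (the tiny binary images below are made up for illustration, and boolean arrays are assumed to be accepted as binary input), the convention above would send a pixel at row i, column j of an image with n_rows rows to the point (j, n_rows - 1 - i):

import numpy as np
from gtda.images import ImageToPointCloud

# Two made-up 3x4 binary images with one activated pixel each.
X = np.zeros((2, 3, 4), dtype=bool)
X[0, 0, 1] = True  # first image: activated pixel at row 0, column 1
X[1, 2, 3] = True  # second image: activated pixel at row 2, column 3

im2pc = ImageToPointCloud()
Xt = im2pc.fit_transform(X)

# Per the documented return type, Xt has shape
# (n_samples, n_pixels_x * n_pixels_y, 2) = (2, 12, 2).
print(Xt.shape)
print(Xt[0])  # point cloud obtained from the first image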

Parameters

n_jobs (int or None, optional, default: None) – The number of jobs to use for the computation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors.

References

[1] A. Garin and G. Tauzin, “A topological reading lesson: Classification of MNIST using TDA”; 19th International IEEE Conference on Machine Learning and Applications (ICMLA 2020), 2019; arXiv: 1910.08345.

__init__(n_jobs=None)[source]

Initialize self. See help(type(self)) for accurate signature.

fit(X, y=None)[source]

Do nothing and return the estimator unchanged. This method is here to implement the usual scikit-learn API and hence work in pipelines.

Parameters
  • X (ndarray of shape (n_samples, n_pixels_x, n_pixels_y [, n_pixels_z])) – Input data. Each entry along axis 0 is interpreted as a 2D or 3D binary image.

  • y (None) – There is no need of a target in a transformer, yet the pipeline API requires this parameter.

Returns

self

Return type

object
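
Since fit performs no computation and returns the estimator itself, calls can be chained; a small sketch on made-up binary images:

import numpy as np
from gtda.images import ImageToPointCloud

X = np.random.default_rng(0).random((5, 8, 8)) > 0.5  # made-up binary images

im2pc = ImageToPointCloud()
assert im2pc.fit(X) is im2pc  # fit returns self, so it works in pipelines
Xt = im2pc.fit(X).transform(X)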

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
  • X (ndarray of shape (n_samples, n_pixels_x, n_pixels_y [, n_pixels_z])) – Input data. Each entry along axis 0 is interpreted as a 2D or 3D binary image.

  • y (None) – There is no need of a target in a transformer, yet the pipeline API requires this parameter.

Returns

Xt – Transformed collection of images. Each entry along axis 0 is a point cloud in n_dimensions-dimensional space.

Return type

ndarray of shape (n_samples, n_pixels_x * n_pixels_y [* n_pixels_z], n_dimensions)
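
A short sketch of fit_transform on a made-up collection of binary images, checking only the documented output shape:

import numpy as np
from gtda.images import ImageToPointCloud

# Ten made-up 16x16 binary images.
X = np.random.default_rng(42).random((10, 16, 16)) > 0.7

Xt = ImageToPointCloud().fit_transform(X)
# Documented return type: (n_samples, n_pixels_x * n_pixels_y, n_dimensions),
# i.e. (10, 256, 2) here.
print(Xt.shape)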

fit_transform_plot(X, y=None, sample=0, **plot_params)

Fit to data, then apply transform_plot.

Parameters
  • X (ndarray of shape (n_samples, ..)) – Input data.

  • y (ndarray of shape (n_samples,) or None) – Target values for supervised problems.

  • sample (int) – Sample to be plotted.

  • **plot_params – Optional plotting parameters.

Returns

Xt – Transformed one-sample slice from the input.

Return type

ndarray of shape (1, ..)
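
A sketch of fit_transform_plot on made-up data; the call fits the transformer, plots the chosen sample and returns the transformed one-sample slice:

import numpy as np
from gtda.images import ImageToPointCloud

X = np.random.default_rng(1).random((4, 10, 10)) > 0.6  # made-up binary images

Xt_sample = ImageToPointCloud().fit_transform_plot(X, sample=2)
print(Xt_sample.shape)  # a one-sample slice, shape (1, ...)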

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

mapping of string to any
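
For example, since n_jobs is the only parameter of this estimator:

from gtda.images import ImageToPointCloud

print(ImageToPointCloud(n_jobs=2).get_params())  # {'n_jobs': 2}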

static plot(Xt, sample=0)[source]

Plot a sample from a collection of point clouds. If the point cloud is in more than three dimensions, only the first three are plotted.

Parameters
  • Xt (ndarray of shape (n_samples, n_points, n_dimensions)) – Collection of point clouds in n_dimensions-dimensional space, such as returned by transform.

  • sample (int, optional, default: 0) – Index of the sample in Xt to be plotted.
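
A brief sketch of plotting one point cloud from a transformed collection (made-up input images):

import numpy as np
from gtda.images import ImageToPointCloud

X = np.random.default_rng(3).random((3, 12, 12)) > 0.5  # made-up binary images
Xt = ImageToPointCloud().fit_transform(X)

# Plot the point cloud obtained from the first image in the collection.
ImageToPointCloud.plot(Xt, sample=0)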

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

object
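
A sketch of both the flat and the nested forms; the pipeline and its step names ("img2pc", "vr") are chosen here purely for illustration:

from sklearn.pipeline import Pipeline
from gtda.images import ImageToPointCloud
from gtda.homology import VietorisRipsPersistence

# Set a parameter directly on the estimator ...
im2pc = ImageToPointCloud().set_params(n_jobs=2)

# ... or on a component nested inside a pipeline, via <component>__<parameter>.
pipe = Pipeline([
    ("img2pc", ImageToPointCloud()),
    ("vr", VietorisRipsPersistence()),
])
pipe.set_params(img2pc__n_jobs=-1)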

transform(X, y=None)[source]

For each collection of binary images, calculate the corresponding collection of point clouds based on the coordinates of activated pixels.

Parameters
  • X (ndarray of shape (n_samples, n_pixels_x, n_pixels_y [, n_pixels_z])) – Input data. Each entry along axis 0 is interpreted as a 2D or 3D binary image.

  • y (None) – There is no need of a target in a transformer, yet the pipeline API requires this parameter.

Returns

Xt – Transformed collection of images. Each entry along axis 0 is a point cloud in n_dimensions-dimensional space.

Return type

ndarray of shape (n_samples, n_pixels_x * n_pixels_y [* n_pixels_z], n_dimensions)
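
A sketch of the fit-once, transform-later pattern on made-up training and held-out images:

import numpy as np
from gtda.images import ImageToPointCloud

rng = np.random.default_rng(7)
X_train = rng.random((20, 8, 8)) > 0.5  # made-up "training" binary images
X_new = rng.random((5, 8, 8)) > 0.5     # made-up images transformed afterwards

im2pc = ImageToPointCloud().fit(X_train)
Xt_new = im2pc.transform(X_new)
print(Xt_new.shape)  # documented shape: (5, 8 * 8, 2) = (5, 64, 2)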

transform_plot(X, sample=0, **plot_params)

Take a one-sample slice from the input collection and transform it. Before returning the transformed object, plot the transformed sample.

Parameters
  • X (ndarray of shape (n_samples, ..)) – Input data.

  • sample (int) – Sample to be plotted.

  • plot_params (dict) – Optional plotting parameters.

Returns

Xt – Transformed one-sample slice from the input.

Return type

ndarray of shape (1, ..)
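
A final sketch of transform_plot on made-up data; the transformer is fitted first, on the assumption that transform (which transform_plot relies on) expects a fitted estimator:

import numpy as np
from gtda.images import ImageToPointCloud

X = np.random.default_rng(5).random((6, 10, 10)) > 0.6  # made-up binary images

im2pc = ImageToPointCloud().fit(X)
Xt_sample = im2pc.transform_plot(X, sample=1)  # plots sample 1, returns its transform
print(Xt_sample.shape)  # a one-sample slice, shape (1, ...)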