# segment

## Module: segment.benchmarks.bench_quickbundles

Benchmarks for QuickBundles

Run all benchmarks with:

import dipy.segment as dipysegment
dipysegment.bench()


With pytest, run this benchmark with:

pytest -svv -c bench.ini /path/to/bench_quickbundles.py
MDFpy
Metric Computes a distance between two sequential data.
QB_New alias of dipy.segment.clustering.QuickBundles
QB_Old alias of dipy.segment.quickbundles.QuickBundles
assert_array_equal(x, y[, err_msg, verbose]) Raises an AssertionError if two array_like objects are not equal.
assert_arrays_equal(arrays1, arrays2)
assert_equal(actual, desired[, err_msg, verbose]) Raises an AssertionError if two objects are not equal.
bench_quickbundles()
get_fnames([name]) provides filenames of some test datasets or other useful parametrisations
measure(code_str[, times, label]) Return elapsed time for executing code in the namespace of the caller.

## Module: segment.bundles

BundleMinDistanceAsymmetricMetric([num_threads]) Asymmetric Bundle-based Minimum distance
BundleMinDistanceMetric([num_threads]) Bundle-based Minimum Distance aka BMD
BundleSumDistanceMatrixMetric([num_threads]) Bundle-based Sum Distance aka BMD
RecoBundles(streamlines[, greater_than, …]) Methods
StreamlineLinearRegistration([metric, x0, …]) Methods
Streamlines alias of nibabel.streamlines.array_sequence.ArraySequence
chain chain(*iterables) –> chain object
afq_profile(data, bundle[, affine, …]) Calculates a summarized profile of data for a bundle or tract along its length.
apply_affine(aff, pts) Apply affine matrix aff to points pts
ba_analysis(recognized_bundle, expert_bundle)
bundle_adjacency(dtracks0, dtracks1, threshold) Find bundle adjacency between two given tracks/bundles
bundles_distances_mam Calculate distances between list of tracks A and list of tracks B
bundles_distances_mdf Calculate distances between list of tracks A and list of tracks B
check_range(streamline, gt, lt)
gaussian_weights(bundle[, n_points, …]) Calculate weights for each streamline/node in a bundle, based on a Mahalanobis distance from the core of the bundle, at that node (mean, per default).
length Euclidean length of streamlines
mahalanobis(u, v, VI) Compute the Mahalanobis distance between two 1-D arrays.
nbytes(streamlines)
orient_by_streamline(streamlines, standard) Orient a bundle of streamlines to a standard streamline.
qbx_and_merge(streamlines, thresholds[, …]) Run QuickBundlesX and then run again on the centroids of the last layer
select_random_set_of_streamlines(…[, rng]) Select a random set of streamlines
set_number_of_points Change the number of points of streamlines
time() Return the current time in seconds since the Epoch.
values_from_volume(data, streamlines[, affine]) Extract values of a scalar/vector along each streamline from a volume.

## Module: segment.clustering

ABCMeta Metaclass for defining Abstract Base Classes (ABCs).
AveragePointwiseEuclideanMetric Computes the average of pointwise Euclidean distances between two sequential data.
Cluster([id, indices, refdata]) Provides functionalities for interacting with a cluster.
ClusterCentroid(centroid[, id, indices, refdata]) Provides functionalities for interacting with a cluster.
ClusterMap([refdata]) Provides functionalities for interacting with clustering outputs.
ClusterMapCentroid([refdata]) Provides functionalities for interacting with clustering outputs that have centroids.
Clustering

Methods

Identity Provides identity indexing functionality.
Metric Computes a distance between two sequential data.
MinimumAverageDirectFlipMetric Computes the MDF distance (minimum average direct-flip) between two sequential data.
QuickBundles(threshold[, metric, …]) Clusters streamlines using QuickBundles [Garyfallidis12].
QuickBundlesX(thresholds[, metric]) Clusters streamlines using QuickBundlesX.
ResampleFeature Extracts features from a sequential datum.
TreeCluster(threshold, centroid[, indices])
TreeClusterMap(root)
abstractmethod(funcobj) A decorator indicating abstract methods.
nbytes(streamlines)
qbx_and_merge(streamlines, thresholds[, …]) Run QuickBundlesX and then run again on the centroids of the last layer
set_number_of_points Change the number of points of streamlines
time() Return the current time in seconds since the Epoch.

## Module: segment.mask

applymask(vol, mask) Mask vol with mask.
binary_dilation(input[, structure, …]) Multi-dimensional binary dilation with the given structuring element.
bounding_box(vol) Compute the bounding box of nonzero intensity voxels in the volume.
clean_cc_mask(mask) Cleans a segmentation of the corpus callosum so no random pixels are included.
color_fa(fa, evecs) Color fractional anisotropy of diffusion tensor
crop(vol, mins, maxs) Crops the input volume.
fractional_anisotropy(evals[, axis]) Fractional anisotropy (FA) of a diffusion tensor.
generate_binary_structure(rank, connectivity) Generate a binary structure for binary morphological operations.
median_filter(input[, size, footprint, …]) Calculate a multidimensional median filter.
median_otsu(input_volume[, median_radius, …]) Simple brain extraction tool method for images from DWI data.
multi_median(input, median_radius, numpass) Applies median filter multiple times on input data.
otsu(image[, nbins]) Return threshold value based on Otsu’s method.
segment_from_cfa(tensor_fit, roi, threshold) Segment the cfa inside roi using the values from threshold as bounds.
warn Issue a warning, or maybe ignore it or raise an exception.
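A hedged sketch of the typical brain-extraction workflow built from these helpers follows; the `data` array is a synthetic stand-in for a real 4D DWI volume, and the parameter values simply mirror the defaults documented for median_otsu later in this section.

```python
# Minimal sketch of brain extraction with median_otsu on a 4D volume.
# `data` is a synthetic stand-in for a DWI array loaded with nibabel;
# vol_idx selects which 3D volume(s) the mask is computed from.
import numpy as np
from dipy.segment.mask import median_otsu

data = np.random.rand(40, 40, 30, 10)  # placeholder for real DWI data

masked_data, mask = median_otsu(data, median_radius=4, numpass=4,
                                vol_idx=[0], dilate=2)
print(masked_data.shape, mask.shape)   # (40, 40, 30, 10) and (40, 40, 30)
```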

## Module: segment.metric

ArcLengthFeature Extracts features from a sequential datum.
AveragePointwiseEuclideanMetric Computes the average of pointwise Euclidean distances between two sequential data.
CenterOfMassFeature Extracts features from a sequential datum.
CosineMetric Computes the cosine distance between two vectors.
EuclideanMetric alias of dipy.segment.metricspeed.SumPointwiseEuclideanMetric
Feature Extracts features from a sequential datum.
IdentityFeature Extracts features from a sequential datum.
Metric Computes a distance between two sequential data.
MidpointFeature Extracts features from a sequential datum.
MinimumAverageDirectFlipMetric Computes the MDF distance (minimum average direct-flip) between two sequential data.
ResampleFeature Extracts features from a sequential datum.
SumPointwiseEuclideanMetric Computes the sum of pointwise Euclidean distances between two sequential data.
VectorOfEndpointsFeature Extracts features from a sequential datum.
dist Computes a distance between datum1 and datum2.
distance_matrix Computes the distance matrix between two lists of sequential data.
mdf(s1, s2) Computes the MDF (Minimum average Direct-Flip) distance [Garyfallidis12] between two streamlines.
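As a quick illustration of this module, the sketch below computes the MDF distance between two small streamlines; resampling to a common number of points is shown because mdf expects sequences of equal length. The coordinates are illustrative only.

```python
# MDF distance between two 3-point streamlines; streamlines with a
# different number of points are resampled first.
import numpy as np
from dipy.segment.metric import mdf
from dipy.tracking.streamline import set_number_of_points

s1 = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
s2 = np.array([[0., 1., 0.], [1., 1., 0.], [2., 1., 0.]])
print(mdf(s1, s2))                           # 1.0 (parallel, 1 mm apart)

s3 = np.array([[0., 2., 0.], [2., 2., 0.]])  # only 2 points
print(mdf(s1, set_number_of_points(s3, 3)))
```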

## Module: segment.quickbundles

QuickBundles(tracks[, dist_thr, pts])
bundles_distances_mdf Calculate distances between list of tracks A and list of tracks B
downsample(xyz[, n_pols]) downsample for a specific number of points along the curve/track
local_skeleton_clustering Efficient tractography clustering
warn Issue a warning, or maybe ignore it or raise an exception.

## Module: segment.threshold

otsu(image[, nbins]) Return threshold value based on Otsu’s method.
upper_bound_by_percent(data[, percent]) Find the upper bound for visualization of medical images
upper_bound_by_rate(data[, rate]) Adjusts upper intensity boundary using rates
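A small hedged sketch of these helpers on a synthetic bimodal image; the data and the choice of defaults are illustrative only.

```python
# otsu() picks a threshold separating the two intensity modes;
# upper_bound_by_percent() returns an upper intensity bound for display.
import numpy as np
from dipy.segment.threshold import otsu, upper_bound_by_percent

rng = np.random.RandomState(0)
data = np.concatenate([rng.normal(20, 5, 1000),
                       rng.normal(120, 10, 1000)]).reshape(20, 10, 10)

print(otsu(data))                    # threshold between the two modes
print(upper_bound_by_percent(data))  # upper bound with the default percent
```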

## Module: segment.tissue

 ConstantObservationModel Observation model assuming that the intensity of each class is constant. IteratedConditionalModes Methods TissueClassifierHMRF([save_history, verbose]) This class contains the methods for tissue classification using the Markov Random Fields modeling approach add_noise(signal, snr, S0[, noise_type]) Add noise of specified distribution to the signal from a single voxel.
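A hedged sketch of the MRF tissue-classification workflow follows; the classify() call and its three return values (initial segmentation, final segmentation, partial volume estimates) follow DIPY's tissue-classification example and should be checked against the installed version.

```python
# Hedged sketch: classify a (placeholder) T1 volume into 3 tissue classes
# with the HMRF/ICM approach; beta controls the spatial smoothness prior.
import numpy as np
from dipy.segment.tissue import TissueClassifierHMRF

t1 = np.random.rand(20, 20, 20)     # stand-in for a real T1-weighted volume
hmrf = TissueClassifierHMRF()
initial, final, pve = hmrf.classify(t1, nclasses=3, beta=0.1)
print(final.shape, pve.shape)       # label volume and per-class PVE maps
```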

### MDFpy

class dipy.segment.benchmarks.bench_quickbundles.MDFpy

Bases: dipy.segment.metricspeed.Metric

Attributes: feature Feature object used to extract features from sequential data is_order_invariant Is this metric invariant to the sequence’s ordering

Methods

 are_compatible(shape1, shape2) Checks if features can be used by metric.dist based on their shape. dist(features1, features2) Computes a distance between two data points based on their features.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

are_compatible(shape1, shape2)

Checks if features can be used by metric.dist based on their shape.

Basically this method exists so we don’t have to do this check inside the metric.dist function (speedup).

Parameters: shape1 : int, 1-tuple or 2-tuple shape of the first data point’s features shape2 : int, 1-tuple or 2-tuple shape of the second data point’s features are_compatible : bool whether or not shapes are compatible

dist(features1, features2)

Computes a distance between two data points based on their features.

Parameters: features1 : 2D array Features of the first data point. features2 : 2D array Features of the second data point. double Distance between two data points.

### Metric

class dipy.segment.benchmarks.bench_quickbundles.Metric

Bases: object

Computes a distance between two sequential data.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between extracted features, rather than directly between the sequential data.

Parameters: feature : Feature object, optional It is used to extract features before computing the distance.

Notes

When subclassing Metric, one only needs to override the dist and are_compatible methods.

Attributes: feature Feature object used to extract features from sequential data is_order_invariant Is this metric invariant to the sequence’s ordering

Methods

 are_compatible Checks if features can be used by metric.dist based on their shape. dist Computes a distance between two data points based on their features.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

are_compatible()

Checks if features can be used by metric.dist based on their shape.

Basically this method exists so we don’t have to do this check inside the metric.dist function (speedup).

Parameters: shape1 : int, 1-tuple or 2-tuple shape of the first data point’s features shape2 : int, 1-tuple or 2-tuple shape of the second data point’s features are_compatible : bool whether or not shapes are compatible
dist()

Computes a distance between two data points based on their features.

Parameters: features1 : 2D array Features of the first data point. features2 : 2D array Features of the second data point. double Distance between two data points.
feature

Feature object used to extract features from sequential data

is_order_invariant

Is this metric invariant to the sequence’s ordering
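The MDFpy benchmark class above is a pure-Python subclass of this Metric type. A minimal sketch of the same pattern follows; the MeanEuclidean name and its distance definition are illustrative, not part of DIPY.

```python
# Subclassing Metric only requires overriding are_compatible() and dist().
import numpy as np
from dipy.segment.metric import Metric

class MeanEuclidean(Metric):
    """Average pointwise Euclidean distance between two streamlines."""

    def are_compatible(self, shape1, shape2):
        # Features must have identical shapes to be comparable.
        return shape1 == shape2

    def dist(self, features1, features2):
        return float(np.mean(np.linalg.norm(features1 - features2, axis=1)))

s1 = np.array([[0., 0., 0.], [1., 0., 0.]])
s2 = np.array([[0., 1., 0.], [1., 1., 0.]])
print(MeanEuclidean().dist(s1, s2))   # 1.0
```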

### QB_New

dipy.segment.benchmarks.bench_quickbundles.QB_New

### QB_Old

dipy.segment.benchmarks.bench_quickbundles.QB_Old

### assert_array_equal

dipy.segment.benchmarks.bench_quickbundles.assert_array_equal(x, y, err_msg='', verbose=True)

Raises an AssertionError if two array_like objects are not equal.

Given two array_like objects, check that the shape is equal and all elements of these objects are equal. An exception is raised at shape mismatch or conflicting values. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions.

The usual caution for verifying equality with floating point numbers is advised.

Parameters: x : array_like The actual object to check. y : array_like The desired, expected object. err_msg : str, optional The error message to be printed in case of failure. verbose : bool, optional If True, the conflicting values are appended to the error message. AssertionError If actual and desired objects are not equal.

assert_allclose
Compare two array_like objects for equality with desired relative and/or absolute precision.

assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal

Examples

The first assert does not raise an exception:

>>> np.testing.assert_array_equal([1.0,2.33333,np.nan],
...                               [np.exp(0),2.33333, np.nan])


Assert fails with numerical imprecision with floats:

>>> np.testing.assert_array_equal([1.0,np.pi,np.nan],
...                               [1, np.sqrt(np.pi)**2, np.nan])
...
<type 'exceptions.ValueError'>:
AssertionError:
Arrays are not equal

(mismatch 50.0%)
x: array([ 1.        ,  3.14159265,         NaN])
y: array([ 1.        ,  3.14159265,         NaN])


Use assert_allclose or one of the nulp (number of floating point values) functions for these cases instead:

>>> np.testing.assert_allclose([1.0,np.pi,np.nan],
...                            [1, np.sqrt(np.pi)**2, np.nan],
...                            rtol=1e-10, atol=0)


### assert_arrays_equal

dipy.segment.benchmarks.bench_quickbundles.assert_arrays_equal(arrays1, arrays2)

### assert_equal

dipy.segment.benchmarks.bench_quickbundles.assert_equal(actual, desired, err_msg='', verbose=True)

Raises an AssertionError if two objects are not equal.

Given two objects (scalars, lists, tuples, dictionaries or numpy arrays), check that all elements of these objects are equal. An exception is raised at the first conflicting values.

Parameters: actual : array_like The object to check. desired : array_like The expected object. err_msg : str, optional The error message to be printed in case of failure. verbose : bool, optional If True, the conflicting values are appended to the error message. AssertionError If actual and desired are not equal.

Examples

>>> np.testing.assert_equal([4,5], [4,6])
...
<type 'exceptions.AssertionError'>:
Items are not equal:
item=1
ACTUAL: 5
DESIRED: 6


### bench_quickbundles

dipy.segment.benchmarks.bench_quickbundles.bench_quickbundles()

### get_fnames

dipy.segment.benchmarks.bench_quickbundles.get_fnames(name='small_64D')

provides filenames of some test datasets or other useful parametrisations

Parameters: name : str the filename/s of which dataset to return, one of: ‘small_64D’ small region of interest nifti,bvecs,bvals 64 directions ‘small_101D’ small region of interest nifti,bvecs,bvals 101 directions ‘aniso_vox’ volume with anisotropic voxel size as Nifti ‘fornix’ 300 tracks in Trackvis format (from Pittsburgh Brain Competition) ‘gqi_vectors’ the scanner wave vectors needed for a GQI acquisitions of 101 directions tested on Siemens 3T Trio ‘small_25’ small ROI (10x8x2) DTI data (b value 2000, 25 directions) ‘test_piesno’ slice of N=8, K=14 diffusion data ‘reg_c’ small 2D image used for validating registration ‘reg_o’ small 2D image used for validation registration ‘cb_2’ two vectorized cingulum bundles fnames : tuple filenames for dataset

Examples

>>> import numpy as np
>>> from dipy.data import get_fnames
>>> fimg,fbvals,fbvecs=get_fnames('small_101D')
>>> bvals=np.loadtxt(fbvals)
>>> bvecs=np.loadtxt(fbvecs).T
>>> import nibabel as nib
>>> img=nib.load(fimg)
>>> data=img.get_data()
>>> data.shape == (6, 10, 10, 102)
True
>>> bvals.shape == (102,)
True
>>> bvecs.shape == (102, 3)
True


### measure

dipy.segment.benchmarks.bench_quickbundles.measure(code_str, times=1, label=None)

Return elapsed time for executing code in the namespace of the caller.

The supplied code string is compiled with the Python builtin compile. The precision of the timing is 10 milli-seconds. If the code will execute fast on this timescale, it can be executed many times to get reasonable timing accuracy.

Parameters: code_str : str The code to be timed. times : int, optional The number of times the code is executed. Default is 1. The code is only compiled once. label : str, optional A label to identify code_str with. This is passed into compile as the second argument (for run-time error messages). elapsed : float Total elapsed time in seconds for executing code_str times times.

Examples

>>> times = 10
>>> etime = np.testing.measure('for i in range(1000): np.sqrt(i**2)',
...                            times=times)
>>> print("Time for a single execution : ", etime / times, "s")
Time for a single execution :  0.005 s


### BundleMinDistanceAsymmetricMetric

class dipy.segment.bundles.BundleMinDistanceAsymmetricMetric(num_threads=None)

Asymmetric Bundle-based Minimum distance

This is a cost function that can be used by the StreamlineLinearRegistration class.

Methods

 distance(xopt) Distance calculated from this Metric setup(static, moving) Setup static and moving sets of streamlines
__init__(num_threads=None)

An abstract class for the metric used for streamline registration

If the two sets of streamlines match exactly then method distance of this object should be minimum.

Parameters: num_threads : int Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.
distance(xopt)

Distance calculated from this Metric

Parameters: xopt : sequence List of affine parameters as a 1D vector

### BundleMinDistanceMetric

class dipy.segment.bundles.BundleMinDistanceMetric(num_threads=None)

Bundle-based Minimum Distance aka BMD

This is the cost function used by the StreamlineLinearRegistration

References

 [Garyfallidis14] Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014.

Methods

 setup(static, moving) distance(xopt)
__init__(num_threads=None)

An abstract class for the metric used for streamline registration

If the two sets of streamlines match exactly then method distance of this object should be minimum.

Parameters: num_threads : int Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.
distance(xopt)

Distance calculated from this Metric

Parameters: xopt : sequence List of affine parameters as a 1D vector.
setup(static, moving)

Setup static and moving sets of streamlines

Parameters: static : streamlines Fixed or reference set of streamlines. moving : streamlines Moving streamlines. num_threads : int Number of threads. If None (default) then all available threads will be used.

Notes

Call this after the object is initiated and before distance.
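A minimal hedged sketch of that call order follows; the tiny two-streamline bundles and the zero rigid parameter vector are only illustrative.

```python
# setup() must be called before distance(); the 6-element xopt is the
# rigid parametrization (3 translations, 3 rotations) used elsewhere in
# this module, here evaluated at the identity transform.
import numpy as np
from dipy.segment.bundles import BundleMinDistanceMetric

static = [np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])]
moving = [np.array([[0., 1., 0.], [1., 1., 0.], [2., 1., 0.]])]

metric = BundleMinDistanceMetric()
metric.setup(static, moving)
print(metric.distance(np.zeros(6)))   # cost of leaving `moving` untouched
```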

### BundleSumDistanceMatrixMetric

class dipy.segment.bundles.BundleSumDistanceMatrixMetric(num_threads=None)

Bundle-based Sum Distance aka BMD

This is a cost function that can be used by the StreamlineLinearRegistration class.

Notes

The difference with BundleMinDistanceMatrixMetric is that it uses the sum of the distance matrix and not the sum of mins.

Methods

 setup(static, moving) distance(xopt)
__init__(num_threads=None)

An abstract class for the metric used for streamline registration

If the two sets of streamlines match exactly then method distance of this object should be minimum.

Parameters: num_threads : int Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.
distance(xopt)

Distance calculated from this Metric

Parameters: xopt : sequence List of affine parameters as a 1D vector

### RecoBundles

class dipy.segment.bundles.RecoBundles(streamlines, greater_than=50, less_than=1000000, cluster_map=None, clust_thr=15, nb_pts=20, rng=None, verbose=True)

Bases: object

Methods

 evaluate_results(model_bundle, …) Compare the similarity between two given bundles: the model bundle and the extracted bundle. recognize(model_bundle, model_clust_thr[, …]) Recognize the model_bundle in self.streamlines refine(model_bundle, pruned_streamlines, …) Refine and recognize the model_bundle in self.streamlines This method expects streamlines that have already been pruned once as input.
__init__(streamlines, greater_than=50, less_than=1000000, cluster_map=None, clust_thr=15, nb_pts=20, rng=None, verbose=True)

Recognition of bundles

Extract bundles from a participant’s tractogram using model bundles segmented from a different subject or an atlas of bundles. See [Garyfallidis17] for the details.

Parameters: streamlines : Streamlines The tractogram in which you want to recognize bundles. greater_than : int, optional Keep streamlines that have length greater than this value (default 50) less_than : int, optional Keep streamlines that have length less than this value (default 1000000) cluster_map : QB map Provide existing clustering to start RB faster (default None). clust_thr : float Distance threshold in mm for clustering streamlines rng : RandomState If None define RandomState in initialization function. nb_pts : int Number of points per streamline (default 20)

Notes

Make sure that before creating this class that the streamlines and the model bundles are roughly in the same space. Also default thresholds are assumed in RAS 1mm^3 space. You may want to adjust those if your streamlines are not in world coordinates.

References

 [Garyfallidis17] (1, 2) Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
evaluate_results(model_bundle, pruned_streamlines, slr_select)

Compare the similarity between two given bundles: the model bundle and the extracted bundle.

Parameters: model_bundle : Streamlines pruned_streamlines : Streamlines slr_select : tuple Select the number of streamlines from the model and the neighborhood of the model to perform the local SLR. ba_value : float bundle analytics value between model bundle and pruned bundle bmd_value : float bundle minimum distance value between model bundle and pruned bundle
recognize(model_bundle, model_clust_thr, reduction_thr=10, reduction_distance='mdf', slr=True, slr_num_threads=None, slr_metric=None, slr_x0=None, slr_bounds=None, slr_select=(400, 600), slr_method='L-BFGS-B', pruning_thr=5, pruning_distance='mdf')

Recognize the model_bundle in self.streamlines

Parameters: model_bundle : Streamlines model_clust_thr : float reduction_thr : float reduction_distance : string mdf or mam (default mdf) slr : bool Use Streamline-based Linear Registration (SLR) locally (default True) slr_metric : BundleMinDistanceMetric slr_x0 : array (default None) slr_bounds : array (default None) slr_select : tuple Select the number of streamlines from the model and the neighborhood of the model to perform the local SLR. slr_method : string Optimization method (default ‘L-BFGS-B’) pruning_thr : float pruning_distance : string MDF (‘mdf’) and MAM (‘mam’) recognized_transf : Streamlines Recognized bundle in the space of the model tractogram recognized_labels : array Indices of recognized bundle in the original tractogram

References

 [Garyfallidis17] Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
refine(model_bundle, pruned_streamlines, model_clust_thr, reduction_thr=14, reduction_distance='mdf', slr=True, slr_metric=None, slr_x0=None, slr_bounds=None, slr_select=(400, 600), slr_method='L-BFGS-B', pruning_thr=6, pruning_distance='mdf')

Refine and recognize the model_bundle in self.streamlines. This method expects streamlines that have already been pruned once as input. It refines the first output of recognize by applying a second local SLR (optional) and a second pruning. This method is useful when we are dealing with noisy data or when we want to extract small tracks from tractograms.

Parameters: model_bundle : Streamlines pruned_streamlines : Streamlines model_clust_thr : float reduction_thr : float reduction_distance : string mdf or mam (default mdf) slr : bool Use Streamline-based Linear Registration (SLR) locally (default True) slr_metric : BundleMinDistanceMetric slr_x0 : array (default None) slr_bounds : array (default None) slr_select : tuple Select the number of streamlines from the model and the neighborhood of the model to perform the local SLR. slr_method : string Optimization method (default ‘L-BFGS-B’) pruning_thr : float pruning_distance : string MDF (‘mdf’) and MAM (‘mam’) recognized_transf : Streamlines Recognized bundle in the space of the model tractogram recognized_labels : array Indices of recognized bundle in the original tractogram

References

 [Garyfallidis17] Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
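A hedged sketch of the overall workflow follows; the `tractogram` and `model_bundle` variables are placeholders for Streamlines already loaded and roughly co-registered, and the parameter values mirror the defaults above.

```python
# Recognize a model bundle inside a whole-brain tractogram.
from dipy.segment.bundles import RecoBundles

rb = RecoBundles(tractogram, clust_thr=15, verbose=False)   # cluster once
recognized, labels = rb.recognize(model_bundle=model_bundle,
                                  model_clust_thr=5.,
                                  reduction_thr=10,
                                  pruning_thr=5)
# `recognized` lives in the model's space; `labels` indexes the original
# tractogram, so tractogram[labels] gives the bundle in native space.
```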

### StreamlineLinearRegistration

class dipy.segment.bundles.StreamlineLinearRegistration(metric=None, x0='rigid', method='L-BFGS-B', bounds=None, verbose=False, options=None, evolution=False, num_threads=None)

Bases: object

Methods

 optimize(static, moving[, mat]) Find the minimum of the provided metric.
__init__(metric=None, x0='rigid', method='L-BFGS-B', bounds=None, verbose=False, options=None, evolution=False, num_threads=None)

Linear registration of 2 sets of streamlines [Garyfallidis15].

Parameters: metric : StreamlineDistanceMetric, If None and fast is False then the BMD distance is used. If fast is True then a faster implementation of BMD is used. Otherwise, use the given distance metric. x0 : array or int or str Initial parametrization for the optimization. If 1D array with: a) 6 elements then only rigid registration is performed with the 3 first elements for translation and 3 for rotation. b) 7 elements also isotropic scaling is performed (similarity). c) 12 elements then translation, rotation (in degrees), scaling and shearing is performed (affine). Here is an example of x0 with 12 elements: x0=np.array([0, 10, 0, 40, 0, 0, 2., 1.5, 1, 0.1, -0.5, 0]) This has translation (0, 10, 0), rotation (40, 0, 0) in degrees, scaling (2., 1.5, 1) and shearing (0.1, -0.5, 0). If int: 6 x0 = np.array([0, 0, 0, 0, 0, 0]) 7 x0 = np.array([0, 0, 0, 0, 0, 0, 1.]) 12 x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0]) If str: “rigid” x0 = np.array([0, 0, 0, 0, 0, 0]) “similarity” x0 = np.array([0, 0, 0, 0, 0, 0, 1.]) “affine” x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0]) method : str, ‘L_BFGS_B’ or ‘Powell’ optimizers can be used. Default is ‘L_BFGS_B’. bounds : list of tuples or None, If method == ‘L_BFGS_B’ then we can use bounded optimization. For example for the six parameters of rigid rotation we can set the bounds = [(-30, 30), (-30, 30), (-30, 30), (-45, 45), (-45, 45), (-45, 45)] That means that we have set the bounds for the three translations and three rotation axes (in degrees). verbose : bool, If True then information about the optimization is shown. options : None or dict, Extra options to be used with the selected method. evolution : boolean If True save the transformation for each iteration of the optimizer. Default is False. Supported only with Scipy >= 0.11. num_threads : int Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.

References

 [Garyfallidis15] (1, 2) Garyfallidis et al. “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015
 [Garyfallidis14] Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014.
 [Garyfallidis17] Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
optimize(static, moving, mat=None)

Find the minimum of the provided metric.

Parameters: static : streamlines Reference or fixed set of streamlines. moving : streamlines Moving set of streamlines. mat : array Transformation (4, 4) matrix to start the registration. mat is applied to moving. Default value None which means that initial transformation will be generated by shifting the centers of moving and static sets of streamlines to the origin. map : StreamlineRegistrationMap
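A hedged sketch of a rigid registration between two tiny synthetic bundles follows; in practice both sets are first resampled to a common number of points.

```python
# Rigid SLR: estimate the transform that maps `moving` onto `static`.
import numpy as np
from dipy.segment.bundles import StreamlineLinearRegistration

static = [np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]]),
          np.array([[0., 1., 0.], [1., 1., 0.], [2., 1., 0.]])]
moving = [s + np.array([0., 0., 5.]) for s in static]   # shifted 5 mm in z

slr = StreamlineLinearRegistration(x0='rigid')
srm = slr.optimize(static=static, moving=moving)
print(srm.matrix)              # estimated (4, 4) affine
moved = srm.transform(moving)  # moving bundle mapped onto static
```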

### Streamlines

dipy.segment.bundles.Streamlines

alias of nibabel.streamlines.array_sequence.ArraySequence

### chain

class dipy.segment.bundles.chain

Bases: object

chain(*iterables) –> chain object

Return a chain object whose .__next__() method returns elements from the first iterable until it is exhausted, then elements from the next iterable, until all of the iterables are exhausted.
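chain here is Python's itertools.chain re-exported, which is convenient for iterating over several lists of streamlines as one flat sequence:

```python
from itertools import chain

bundle_a = [[0, 0, 0]]
bundle_b = [[1, 1, 1], [2, 2, 2]]
print(list(chain(bundle_a, bundle_b)))   # [[0, 0, 0], [1, 1, 1], [2, 2, 2]]
```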

Methods

 from_iterable chain.from_iterable(iterable) –> chain object
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

register(subclass)

Register a virtual subclass of an ABC. Returns the subclass, to allow usage as a class decorator.

### AveragePointwiseEuclideanMetric

class dipy.segment.clustering.AveragePointwiseEuclideanMetric

Bases: dipy.segment.metricspeed.SumPointwiseEuclideanMetric

Computes the average of pointwise Euclidean distances between two sequential data.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between the features, rather than directly between the sequential data.

Parameters: feature : Feature object, optional It is used to extract features before computing the distance.

Notes

The distance between two 2D sequential data s1 and s2 (with points 0, 1 and 2 of s1 matched to points 0, 1 and 2 of s2 by distances a, b and c) is equal to $$(a+b+c)/3$$ where $$a$$ is the Euclidean distance between s1[0] and s2[0], $$b$$ between s1[1] and s2[1] and $$c$$ between s1[2] and s2[2].

Attributes: feature Feature object used to extract features from sequential data is_order_invariant Is this metric invariant to the sequence’s ordering

Methods

 are_compatible Checks if features can be used by metric.dist based on their shape. dist Computes a distance between two data points based on their features.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

### Cluster

class dipy.segment.clustering.Cluster(id=0, indices=None, refdata=<dipy.segment.clustering.Identity object>)

Bases: object

Provides functionalities for interacting with a cluster.

Useful container to retrieve index of elements grouped together. If a reference to the data is provided to cluster_map, elements will be returned instead of their index when possible.

Parameters: cluster_map : ClusterMap object Reference to the set of clusters this cluster is being part of. id : int Id of this cluster in its associated cluster_map object. refdata : list (optional) Actual elements that clustered indices refer to.

Notes

A cluster does not contain actual data but instead knows how to retrieve them using its ClusterMap object.

Methods

 assign(*indices) Assigns indices to this cluster.
__init__(id=0, indices=None, refdata=<dipy.segment.clustering.Identity object>)

Initialize self. See help(type(self)) for accurate signature.

assign(*indices)

Assigns indices to this cluster.

Parameters: *indices : list of indices Indices to add to this cluster.

### ClusterCentroid

class dipy.segment.clustering.ClusterCentroid(centroid, id=0, indices=None, refdata=<dipy.segment.clustering.Identity object>)

Provides functionalities for interacting with a cluster.

Useful container to retrieve the indices of elements grouped together and the cluster’s centroid. If a reference to the data is provided to cluster_map, elements will be returned instead of their index when possible.

Parameters: cluster_map : ClusterMapCentroid object Reference to the set of clusters this cluster is being part of. id : int Id of this cluster in its associated cluster_map object. refdata : list (optional) Actual elements that clustered indices refer to.

Notes

A cluster does not contain actual data but instead knows how to retrieve them using its ClusterMapCentroid object.

Methods

 assign(id_datum, features) Assigns a data point to this cluster. update() Update centroid of this cluster.
__init__(centroid, id=0, indices=None, refdata=<dipy.segment.clustering.Identity object>)

Initialize self. See help(type(self)) for accurate signature.

assign(id_datum, features)

Assigns a data point to this cluster.

Parameters: id_datum : int Index of the data point to add to this cluster. features : 2D array Data point’s features to modify this cluster’s centroid.
update()

Update centroid of this cluster.

Returns: converged : bool Tells if the centroid has moved.

### ClusterMap

class dipy.segment.clustering.ClusterMap(refdata=<dipy.segment.clustering.Identity object>)

Bases: object

Provides functionalities for interacting with clustering outputs.

Useful container to create, remove, retrieve and filter clusters. If refdata is given, elements will be returned instead of their index when using Cluster objects.

Parameters: refdata : list Actual elements that clustered indices refer to. clusters refdata

Methods

 add_cluster(*clusters) Adds one or multiple clusters to this cluster map. clear() Remove all clusters from this cluster map. clusters_sizes() Gets the size of every cluster contained in this cluster map. get_large_clusters(min_size) Gets clusters which contain at least min_size elements. get_small_clusters(max_size) Gets clusters which contain at most max_size elements. remove_cluster(*clusters) Remove one or multiple clusters from this cluster map. size() Gets number of clusters contained in this cluster map.
__init__(refdata=<dipy.segment.clustering.Identity object>)

Initialize self. See help(type(self)) for accurate signature.

add_cluster(*clusters)

Adds one or multiple clusters to this cluster map.

Parameters: *clusters : Cluster object, … Cluster(s) to be added in this cluster map.
clear()

Remove all clusters from this cluster map.

clusters
clusters_sizes()

Gets the size of every cluster contained in this cluster map.

Returns: list of int Sizes of every cluster in this cluster map.
get_large_clusters(min_size)

Gets clusters which contain at least min_size elements.

Parameters: min_size : int Minimum number of elements a cluster needs to have to be selected. list of Cluster objects Clusters having at least min_size elements.
get_small_clusters(max_size)

Gets clusters which contain at most max_size elements.

Parameters: max_size : int Maximum number of elements a cluster can have to be selected. list of Cluster objects Clusters having at most max_size elements.
refdata
remove_cluster(*clusters)

Remove one or multiple clusters from this cluster map.

Parameters: *clusters : Cluster object, … Cluster(s) to be removed from this cluster map.
size()

Gets number of clusters contained in this cluster map.
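A small hedged sketch of building a ClusterMap by hand and filtering it by size; the index values are arbitrary.

```python
# Build a cluster map with two clusters and query it.
from dipy.segment.clustering import Cluster, ClusterMap

cm = ClusterMap()
cm.add_cluster(Cluster(indices=[0, 1, 2]), Cluster(indices=[3]))

print(cm.size())                      # 2
print(cm.clusters_sizes())            # [3, 1]
print(len(cm.get_large_clusters(2)))  # 1 cluster has at least 2 elements
```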

### ClusterMapCentroid

class dipy.segment.clustering.ClusterMapCentroid(refdata=<dipy.segment.clustering.Identity object>)

Provides functionalities for interacting with clustering outputs that have centroids.

Allows easy retrieval of the centroid of every cluster. Also, it is a useful container to create, remove, retrieve and filter clusters. If refdata is given, elements will be returned instead of their index when using ClusterCentroid objects.

Parameters: refdata : list Actual elements that clustered indices refer to. centroids clusters refdata

Methods

 add_cluster(*clusters) Adds one or multiple clusters to this cluster map. clear() Remove all clusters from this cluster map. clusters_sizes() Gets the size of every cluster contained in this cluster map. get_large_clusters(min_size) Gets clusters which contain at least min_size elements. get_small_clusters(max_size) Gets clusters which contain at most max_size elements. remove_cluster(*clusters) Remove one or multiple clusters from this cluster map. size() Gets number of clusters contained in this cluster map.
__init__(refdata=<dipy.segment.clustering.Identity object>)

Initialize self. See help(type(self)) for accurate signature.

centroids

### Clustering

class dipy.segment.clustering.Clustering

Bases: object

Methods

 cluster(data[, ordering]) Clusters data.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

cluster(data, ordering=None)

Clusters data.

Subclasses will perform their clustering algorithm here.

Parameters: data : list of N-dimensional arrays Each array represents a data point. ordering : iterable of indices, optional Specifies the order in which data points will be clustered. ClusterMap object Result of the clustering.

### Identity

class dipy.segment.clustering.Identity

Bases: object

Provides identity indexing functionality.

This can replace any class supporting indexing used for referencing (e.g. list, tuple). Indexing an instance of this class will return the index provided instead of the element. It does not support slicing.

__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

### Metric

class dipy.segment.clustering.Metric

Bases: object

Computes a distance between two sequential data.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between extracted features, rather than directly between the sequential data.

Parameters: feature : Feature object, optional It is used to extract features before computing the distance.

Notes

When subclassing Metric, one only needs to override the dist and are_compatible methods.

Attributes: feature Feature object used to extract features from sequential data is_order_invariant Is this metric invariant to the sequence’s ordering

Methods

 are_compatible Checks if features can be used by metric.dist based on their shape. dist Computes a distance between two data points based on their features.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

are_compatible()

Checks if features can be used by metric.dist based on their shape.

Basically this method exists so we don’t have to do this check inside the metric.dist function (speedup).

Parameters: shape1 : int, 1-tuple or 2-tuple shape of the first data point’s features shape2 : int, 1-tuple or 2-tuple shape of the second data point’s features are_compatible : bool whether or not shapes are compatible

dist()

Computes a distance between two data points based on their features.

Parameters: features1 : 2D array Features of the first data point. features2 : 2D array Features of the second data point. double Distance between two data points.

feature

Feature object used to extract features from sequential data

is_order_invariant

Is this metric invariant to the sequence’s ordering

### MinimumAverageDirectFlipMetric

class dipy.segment.clustering.MinimumAverageDirectFlipMetric

Bases: dipy.segment.metricspeed.AveragePointwiseEuclideanMetric

Computes the MDF distance (minimum average direct-flip) between two sequential data.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).

Notes

The distance between two 2D sequential data s1 and s2 is equal to $$\min((a+b+c)/3, (a'+b'+c')/3)$$ where $$a$$ is the Euclidean distance between s1[0] and s2[0], $$b$$ between s1[1] and s2[1], $$c$$ between s1[2] and s2[2], $$a'$$ between s1[0] and s2[2], $$b'$$ between s1[1] and s2[1] and $$c'$$ between s1[2] and s2[0].

Attributes: feature Feature object used to extract features from sequential data is_order_invariant Is this metric invariant to the sequence’s ordering

Methods

 are_compatible Checks if features can be used by metric.dist based on their shape. dist Computes a distance between two data points based on their features.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

is_order_invariant

Is this metric invariant to the sequence’s ordering

### QuickBundles

class dipy.segment.clustering.QuickBundles(threshold, metric='MDF_12points', max_nb_clusters=2147483647)

Clusters streamlines using QuickBundles [Garyfallidis12].

Given a list of streamlines, the QuickBundles algorithm sequentially assigns each streamline to its closest bundle in $$\mathcal{O}(Nk)$$ where $$N$$ is the number of streamlines and $$k$$ is the final number of bundles. If for a given streamline its closest bundle is farther than threshold, a new bundle is created and the streamline is assigned to it except if the number of bundles has already exceeded max_nb_clusters.

Parameters: threshold : float The maximum distance from a bundle for a streamline to be still considered as part of it. metric : str or Metric object (optional) The distance metric to use when comparing two streamlines. By default, the Minimum average Direct-Flip (MDF) distance [Garyfallidis12] is used and streamlines are automatically resampled so they have 12 points. max_nb_clusters : int Limits the creation of bundles.

References

 [Garyfallidis12] (1, 2, 3, 4) Garyfallidis E. et al., QuickBundles a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.

Examples

>>> from dipy.segment.clustering import QuickBundles
>>> from dipy.data import get_fnames
>>> from nibabel import trackvis as tv
>>> streams, hdr = tv.read(get_fnames('fornix'))
>>> streamlines = [i[0] for i in streams]
>>> # Segment fornix with a treshold of 10mm and streamlines resampled
>>> # to 12 points.
>>> qb = QuickBundles(threshold=10.)
>>> clusters = qb.cluster(streamlines)
>>> len(clusters)
4
>>> list(map(len, clusters))
[61, 191, 47, 1]
>>> # Resampling streamlines differently is done explicitly as follows.
>>> # Note this has an impact on the speed and the accuracy (tradeoff).
>>> from dipy.segment.metric import ResampleFeature
>>> from dipy.segment.metric import AveragePointwiseEuclideanMetric
>>> feature = ResampleFeature(nb_points=2)
>>> metric = AveragePointwiseEuclideanMetric(feature)
>>> qb = QuickBundles(threshold=10., metric=metric)
>>> clusters = qb.cluster(streamlines)
>>> len(clusters)
4
>>> list(map(len, clusters))
[58, 142, 72, 28]


Methods

 cluster(streamlines[, ordering]) Clusters streamlines into bundles.
__init__(threshold, metric='MDF_12points', max_nb_clusters=2147483647)

Initialize self. See help(type(self)) for accurate signature.

cluster(streamlines, ordering=None)

Clusters streamlines into bundles.

Performs quickbundles algorithm using predefined metric and threshold.

Parameters: streamlines : list of 2D arrays Each 2D array represents a sequence of 3D points (points, 3). ordering : iterable of indices Specifies the order in which data points will be clustered. ClusterMapCentroid object Result of the clustering.

### QuickBundlesX

class dipy.segment.clustering.QuickBundlesX(thresholds, metric='MDF_12points')

Clusters streamlines using QuickBundlesX.

Parameters: thresholds : list of float Thresholds to use for each clustering layer. A threshold represents the maximum distance from a cluster for a streamline to be still considered as part of it. metric : str or Metric object (optional) The distance metric to use when comparing two streamlines. By default, the Minimum average Direct-Flip (MDF) distance [Garyfallidis12] is used and streamlines are automatically resampled so they have 12 points.

References

 [Garyfallidis12] (1, 2) Garyfallidis E. et al., QuickBundles a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.
 [Garyfallidis16] Garyfallidis E. et al. QuickBundlesX: Sequential clustering of millions of streamlines in multiple levels of detail at record execution time. Proceedings of the, International Society of Magnetic Resonance in Medicine (ISMRM). Singapore, 4187, 2016.

Methods

 cluster(streamlines[, ordering]) Clusters streamlines into bundles.
__init__(thresholds, metric='MDF_12points')

Initialize self. See help(type(self)) for accurate signature.

cluster(streamlines, ordering=None)

Clusters streamlines into bundles.

Performs QuickbundleX using a predefined metric and thresholds.

Parameters: streamlines : list of 2D arrays Each 2D array represents a sequence of 3D points (points, 3). ordering : iterable of indices Specifies the order in which data points will be clustered. TreeClusterMap object Result of the clustering.
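A hedged sketch of a two-level QuickBundlesX run on tiny synthetic streamlines follows; real tractograms would be loaded from disk instead.

```python
# Multi-level clustering: coarse threshold first, then a finer one.
import numpy as np
from dipy.segment.clustering import QuickBundlesX

streamlines = [np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]]),
               np.array([[0., 0.5, 0.], [1., 0.5, 0.], [2., 0.5, 0.]]),
               np.array([[0., 30., 0.], [1., 30., 0.], [2., 30., 0.]])]

thresholds = [20., 5.]                       # coarse level, then fine level
qbx = QuickBundlesX(thresholds)
tree = qbx.cluster(streamlines)              # TreeClusterMap
finest = tree.get_clusters(len(thresholds))  # clusters at the 5 mm level
print(len(finest))                           # 2
```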

### ResampleFeature

class dipy.segment.clustering.ResampleFeature

Bases: dipy.segment.featurespeed.CythonFeature

Extracts features from a sequential datum.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).

The features being extracted are the points of the sequence once resampled. This is useful for metrics requiring a constant number of points for all streamlines.

Attributes: is_order_invariant Is this feature invariant to the sequence’s ordering

Methods

 extract Extracts features from a sequential datum. infer_shape Infers the shape of features extracted from a sequential datum.
__init__($self, /, *args, **kwargs) Initialize self. See help(type(self)) for accurate signature. ### TreeCluster class dipy.segment.clustering.TreeCluster(threshold, centroid, indices=None) Attributes: is_leaf Methods  assign(id_datum, features) Assigns a data point to this cluster. update() Update centroid of this cluster.  add __init__(threshold, centroid, indices=None) Initialize self. See help(type(self)) for accurate signature. add(child) is_leaf ### TreeClusterMap class dipy.segment.clustering.TreeClusterMap(root) Attributes: clusters refdata Methods  add_cluster(*clusters) Adds one or multiple clusters to this cluster map. clear() Remove all clusters from this cluster map. clusters_sizes() Gets the size of every cluster contained in this cluster map. get_large_clusters(min_size) Gets clusters which contains at least min_size elements. get_small_clusters(max_size) Gets clusters which contains at most max_size elements. remove_cluster(*clusters) Remove one or multiple clusters from this cluster map. size() Gets number of clusters contained in this cluster map.  get_clusters iter_preorder traverse_postorder __init__(root) Initialize self. See help(type(self)) for accurate signature. get_clusters(wanted_level) iter_preorder(node) refdata traverse_postorder(node, visit) ### abstractmethod dipy.segment.clustering.abstractmethod(funcobj) A decorator indicating abstract methods. Requires that the metaclass is ABCMeta or derived from it. A class that has a metaclass derived from ABCMeta cannot be instantiated unless all of its abstract methods are overridden. The abstract methods can be called using any of the normal ‘super’ call mechanisms. Usage: class C(metaclass=ABCMeta): @abstractmethod def my_abstract_method(self, …): ### nbytes dipy.segment.clustering.nbytes(streamlines) ### qbx_and_merge dipy.segment.clustering.qbx_and_merge(streamlines, thresholds, nb_pts=20, select_randomly=None, rng=None, verbose=True) Run QuickBundlesX and then run again on the centroids of the last layer Running again QuickBundles at a layer has the effect of merging some of the clusters that maybe originally devided because of branching. This function help obtain a result at a QuickBundles quality but with QuickBundlesX speed. The merging phase has low cost because it is applied only on the centroids rather than the entire dataset. Parameters: streamlines : Streamlines thresholds : sequence List of distance thresholds for QuickBundlesX. nb_pts : int Number of points for discretizing each streamline select_randomly : int Randomly select a specific number of streamlines. If None all the streamlines are used. rng : RandomState If None then RandomState is initialized internally. verbose : bool If True print information in stdout. clusters : obj Contains the clusters of the last layer of QuickBundlesX after merging. References  [Garyfallidis12] Garyfallidis E. et al., QuickBundles a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.  [Garyfallidis16] Garyfallidis E. et al. QuickBundlesX: Sequential clustering of millions of streamlines in multiple levels of detail at record execution time. Proceedings of the, International Society of Magnetic Resonance in Medicine (ISMRM). Singapore, 4187, 2016. ### set_number_of_points dipy.segment.clustering.set_number_of_points() Change the number of points of streamlines (either by downsampling or upsampling) Change the number of points of streamlines in order to obtain nb_points-1 segments of equal length. 
Points of streamlines will be modified along the curve. Parameters: streamlines : ndarray or a list or dipy.tracking.Streamlines If ndarray, must have shape (N,3) where N is the number of points of the streamline. If list, each item must be ndarray shape (Ni,3) where Ni is the number of points of streamline i. If dipy.tracking.Streamlines, its common_shape must be 3. nb_points : int integer representing number of points wanted along the curve. new_streamlines : ndarray or a list or dipy.tracking.Streamlines Results of the downsampling or upsampling process. Examples >>> from dipy.tracking.streamline import set_number_of_points >>> import numpy as np  One streamline, a semi-circle: >>> theta = np.pi*np.linspace(0, 1, 100) >>> x = np.cos(theta) >>> y = np.sin(theta) >>> z = 0 * x >>> streamline = np.vstack((x, y, z)).T >>> modified_streamline = set_number_of_points(streamline, 3) >>> len(modified_streamline) 3  Multiple streamlines: >>> streamlines = [streamline, streamline[::2]] >>> new_streamlines = set_number_of_points(streamlines, 10) >>> [len(s) for s in streamlines] [100, 50] >>> [len(s) for s in new_streamlines] [10, 10]  ### time dipy.segment.clustering.time() → floating point number Return the current time in seconds since the Epoch. Fractions of a second may be present if the system clock provides them. ### applymask dipy.segment.mask.applymask(vol, mask) Mask vol with mask. Parameters: vol : ndarray Array with $$V$$ dimensions mask : ndarray Binary mask. Has $$M$$ dimensions where $$M <= V$$. When $$M < V$$, we append $$V - M$$ dimensions with axis length 1 to mask so that mask will broadcast against vol. In the typical case vol can be 4D, mask can be 3D, and we append a 1 to the mask shape which (via numpy broadcasting) has the effect of appling the 3D mask to each 3D slice in vol (vol[..., 0] to vol[..., -1). masked_vol : ndarray vol multiplied by mask where mask may have been extended to match extra dimensions in vol ### binary_dilation dipy.segment.mask.binary_dilation(input, structure=None, iterations=1, mask=None, output=None, border_value=0, origin=0, brute_force=False) Multi-dimensional binary dilation with the given structuring element. Parameters: input : array_like Binary array_like to be dilated. Non-zero (True) elements form the subset to be dilated. structure : array_like, optional Structuring element used for the dilation. Non-zero elements are considered True. If no structuring element is provided an element is generated with a square connectivity equal to one. iterations : {int, float}, optional The dilation is repeated iterations times (one, by default). If iterations is less than 1, the dilation is repeated until the result does not change anymore. mask : array_like, optional If a mask is given, only those elements with a True value at the corresponding mask element are modified at each iteration. output : ndarray, optional Array of the same shape as input, into which the output is placed. By default, a new array is created. border_value : int (cast to 0 or 1), optional Value at the border in the output array. origin : int or tuple of ints, optional Placement of the filter, by default 0. brute_force : boolean, optional Memory condition: if False, only the pixels whose value was changed in the last iteration are tracked as candidates to be updated (dilated) in the current iteration; if True all pixels are considered as candidates for dilation, regardless of what happened in the previous iteration. False by default. 
binary_dilation : ndarray of bools Dilation of the input by the structuring element. See also grey_dilation, binary_erosion, binary_closing, binary_opening, generate_binary_structure Notes Dilation [1] is a mathematical morphology operation [2] that uses a structuring element for expanding the shapes in an image. The binary dilation of an image by a structuring element is the locus of the points covered by the structuring element, when its center lies within the non-zero points of the image. References Examples >>> from scipy import ndimage >>> a = np.zeros((5, 5)) >>> a[2, 2] = 1 >>> a array([[ 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0.]]) >>> ndimage.binary_dilation(a) array([[False, False, False, False, False], [False, False, True, False, False], [False, True, True, True, False], [False, False, True, False, False], [False, False, False, False, False]], dtype=bool) >>> ndimage.binary_dilation(a).astype(a.dtype) array([[ 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0.], [ 0., 1., 1., 1., 0.], [ 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 0.]]) >>> # 3x3 structuring element with connectivity 1, used by default >>> struct1 = ndimage.generate_binary_structure(2, 1) >>> struct1 array([[False, True, False], [ True, True, True], [False, True, False]], dtype=bool) >>> # 3x3 structuring element with connectivity 2 >>> struct2 = ndimage.generate_binary_structure(2, 2) >>> struct2 array([[ True, True, True], [ True, True, True], [ True, True, True]], dtype=bool) >>> ndimage.binary_dilation(a, structure=struct1).astype(a.dtype) array([[ 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0.], [ 0., 1., 1., 1., 0.], [ 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 0.]]) >>> ndimage.binary_dilation(a, structure=struct2).astype(a.dtype) array([[ 0., 0., 0., 0., 0.], [ 0., 1., 1., 1., 0.], [ 0., 1., 1., 1., 0.], [ 0., 1., 1., 1., 0.], [ 0., 0., 0., 0., 0.]]) >>> ndimage.binary_dilation(a, structure=struct1,\ ... iterations=2).astype(a.dtype) array([[ 0., 0., 1., 0., 0.], [ 0., 1., 1., 1., 0.], [ 1., 1., 1., 1., 1.], [ 0., 1., 1., 1., 0.], [ 0., 0., 1., 0., 0.]])  ### bounding_box dipy.segment.mask.bounding_box(vol) Compute the bounding box of nonzero intensity voxels in the volume. Parameters: vol : ndarray Volume to compute bounding box on. npmins : list Array containg minimum index of each dimension npmaxs : list Array containg maximum index of each dimension ### clean_cc_mask dipy.segment.mask.clean_cc_mask(mask) Cleans a segmentation of the corpus callosum so no random pixels are included. Parameters: mask : ndarray Binary mask of the coarse segmentation. new_cc_mask : ndarray Binary mask of the cleaned segmentation. ### color_fa dipy.segment.mask.color_fa(fa, evecs) Color fractional anisotropy of diffusion tensor Parameters: fa : array-like Array of the fractional anisotropy (can be 1D, 2D or 3D) evecs : array-like eigen vectors from the tensor model rgb : Array with 3 channels for each color as the last dimension. Colormap of the FA with red for the x value, y for the green value and z for the blue value. ec{e})) imes fa ### crop dipy.segment.mask.crop(vol, mins, maxs) Crops the input volume. Parameters: vol : ndarray Volume to crop. mins : array Array containg minimum index of each dimension. maxs : array Array containg maximum index of each dimension. vol : ndarray The cropped volume. ### fractional_anisotropy dipy.segment.mask.fractional_anisotropy(evals, axis=-1) Fractional anisotropy (FA) of a diffusion tensor. 
Parameters: evals : array-like Eigenvalues of a diffusion tensor. axis : int Axis of evals which contains 3 eigenvalues. fa : array Calculated FA. Range is 0 <= FA <= 1. Notes FA is calculated using the following equation: $FA = \sqrt{\frac{1}{2}\frac{(\lambda_1-\lambda_2)^2+(\lambda_1- \lambda_3)^2+(\lambda_2-\lambda_3)^2}{\lambda_1^2+ \lambda_2^2+\lambda_3^2}}$ ### generate_binary_structure dipy.segment.mask.generate_binary_structure(rank, connectivity) Generate a binary structure for binary morphological operations. Parameters: rank : int Number of dimensions of the array to which the structuring element will be applied, as returned by np.ndim. connectivity : int connectivity determines which elements of the output array belong to the structure, i.e. are considered as neighbors of the central element. Elements up to a squared distance of connectivity from the center are considered neighbors. connectivity may range from 1 (no diagonal elements are neighbors) to rank (all elements are neighbors). output : ndarray of bools Structuring element which may be used for binary morphological operations, with rank dimensions and all dimensions equal to 3. See also iterate_structure, binary_dilation, binary_erosion Notes generate_binary_structure can only create structuring elements with dimensions equal to 3, i.e. minimal dimensions. For larger structuring elements, that are useful e.g. for eroding large objects, one may either use iterate_structure, or create directly custom arrays with numpy functions such as numpy.ones. Examples >>> from scipy import ndimage >>> struct = ndimage.generate_binary_structure(2, 1) >>> struct array([[False, True, False], [ True, True, True], [False, True, False]], dtype=bool) >>> a = np.zeros((5,5)) >>> a[2, 2] = 1 >>> a array([[ 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0.]]) >>> b = ndimage.binary_dilation(a, structure=struct).astype(a.dtype) >>> b array([[ 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0.], [ 0., 1., 1., 1., 0.], [ 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 0.]]) >>> ndimage.binary_dilation(b, structure=struct).astype(a.dtype) array([[ 0., 0., 1., 0., 0.], [ 0., 1., 1., 1., 0.], [ 1., 1., 1., 1., 1.], [ 0., 1., 1., 1., 0.], [ 0., 0., 1., 0., 0.]]) >>> struct = ndimage.generate_binary_structure(2, 2) >>> struct array([[ True, True, True], [ True, True, True], [ True, True, True]], dtype=bool) >>> struct = ndimage.generate_binary_structure(3, 1) >>> struct # no diagonal elements array([[[False, False, False], [False, True, False], [False, False, False]], [[False, True, False], [ True, True, True], [False, True, False]], [[False, False, False], [False, True, False], [False, False, False]]], dtype=bool)  ### median_filter dipy.segment.mask.median_filter(input, size=None, footprint=None, output=None, mode='reflect', cval=0.0, origin=0) Calculate a multidimensional median filter. Parameters: input : array_like The input array. size : scalar or tuple, optional See footprint, below. Ignored if footprint is given. footprint : array, optional Either size or footprint must be defined. size gives the shape that is taken from the input array, at every element position, to define the input to the filter function. footprint is a boolean array that specifies (implicitly) a shape, but also which of the elements within this shape will get passed to the filter function. Thus size=(n,m) is equivalent to footprint=np.ones((n,m)). 
We adjust size to the number of dimensions of the input array, so that, if the input array is shape (10,10,10), and size is 2, then the actual size used is (2,2,2). When footprint is given, size is ignored. output : array or dtype, optional The array in which to place the output, or the dtype of the returned array. By default an array of the same dtype as input will be created. mode : str or sequence, optional The mode parameter determines how the input array is extended when the filter overlaps a border. By passing a sequence of modes with length equal to the number of dimensions of the input array, different modes can be specified along each axis. Default value is ‘reflect’. The valid values and their behavior are as follows:

‘reflect’ (d c b a | a b c d | d c b a) The input is extended by reflecting about the edge of the last pixel.
‘constant’ (k k k k | a b c d | k k k k) The input is extended by filling all values beyond the edge with the same constant value, defined by the cval parameter.
‘nearest’ (a a a a | a b c d | d d d d) The input is extended by replicating the last pixel.
‘mirror’ (d c b | a b c d | c b a) The input is extended by reflecting about the center of the last pixel.
‘wrap’ (a b c d | a b c d | a b c d) The input is extended by wrapping around to the opposite edge.

cval : scalar, optional Value to fill past edges of input if mode is ‘constant’. Default is 0.0. origin : int or sequence, optional Controls the placement of the filter on the input array’s pixels. A value of 0 (the default) centers the filter over the pixel, with positive values shifting the filter to the left, and negative ones to the right. By passing a sequence of origins with length equal to the number of dimensions of the input array, different shifts can be specified along each axis. median_filter : ndarray Filtered array. Has the same shape as input.

Examples

>>> from scipy import ndimage, misc
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> plt.gray()  # show the filtered result in grayscale
>>> ax1 = fig.add_subplot(121)  # left side
>>> ax2 = fig.add_subplot(122)  # right side
>>> ascent = misc.ascent()
>>> result = ndimage.median_filter(ascent, size=20)
>>> ax1.imshow(ascent)
>>> ax2.imshow(result)
>>> plt.show()

### median_otsu

dipy.segment.mask.median_otsu(input_volume, median_radius=4, numpass=4, autocrop=False, vol_idx=None, dilate=None)

Simple brain extraction tool method for images from DWI data. It uses a median filter smoothing of the input_volume's vol_idx and an automatic histogram Otsu thresholding technique, hence the name median_otsu.

This function is inspired by Mrtrix’s bet, which has default values median_radius=3, numpass=2. However, from tests on multiple 1.5T and 3T data from GE, Philips, Siemens, the most robust choice is median_radius=4, numpass=4.

Parameters: input_volume : ndarray ndarray of the brain volume median_radius : int Radius (in voxels) of the applied median filter (default: 4). numpass: int Number of passes of the median filter (default: 4). autocrop: bool, optional if True, the masked input_volume will also be cropped using the bounding box defined by the masked data. Should be on if DWI is upsampled to 1x1x1 resolution. (default: False). vol_idx : None or array, optional 1D array representing indices of axis=3 of a 4D input_volume. None (the default) corresponds to (0,) (assumes first volume in 4D array).
dilate : None or int, optional number of iterations for binary dilation maskedvolume : ndarray Masked input_volume mask : 3D ndarray The binary brain mask

Notes

Copyright (C) 2011, the scikit-image team. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of skimage nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

### multi_median

dipy.segment.mask.multi_median(input, median_radius, numpass)

Applies the median filter multiple times on input data.

Parameters: input : ndarray The input volume to apply filter on. median_radius : int Radius (in voxels) of the applied median filter numpass: int Number of passes of the median filter input : ndarray Filtered input volume.

### otsu

dipy.segment.mask.otsu(image, nbins=256)

Return threshold value based on Otsu’s method.

Parameters: image : (N, M) ndarray Grayscale input image. nbins : int, optional Number of bins used to calculate histogram. This value is ignored for integer arrays. threshold : float Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. ValueError If image only contains a single grayscale value.

Notes

The input image must be grayscale.

References

Examples

>>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_otsu(image)
>>> binary = image <= thresh

### segment_from_cfa

dipy.segment.mask.segment_from_cfa(tensor_fit, roi, threshold, return_cfa=False)

Segment the cfa inside roi using the values from threshold as bounds.

Parameters: tensor_fit : TensorFit object TensorFit object roi : ndarray A binary mask, which contains the bounding box for the segmentation. threshold : array-like An iterable that defines the min and max values to use for the thresholding. The values are specified as (R_min, R_max, G_min, G_max, B_min, B_max) return_cfa : bool, optional If True, the cfa is also returned. mask : ndarray Binary mask of the segmentation. cfa : ndarray, optional Array with shape = (…, 3), where … is the shape of tensor_fit. The color fractional anisotropy, ordered as a nd array with the last dimension of size 3 for the R, G and B channels.

### warn

dipy.segment.mask.warn()

Issue a warning, or maybe ignore it or raise an exception.
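
Taken together, these routines form a small brain-masking workflow: median_otsu builds the mask, and bounding_box plus crop trim it to the region of interest. A minimal sketch, assuming a 4D DWI array is already in memory (random data stands in for a real acquisition here):

import numpy as np
from dipy.segment.mask import median_otsu, bounding_box, crop

# Stand-in for a real 4D (x, y, z, volume) DWI acquisition.
dwi = np.random.rand(50, 50, 50, 10)

# Brain extraction on the first volume, with one pass of binary dilation.
masked_volume, mask = median_otsu(dwi, median_radius=4, numpass=4,
                                  vol_idx=[0], dilate=1)

# Trim the mask to the bounding box of its nonzero voxels.
mins, maxs = bounding_box(mask)
cropped_mask = crop(mask, mins, maxs)
print(mask.shape, cropped_mask.shape)
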
### ArcLengthFeature

class dipy.segment.metric.ArcLengthFeature

Bases: dipy.segment.featurespeed.CythonFeature

Extracts features from a sequential datum.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). The feature being extracted consists of one scalar representing the arc length of the sequence (i.e. the sum of the lengths of all segments).

Attributes: is_order_invariant Is this feature invariant to the sequence’s ordering

Methods

 extract Extracts features from a sequential datum. infer_shape Infers the shape of features extracted from a sequential datum.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.
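
As a quick illustration, the arc length of a three-point straight track of total length 2 can be extracted directly. A small sketch; float32 input is assumed because the Cython features operate on single-precision arrays:

import numpy as np
from dipy.segment.metric import ArcLengthFeature

streamline = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=np.float32)
feature = ArcLengthFeature()
print(feature.infer_shape(streamline))  # shape of the feature to be extracted
print(feature.extract(streamline))      # the arc length: 1 + 1 = 2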

### AveragePointwiseEuclideanMetric

class dipy.segment.metric.AveragePointwiseEuclideanMetric

Bases: dipy.segment.metricspeed.SumPointwiseEuclideanMetric

Computes the average of pointwise Euclidean distances between two sequential data.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between the features, rather than directly between the sequential data.

Parameters: feature : Feature object, optional It is used to extract features before computing the distance.

Notes

The distance between two 2D sequential data:

s1       s2

0*   a    *0
  \       |
   \      |
   1*     |
    |  b  *1
    |      \
    2*      \
        c    *2


is equal to $$(a+b+c)/3$$ where $$a$$ is the Euclidean distance between s1[0] and s2[0], $$b$$ between s1[1] and s2[1] and $$c$$ between s1[2] and s2[2].

Attributes: feature Feature object used to extract features from sequential data is_order_invariant Is this metric invariant to the sequence’s ordering

Methods

 are_compatible Checks if features can be used by metric.dist based on their shape. dist Computes a distance between two data points based on their features.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

### CenterOfMassFeature

class dipy.segment.metric.CenterOfMassFeature

Bases: dipy.segment.featurespeed.CythonFeature

Extracts features from a sequential datum.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). The feature being extracted consists of one N-dimensional point representing the mean of the points, i.e. the center of mass.

Attributes: is_order_invariant Is this feature invariant to the sequence’s ordering

Methods

 extract Extracts features from a sequential datum. infer_shape Infers the shape of features extracted from a sequential datum.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.
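
A small sketch tying the two classes above together: extracting the center of mass of a streamline and evaluating the (a+b+c)/3 distance with the module-level dist helper documented below (float32 arrays are assumed by the Cython code):

import numpy as np
from dipy.segment.metric import (AveragePointwiseEuclideanMetric,
                                 CenterOfMassFeature, dist)

s1 = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=np.float32)
s2 = np.array([[0, 1, 0], [1, 1, 0], [2, 1, 0]], dtype=np.float32)  # s1 shifted by 1 along y

print(CenterOfMassFeature().extract(s1))                # [[1., 0., 0.]]
print(dist(AveragePointwiseEuclideanMetric(), s1, s2))  # (1 + 1 + 1) / 3 = 1.0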

### CosineMetric

class dipy.segment.metric.CosineMetric

Bases: dipy.segment.metricspeed.CythonMetric

Computes the cosine distance between two vectors.

A vector (i.e. an N-dimensional point) is represented as a 2D array with shape (1, nb_dimensions).

Notes

The distance between two vectors $$v_1$$ and $$v_2$$ is equal to $$\frac{1}{\pi} \arccos\left(\frac{v_1 \cdot v_2}{\|v_1\| \|v_2\|}\right)$$ and is bounded within $$[0,1]$$.

Attributes: feature Feature object used to extract features from sequential data is_order_invariant Is this metric invariant to the sequence’s ordering

Methods

 are_compatible Checks if features can be used by metric.dist based on their shape. dist Computes a distance between two data points based on their features.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

### EuclideanMetric

dipy.segment.metric.EuclideanMetric

alias of dipy.segment.metricspeed.SumPointwiseEuclideanMetric

### Feature

class dipy.segment.metric.Feature

Bases: object

Extracts features from a sequential datum.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).

Parameters: is_order_invariant : bool (optional) tells if this feature is invariant to the sequence’s ordering. This means starting from either extremity produces the same features. (Default: True)

Notes

When subclassing Feature, one only needs to override the extract and infer_shape methods.

Attributes: is_order_invariant Is this feature invariant to the sequence’s ordering

Methods

 extract Extracts features from a sequential datum. infer_shape Infers the shape of features extracted from a sequential datum.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

extract()

Extracts features from a sequential datum.

Parameters: datum : 2D array Sequence of N-dimensional points. 2D array Features extracted from datum.
infer_shape()

Infers the shape of features extracted from a sequential datum.

Parameters: datum : 2D array Sequence of N-dimensional points. int, 1-tuple or 2-tuple Shape of the features.
is_order_invariant

Is this feature invariant to the sequence’s ordering
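
Because only extract and infer_shape need to be overridden, a pure-Python feature can be written in a few lines. A hypothetical sketch (EndpointsFeature is not part of DIPY; it assumes the base class accepts the is_order_invariant keyword documented above):

import numpy as np
from dipy.segment.metric import Feature

class EndpointsFeature(Feature):
    """Hypothetical feature returning the first and last points of a sequence."""

    def __init__(self):
        # The endpoints depend on the ordering of the sequence.
        super(EndpointsFeature, self).__init__(is_order_invariant=False)

    def infer_shape(self, datum):
        # Two points, each with the dimensionality of the input.
        return (2, datum.shape[1])

    def extract(self, datum):
        return np.asarray([datum[0], datum[-1]])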

### IdentityFeature

class dipy.segment.metric.IdentityFeature

Bases: dipy.segment.featurespeed.CythonFeature

Extracts features from a sequential datum.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).

The features being extracted are the actual sequence’s points. This is useful for metrics that do not require any pre-processing.

Attributes: is_order_invariant Is this feature invariant to the sequence’s ordering

Methods

 extract Extracts features from a sequential datum. infer_shape Infers the shape of features extracted from a sequential datum.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

### Metric

class dipy.segment.metric.Metric

Bases: object

Computes a distance between two sequential data.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between extracted features, rather than directly between the sequential data.

Parameters: feature : Feature object, optional It is used to extract features before computing the distance.

Notes

When subclassing Metric, one only needs to override the dist and are_compatible methods.

Attributes: feature Feature object used to extract features from sequential data is_order_invariant Is this metric invariant to the sequence’s ordering

Methods

 are_compatible Checks if features can be used by metric.dist based on their shape. dist Computes a distance between two data points based on their features.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

are_compatible()

Checks if features can be used by metric.dist based on their shape.

Basically this method exists so we don’t have to do this check inside the metric.dist function (speedup).

Parameters: shape1 : int, 1-tuple or 2-tuple shape of the first data point’s features shape2 : int, 1-tuple or 2-tuple shape of the second data point’s features are_compatible : bool whether or not shapes are compatible
dist()

Computes a distance between two data points based on their features.

Parameters: features1 : 2D array Features of the first data point. features2 : 2D array Features of the second data point. double Distance between two data points.
feature

Feature object used to extract features from sequential data

is_order_invariant

Is this metric invariant to the sequence’s ordering
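
In the same spirit, a custom metric only has to provide are_compatible and dist. A hypothetical sketch of a maximum pointwise Euclidean distance (the class name is illustrative, not part of DIPY):

import numpy as np
from dipy.segment.metric import Metric

class MaxPointwiseEuclideanMetric(Metric):
    """Hypothetical metric: largest pointwise Euclidean distance."""

    def are_compatible(self, shape1, shape2):
        # Pointwise comparison only makes sense for identically shaped features.
        return shape1 == shape2

    def dist(self, features1, features2):
        return np.max(np.sqrt(np.sum((features1 - features2) ** 2, axis=1)))

# An instance can then be passed wherever a Metric is expected,
# e.g. dipy.segment.metric.dist(MaxPointwiseEuclideanMetric(), s1, s2).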

### MidpointFeature

class dipy.segment.metric.MidpointFeature

Bases: dipy.segment.featurespeed.CythonFeature

Extracts features from a sequential datum.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).

The feature being extracted consists of one N-dimensional point representing the middle point of the sequence (i.e. nb_points//2th point).

Attributes: is_order_invariant Is this feature invariant to the sequence’s ordering

Methods

 extract Extracts features from a sequential datum. infer_shape Infers the shape of features extracted from a sequential datum.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

### MinimumAverageDirectFlipMetric

class dipy.segment.metric.MinimumAverageDirectFlipMetric

Bases: dipy.segment.metricspeed.AveragePointwiseEuclideanMetric

Computes the MDF distance (minimum average direct-flip) between two sequential data.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).

Notes

The distance between two 2D sequential data:

s1       s2

0*   a    *0
  \       |
   \      |
   1*     |
    |  b  *1
    |      \
    2*      \
        c    *2

is equal to $$\min((a+b+c)/3, (a'+b'+c')/3)$$ where $$a$$ is the Euclidean distance between s1[0] and s2[0], $$b$$ between s1[1] and s2[1], $$c$$ between s1[2] and s2[2], $$a'$$ between s1[0] and s2[2], $$b'$$ between s1[1] and s2[1] and $$c'$$ between s1[2] and s2[0].

Attributes: feature Feature object used to extract features from sequential data is_order_invariant Is this metric invariant to the sequence’s ordering

Methods

 are_compatible Checks if features can be used by metric.dist based on their shape. dist Computes a distance between two data points based on their features.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

is_order_invariant

Is this metric invariant to the sequence’s ordering

### ResampleFeature

class dipy.segment.metric.ResampleFeature

Bases: dipy.segment.featurespeed.CythonFeature

Extracts features from a sequential datum.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).

The features being extracted are the points of the sequence once resampled. This is useful for metrics requiring a constant number of points for all streamlines.
Attributes: is_order_invariant Is this feature invariant to the sequence’s ordering

Methods

 extract Extracts features from a sequential datum. infer_shape Infers the shape of features extracted from a sequential datum.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

### SumPointwiseEuclideanMetric

class dipy.segment.metric.SumPointwiseEuclideanMetric

Bases: dipy.segment.metricspeed.CythonMetric

Computes the sum of pointwise Euclidean distances between two sequential data.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between the features, rather than directly between the sequential data.

Parameters: feature : Feature object, optional It is used to extract features before computing the distance.

Notes

The distance between two 2D sequential data:

s1       s2

0*   a    *0
  \       |
   \      |
   1*     |
    |  b  *1
    |      \
    2*      \
        c    *2

is equal to $$a+b+c$$ where $$a$$ is the Euclidean distance between s1[0] and s2[0], $$b$$ between s1[1] and s2[1] and $$c$$ between s1[2] and s2[2].

Attributes: feature Feature object used to extract features from sequential data is_order_invariant Is this metric invariant to the sequence’s ordering

Methods

 are_compatible Checks if features can be used by metric.dist based on their shape. dist Computes a distance between two data points based on their features.
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

### VectorOfEndpointsFeature

class dipy.segment.metric.VectorOfEndpointsFeature

Bases: dipy.segment.featurespeed.CythonFeature

Extracts features from a sequential datum.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).

The feature being extracted consists of one vector in the N-dimensional space pointing from one end-point of the sequence to the other (i.e. S[-1]-S[0]).

Attributes: is_order_invariant Is this feature invariant to the sequence’s ordering

Methods

 extract Extracts features from a sequential datum. infer_shape Infers the shape of features extracted from a sequential datum.
__init__(\$self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

### dist

dipy.segment.metric.dist()

Computes a distance between datum1 and datum2.

A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).

Parameters: metric : Metric object Tells how to compute the distance between datum1 and datum2. datum1 : 2D array Sequence of N-dimensional points. datum2 : 2D array Sequence of N-dimensional points. double Distance between two data points.

### distance_matrix

dipy.segment.metric.distance_matrix()

Computes the distance matrix between two lists of sequential data.

The distance matrix is obtained by computing the pairwise distance of all tuples spawned by the Cartesian product of data1 with data2. If data2 is not provided, the Cartesian product of data1 with itself is used instead. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).

Parameters: metric : Metric object Tells how to compute the distance between two sequential data. data1 : list of 2D arrays List of sequences of N-dimensional points. data2 : list of 2D arrays List of sequences of N-dimensional points. 2D array (double) Distance matrix.
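
A short sketch on two small lists of streamlines (float32 arrays assumed); the result has one row per element of data1 and one column per element of data2:

import numpy as np
from dipy.segment.metric import AveragePointwiseEuclideanMetric, distance_matrix

data1 = [np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=np.float32),
         np.array([[0, 1, 0], [1, 1, 0], [2, 1, 0]], dtype=np.float32)]
data2 = [np.array([[0, 2, 0], [1, 2, 0], [2, 2, 0]], dtype=np.float32)]

D = distance_matrix(AveragePointwiseEuclideanMetric(), data1, data2)
print(D.shape)  # (2, 1)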

### mdf

dipy.segment.metric.mdf(s1, s2)

Computes the MDF (Minimum average Direct-Flip) distance [Garyfallidis12] between two streamlines.

Streamlines must have the same number of points.

Parameters: s1 : 2D array A streamline (sequence of N-dimensional points). s2 : 2D array A streamline (sequence of N-dimensional points). double Distance between two streamlines.

References

 [Garyfallidis12] (1, 2, 3) Garyfallidis E. et al., QuickBundles a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.
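
Because the MDF takes the minimum over the direct and flipped orderings, it is invariant to flipping one of the streamlines, as this small sketch illustrates (float32 arrays assumed):

import numpy as np
from dipy.segment.metric import mdf

s1 = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=np.float32)
s2 = np.array([[0, 1, 0], [1, 1, 0], [2, 1, 0]], dtype=np.float32)
s2_flipped = np.ascontiguousarray(s2[::-1])

print(mdf(s1, s2))          # 1.0
print(mdf(s1, s2_flipped))  # also 1.0: the flipped ordering gives the same MDF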

### QuickBundles

class dipy.segment.quickbundles.QuickBundles(tracks, dist_thr=4.0, pts=12)

Bases: object

Attributes: centroids total_clusters

Methods

 remove_small_clusters(size) Remove clusters with small size
 clusters clusters_sizes downsampled_tracks exemplars label2cluster label2tracks label2tracksids partitions points_per_track remove_cluster remove_clusters remove_tracks virtuals
__init__(tracks, dist_thr=4.0, pts=12)

Highly efficient trajectory clustering [Garyfallidis12].

Parameters: tracks : sequence of (N,3) … (M,3) arrays trajectories (or tractography or streamlines) dist_thr : float distance threshold in the space of the tracks pts : int number of points for simplifying the tracks

References

 [Garyfallidis12] (1, 2) Garyfallidis E. et al., QuickBundles a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.

Methods

 clustering() returns a dict holding the clustering result virtuals() gives the virtuals (track centroids) of the clusters exemplars() gives the exemplars (track medoids) of the clusters
centroids
clusters()
clusters_sizes()
downsampled_tracks()
exemplars(tracks=None)
label2cluster(id)
label2tracks(tracks, id)
label2tracksids(id)
partitions()
points_per_track()
remove_cluster(id)
remove_clusters(list_ids)
remove_small_clusters(size)

Remove clusters with small size

Parameters: size : int, threshold for minimum number of tracks allowed
remove_tracks()
total_clusters
virtuals()
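
A minimal usage sketch of this legacy interface on three toy tracks; with dist_thr=1.0 the two nearly parallel tracks should end up in one cluster and the perpendicular one in another:

import numpy as np
from dipy.segment.quickbundles import QuickBundles

tracks = [np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype='f4'),
          np.array([[0, 0.2, 0], [1, 0.2, 0], [2, 0.2, 0]], dtype='f4'),
          np.array([[0, 0, 0], [0, 1, 0], [0, 2, 0]], dtype='f4')]

qb = QuickBundles(tracks, dist_thr=1.0, pts=3)
print(qb.total_clusters)  # number of clusters found
print(qb.virtuals())      # one centroid (virtual) track per cluster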

### bundles_distances_mdf

dipy.segment.quickbundles.bundles_distances_mdf()

Calculate distances between list of tracks A and list of tracks B

All tracks need to have the same number of points

Parameters: tracksA : sequence of tracks as arrays, [(N,3) .. (N,3)] tracksB : sequence of tracks as arrays, [(N,3) .. (N,3)] DM : array, shape (len(tracksA), len(tracksB)) distances between tracksA and tracksB according to metric


### downsample

dipy.segment.quickbundles.downsample(xyz, n_pols=3)

Downsample for a specific number of points along the curve/track

Uses the length of the curve. It works in a similar fashion to midpoint and arbitrarypoint but it also reduces the number of segments of a track.

Parameters: xyz : array-like shape (N,3) array representing x,y,z of N points in a track n_pols : int integer representing number of points (poles) we need along the curve. xyz2 : array shape (M,3) array representing x,y,z of M points that were extrapolated. M should be equal to n_pols

Examples

>>> import numpy as np
>>> # a semi-circle
>>> theta=np.pi*np.linspace(0,1,100)
>>> x=np.cos(theta)
>>> y=np.sin(theta)
>>> z=0*x
>>> xyz=np.vstack((x,y,z)).T
>>> xyz2=downsample(xyz,3)
>>> # a cosine
>>> x=np.pi*np.linspace(0,1,100)
>>> y=np.cos(theta)
>>> z=0*y
>>> xyz=np.vstack((x,y,z)).T
>>> _= downsample(xyz,3)
>>> len(xyz2)
3
>>> xyz3=downsample(xyz,10)
>>> len(xyz3)
10


### local_skeleton_clustering

dipy.segment.quickbundles.local_skeleton_clustering()

Efficient tractography clustering

Every track needs to have the same number of points. Use dipy.tracking.metrics.downsample to restrict the number of points

Parameters: tracks : sequence of tracks as arrays, shape (N,3) .. (N,3) where N=points d_thr : float average euclidean distance threshold C : dict Clusters.

Notes

The distance calculated between two tracks:

t_1       t_2

0*   a    *0
  \       |
   \      |
   1*     |
    |  b  *1
    |      \
    2*      \
        c    *2


is equal to $$(a+b+c)/3$$ where $$a$$ is the Euclidean distance between t_1[0] and t_2[0], $$b$$ between t_1[1] and t_2[1] and $$c$$ between t_1[2] and t_2[2]. The same is also computed with t_2 flipped (so t_1[0] is compared to t_2[2], etc.).

Visualization:

It is possible to visualize the clustering C from the example above using the dipy.viz module:

from dipy.viz import window, actor
r = window.Renderer()
for c in C:
    color = np.random.rand(3)
    for i in C[c]['indices']:
        # add each track of the cluster with the cluster's color
        r.add(actor.line(tracks[i], color))
window.show(r)


Examples

>>> tracks=[np.array([[0,0,0],[1,0,0,],[2,0,0]]),
...         np.array([[3,0,0],[3.5,1,0],[4,2,0]]),
...         np.array([[3.2,0,0],[3.7,1,0],[4.4,2,0]]),
...         np.array([[3.4,0,0],[3.9,1,0],[4.6,2,0]]),
...         np.array([[0,0.2,0],[1,0.2,0],[2,0.2,0]]),
...         np.array([[2,0.2,0],[1,0.2,0],[0,0.2,0]]),
...         np.array([[0,0,0],[0,1,0],[0,2,0]])]
>>> C = local_skeleton_clustering(tracks, d_thr=0.5)


### warn

dipy.segment.quickbundles.warn()

Issue a warning, or maybe ignore it or raise an exception.

### otsu

dipy.segment.threshold.otsu(image, nbins=256)

Return threshold value based on Otsu’s method. Copied from scikit-image to remove dependency.

Parameters: image : array Input image. nbins : int Number of bins used to calculate histogram. This value is ignored for integer arrays. threshold : float Threshold value.
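
A short sketch on a synthetic two-population image (the data is random and purely illustrative):

import numpy as np
from dipy.segment.threshold import otsu

# Dim background and bright foreground intensities mixed together.
image = np.concatenate([np.random.normal(10, 2, 500),
                        np.random.normal(100, 5, 500)]).reshape(25, 40)
thresh = otsu(image)
foreground = image > thresh
print(thresh, foreground.sum())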

### upper_bound_by_percent

dipy.segment.threshold.upper_bound_by_percent(data, percent=1)

Find the upper bound for visualization of medical images

Calculate the histogram of the image and scan it from right (highest intensities) to left until the accumulated bins contain more than the given percentage of the image; that intensity is returned as the upper bound.

Parameters: data : ndarray percent : float upper_bound : float
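
For example, clipping intensities for display so that only the brightest one percent of the data lies above the bound (a sketch with synthetic data):

import numpy as np
from dipy.segment.threshold import upper_bound_by_percent

data = np.random.rand(64, 64, 30) * 1000
upper = upper_bound_by_percent(data, percent=1)
clipped = np.clip(data, 0, upper)  # intensity range prepared for visualization
print(upper)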

### upper_bound_by_rate

dipy.segment.threshold.upper_bound_by_rate(data, rate=0.05)

Adjusts upper intensity boundary using rates

It calculates the image intensity histogram and, based on the rate value, decides what the upper bound value for intensity normalization should be; the lower bound is usually 0. The rate is the ratio between the number of pixels in a given bin and the number of pixels in the bin with the highest count.

Parameters: data : float Input intensity value data rate : float threshold that determines whether a specific histogram bin should be counted in the normalization range high : float the upper_bound value for normalization

### ConstantObservationModel

class dipy.segment.tissue.ConstantObservationModel

Bases: object

Observation model assuming that the intensity of each class is constant. The model parameters are the means $$\mu_{k}$$ and variances $$\sigma_{k}^{2}$$ associated with each tissue class. According to this model, the observed intensity at voxel $$x$$ is given by $$I(x) = \mu_{k} + \eta_{k}$$ where $$k$$ is the tissue class of voxel $$x$$, and $$\eta_{k}$$ is a Gaussian random variable with zero mean and variance $$\sigma_{k}^{2}$$. The observation model is responsible for computing the negative log-likelihood of observing any given intensity $$z$$ at each voxel $$x$$ assuming the voxel belongs to each class $$k$$. It also provides a default parameter initialization.

Methods

 initialize_param_uniform Initializes the means and variances uniformly negloglikelihood Computes the Gaussian negative log-likelihood of each class at each voxel of image assuming a Gaussian distribution with means and variances given by mu and sigmasq, respectively (constant models along the full volume). prob_image Conditional probability of the label given the image seg_stats Mean and standard deviation for N desired tissue classes update_param Updates the means and the variances in each iteration for all the labels. update_param_new Updates the means and the variances in each iteration for all the labels.
__init__()

Initializes an instance of the ConstantObservationModel class

initialize_param_uniform

Initializes the means and variances uniformly

The means are initialized uniformly along the dynamic range of image. The variances are set to 1 for all classes

Parameters: image : array, 3D structural image nclasses : int, number of desired classes mu : array, 1 x nclasses, mean for each class sigma : array, 1 x nclasses, standard deviation for each class. Set up to 1.0 for all classes.
negloglikelihood

Computes the gaussian negative log-likelihood of each class at each voxel of image assuming a gaussian distribution with means and variances given by mu and sigmasq, respectively (constant models along the full volume). The negative log-likelihood will be written in nloglike.

Parameters: image : ndarray, 3D gray scale structural image mu : ndarray, mean of each class sigmasq : ndarray, variance of each class nclasses : int number of classes nloglike : ndarray, 4D negloglikelihood for each class in each volume
prob_image

Conditional probability of the label given the image

Parameters: img : ndarray, 3D structural gray-scale image nclasses : int, number of tissue classes mu : ndarray, 1 x nclasses, current estimate of the mean of each tissue class sigmasq : ndarray, 1 x nclasses, current estimate of the variance of each tissue class P_L_N : ndarray, 4D probability map of the label given the neighborhood. Previously computed by function prob_neighborhood P_L_Y : ndarray, 4D probability of the label given the input image
seg_stats

Mean and standard deviation for N desired tissue classes

Parameters: input_image : ndarray, 3D structural image seg_image : ndarray, 3D segmented image nclass : int, number of classes (3 in most cases) mu, std: ndarrays, 1 x nclasses dimension Mean and standard deviation for each class
update_param

Updates the means and the variances in each iteration for all the labels. This is for equations 25 and 26 of Zhang et al., IEEE Trans. Med. Imag., Vol. 20, No. 1, Jan 2001.

Parameters: image : ndarray, 3D structural gray-scale image P_L_Y : ndarray, 4D probability map of the label given the input image computed by the expectation maximization (EM) algorithm mu : ndarray, 1 x nclasses, current estimate of the mean of each tissue class. nclasses : int, number of tissue classes mu_upd : ndarray, 1 x nclasses, updated mean of each tissue class var_upd : ndarray, 1 x nclasses, updated variance of each tissue class
update_param_new

Updates the means and the variances in each iteration for all the labels. This is for equations 25 and 26 of the Zhang et al. paper

Parameters: image : ndarray, 3D structural gray-scale image P_L_Y : ndarray, 4D probability map of the label given the input image computed by the expectation maximization (EM) algorithm mu : ndarray, 1 x nclasses, current estimate of the mean of each tissue class. nclasses : int, number of tissue classes mu_upd : ndarray, 1 x nclasses, updated mean of each tissue class var_upd : ndarray, 1 x nclasses, updated variance of each tissue class

### IteratedConditionalModes

class dipy.segment.tissue.IteratedConditionalModes

Bases: object

Methods

 icm_ising Executes one iteration of the ICM algorithm for MRF MAP estimation. initialize_maximum_likelihood Initializes the segmentation of an image with given prob_neighborhood Conditional probability of the label given the neighborhood Equation 2.18 of the Stan Z.
__init__()
icm_ising

Executes one iteration of the ICM algorithm for MRF MAP estimation. The prior distribution of the MRF is a Gibbs distribution with the Potts/Ising model with parameter beta:

https://en.wikipedia.org/wiki/Potts_model

Parameters: nloglike : ndarray, 4D shape, nloglike[x,y,z,k] is the negative log likelihood of class k at voxel (x,y,z) beta : float, positive scalar, it is the parameter of the Potts/Ising model. Determines the smoothness of the output segmentation. seg : ndarray, 3D initial segmentation. This segmentation will change by one iteration of the ICM algorithm new_seg : ndarray, 3D final segmentation energy : ndarray, 3D final energy
initialize_maximum_likelihood
Initializes the segmentation of an image with a given neg-loglikelihood.

Initializes the segmentation of an image with neglog-likelihood field given by nloglike. The class of each voxel is selected as the one with the minimum neglog-likelihood (i.e. maximum-likelihood segmentation).

Parameters: nloglike : ndarray, 4D shape, nloglike[x,y,z,k] is the negative log-likelihood of class k at voxel (x, y, z) seg : ndarray, 3D initial segmentation
prob_neighborhood

Conditional probability of the label given the neighborhood Equation 2.18 of the Stan Z. Li book (Stan Z. Li, Markov Random Field Modeling in Image Analysis, 3rd ed., Advances in Pattern Recognition Series, Springer Verlag 2009.)

Parameters: seg : ndarray, 3D tissue segmentation derived from the ICM model beta : float, scalar that determines the importance of the neighborhood and the spatial smoothness of the segmentation. Usually between 0 to 0.5 nclasses : int, number of tissue classes PLN : ndarray, 4D probability map of the label given the neighborhood of the voxel.

### TissueClassifierHMRF

class dipy.segment.tissue.TissueClassifierHMRF(save_history=False, verbose=True)

Bases: object

This class contains the methods for tissue classification using the Markov Random Fields modeling approach

Methods

 classify(image, nclasses, beta[, tolerance, …]) This method uses the Maximum a posteriori - Markov Random Field approach for segmentation by using the Iterative Conditional Modes and Expectation Maximization to estimate the parameters.
__init__(save_history=False, verbose=True)

Initialize self. See help(type(self)) for accurate signature.

classify(image, nclasses, beta, tolerance=None, max_iter=None)

This method uses the Maximum a posteriori - Markov Random Field approach for segmentation by using the Iterative Conditional Modes and Expectation Maximization to estimate the parameters.

Parameters: image : ndarray, 3D structural image. nclasses : int, number of desired classes. beta : float, smoothing parameter, the higher this number the smoother the output will be. tolerance: float, value that defines the percentage of change tolerated to prevent the ICM loop from stopping. Default is 1e-05. max_iter : int, fixed number of desired iterations. Default is 100. If the user only specifies this parameter, the tolerance value will not be considered. If none of these two parameters is specified, the default values are used. initial_segmentation : ndarray, 3D segmented image with all tissue types specified in nclasses. final_segmentation : ndarray, 3D final refined segmentation containing all tissue types. PVE : ndarray, 3D probability map of each tissue type.
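
A minimal sketch of the typical call; here a random array stands in for a real T1-weighted volume purely so the snippet runs, and max_iter is lowered to keep it quick:

import numpy as np
from dipy.segment.tissue import TissueClassifierHMRF

t1 = np.random.rand(40, 40, 40)  # stand-in for a real 3D structural volume

hmrf = TissueClassifierHMRF(verbose=False)
initial_segmentation, final_segmentation, PVE = hmrf.classify(
    t1, nclasses=3, beta=0.1, max_iter=10)
print(final_segmentation.shape, PVE.shape)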

### add_noise

dipy.segment.tissue.add_noise(signal, snr, S0, noise_type='rician')

Add noise of specified distribution to the signal from a single voxel.

Parameters: signal : 1-d ndarray The signal in the voxel. snr : float The desired signal-to-noise ratio. (See notes below.) If snr is None, return the signal as-is. S0 : float Reference signal for specifying snr. noise_type : string, optional The distribution of noise added. Can be either ‘gaussian’ for Gaussian distributed noise, ‘rician’ for Rice-distributed noise (default) or ‘rayleigh’ for a Rayleigh distribution. signal : array, same shape as the input Signal with added noise.

Notes

SNR is defined here, following [1], as S0 / sigma, where sigma is the standard deviation of the two Gaussian distributions forming the real and imaginary components of the Rician noise distribution (see [2]).

References

 [1] (1, 2) Descoteaux, Angelino, Fitzgibbons and Deriche (2007) Regularized, fast and robust q-ball imaging. MRM, 58: 497-510
 [2] (1, 2) Gudbjartsson, H. and Patz, S. (1995). The Rician distribution of noisy MRI data. MRM 34: 910-914.

Examples

>>> signal = np.arange(800).reshape(2, 2, 2, 100)
>>> signal_w_noise = add_noise(signal, 10., 100., noise_type='rician')