segment
segment.benchmarks
segment.benchmarks.bench_quickbundles
segment.bundles
segment.clustering
segment.mask
segment.metric
segment.threshold
segment.tissue
MDFpy
Metric
QB_New
Streamlines
BundleMinDistanceAsymmetricMetric
BundleMinDistanceMetric
BundleSumDistanceMatrixMetric
RecoBundles
StreamlineLinearRegistration
Streamlines
chain
ABCMeta
AveragePointwiseEuclideanMetric
Cluster
ClusterCentroid
ClusterMap
ClusterMapCentroid
Clustering
Identity
Metric
MinimumAverageDirectFlipMetric
QuickBundles
QuickBundlesX
ResampleFeature
TreeCluster
TreeClusterMap
ArcLengthFeature
AveragePointwiseEuclideanMetric
CenterOfMassFeature
CosineMetric
EuclideanMetric
Feature
IdentityFeature
Metric
MidpointFeature
MinimumAverageDirectFlipMetric
ResampleFeature
SumPointwiseEuclideanMetric
VectorOfEndpointsFeature
ConstantObservationModel
IteratedConditionalModes
TissueClassifierHMRF
segment
segment.benchmarks.bench_quickbundles
Benchmarks for QuickBundles
Run all benchmarks with:
import dipy.segment as dipysegment
dipysegment.bench()
With pytest, run this benchmark with:
pytest -svv -c bench.ini /path/to/bench_quickbundles.py


Computes a distance between two sequential data. 

alias of dipy.segment.clustering.QuickBundles 

alias of nibabel.streamlines.array_sequence.ArraySequence 


Raises an AssertionError if two array_like objects are not equal. 



Raises an AssertionError if two objects are not equal. 

Provide full paths to example or test datasets. 

Load the stateful tractogram from any format (trk, tck, vtk, fib, dpy) 

Return elapsed time for executing code in the namespace of the caller. 
Change the number of points of streamlines 
segment.bundles

Asymmetric Bundle-based Minimum distance 

Bundle-based Minimum Distance aka BMD 

Bundle-based Sum Distance aka BMD 

Methods 

Methods 
alias of nibabel.streamlines.array_sequence.ArraySequence 

chain(*iterables) --> chain object 


Apply affine matrix aff to points pts 

Find bundle adjacency between two given tracks/bundles 
Calculate distances between list of tracks A and list of tracks B 

Calculate distances between list of tracks A and list of tracks B 




Euclidean length of streamlines 



Run QuickBundlesX and then run again on the centroids of the last layer 

Select a random set of streamlines 
Change the number of points of streamlines 


Return the current time in seconds since the Epoch. 
segment.clustering
Metaclass for defining Abstract Base Classes (ABCs). 

Computes the average of pointwise Euclidean distances between two sequential data. 


Provides functionalities for interacting with a cluster. 

Provides functionalities for interacting with a cluster. 

Provides functionalities for interacting with clustering outputs. 

Provides functionalities for interacting with clustering outputs that have centroids. 
Methods 

Provides identity indexing functionality. 

Computes a distance between two sequential data. 

Computes the MDF distance (minimum average direct-flip) between two sequential data. 


Clusters streamlines using QuickBundles [Garyfallidis12]. 

Clusters streamlines using QuickBundlesX. 
Extracts features from a sequential datum. 






A decorator indicating abstract methods. 



Run QuickBundlesX and then run again on the centroids of the last layer 
Change the number of points of streamlines 


Return the current time in seconds since the Epoch. 
segment.mask

Mask vol with mask. 

Multidimensional binary dilation with the given structuring element. 

Compute the bounding box of nonzero intensity voxels in the volume. 

Cleans a segmentation of the corpus callosum so no random pixels are included. 

Color fractional anisotropy of diffusion tensor 

Crops the input volume. 

Fractional anisotropy (FA) of a diffusion tensor. 

Generate a binary structure for binary morphological operations. 

Calculate a multidimensional median filter. 

Simple brain extraction tool method for images from DWI data. 

Applies median filter multiple times on input data. 

Return threshold value based on Otsu’s method. 

Segment the cfa inside roi using the values from threshold as bounds. 

Issue a warning, or maybe ignore it or raise an exception. 
segment.metric
Extracts features from a sequential datum. 

Computes the average of pointwise Euclidean distances between two sequential data. 

Extracts features from a sequential datum. 

Computes the cosine distance between two vectors. 

alias of 

Extracts features from a sequential datum. 

Extracts features from a sequential datum. 

Computes a distance between two sequential data. 

Extracts features from a sequential datum. 

Computes the MDF distance (minimum average direct-flip) between two sequential data. 

Extracts features from a sequential datum. 

Computes the sum of pointwise Euclidean distances between two sequential data. 

Extracts features from a sequential datum. 


Computes a distance between datum1 and datum2. 
Computes the distance matrix between two lists of sequential data. 


Computes the MDF (Minimum average Direct-Flip) distance [Garyfallidis12] between two streamlines. 
segment.threshold

Return threshold value based on Otsu’s method. 

Find the upper bound for visualization of medical images 

Adjusts upper intensity boundary using rates 
segment.tissue
Observation model assuming that the intensity of each class is constant. 

Methods 


This class contains the methods for tissue classification using the Markov Random Fields modeling approach 

Add noise of specified distribution to the signal from a single voxel. 
MDFpy
dipy.segment.benchmarks.bench_quickbundles.
MDFpy
Bases: dipy.segment.metricspeed.Metric
feature
Feature object used to extract features from sequential data
is_order_invariant
Is this metric invariant to the sequence’s ordering
Methods

Checks if features can be used by metric.dist based on their shape. 

Computes a distance between two data points based on their features. 
are_compatible
(self, shape1, shape2)Checks if features can be used by metric.dist based on their shape.
Basically this method exists so we don’t have to do this check inside the metric.dist function (speedup).
shape of the first data point’s features
shape of the second data point’s features
whether or not shapes are compatible
Metric
dipy.segment.benchmarks.bench_quickbundles.
Metric
Bases: object
Computes a distance between two sequential data.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between extracted features, rather than directly between the sequential data.
It is used to extract features before computing the distance.
Notes
When subclassing Metric, one only needs to override the dist and are_compatible methods.
feature
Feature object used to extract features from sequential data
is_order_invariant
Is this metric invariant to the sequence’s ordering
Methods
Checks if features can be used by metric.dist based on their shape. 


Computes a distance between two data points based on their features. 
are_compatible
(self, shape1, shape2)Checks if features can be used by metric.dist based on their shape.
Basically this method exists so we don’t have to do this check inside the metric.dist function (speedup).
shape of the first data point’s features
shape of the second data point’s features
whether or not shapes are compatible
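The Notes above say that subclassing Metric only requires overriding dist and are_compatible. As a rough illustration (a plain-Python analogue of that interface, not dipy's Cython base class), an MDF-style metric in the spirit of the benchmark's MDFpy might look like:

```python
import numpy as np

# Plain-Python analogue of subclassing Metric: override only
# are_compatible and dist. Illustrative sketch, not dipy's Cython class.
class MDFLike:
    @property
    def is_order_invariant(self):
        # Flipping either sequence gives the same distance.
        return True

    def are_compatible(self, shape1, shape2):
        # Both streamlines need the same number of points and dimensions.
        return shape1 == shape2

    def dist(self, datum1, datum2):
        # Average pointwise Euclidean distance, direct and flipped;
        # the MDF distance is the minimum of the two.
        direct = np.mean(np.linalg.norm(datum1 - datum2, axis=1))
        flipped = np.mean(np.linalg.norm(datum1 - datum2[::-1], axis=1))
        return min(direct, flipped)

metric = MDFLike()
s1 = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0]])
s2 = s1[::-1]  # same curve, opposite orientation
assert metric.are_compatible(s1.shape, s2.shape)
print(metric.dist(s1, s2))  # 0.0: MDF ignores orientation
```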
QB_New
dipy.segment.benchmarks.bench_quickbundles.
QB_New
alias of dipy.segment.clustering.QuickBundles
dipy.segment.benchmarks.bench_quickbundles.
assert_array_equal
(x, y, err_msg='', verbose=True)Raises an AssertionError if two array_like objects are not equal.
Given two array_like objects, check that the shape is equal and all elements of these objects are equal. An exception is raised at shape mismatch or conflicting values. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions.
The usual caution for verifying equality with floating point numbers is advised.
The actual object to check.
The desired, expected object.
The error message to be printed in case of failure.
If True, the conflicting values are appended to the error message.
If actual and desired objects are not equal.
See also
assert_allclose
Compare two array_like objects for equality with desired relative and/or absolute precision.
assert_array_almost_equal_nulp
, assert_array_max_ulp
, assert_equal
Examples
The first assert does not raise an exception:
>>> np.testing.assert_array_equal([1.0,2.33333,np.nan],
... [np.exp(0),2.33333, np.nan])
Assert fails with numerical imprecision with floats:
>>> np.testing.assert_array_equal([1.0,np.pi,np.nan],
... [1, np.sqrt(np.pi)**2, np.nan])
Traceback (most recent call last):
...
AssertionError:
Arrays are not equal
Mismatch: 33.3%
Max absolute difference: 4.4408921e-16
Max relative difference: 1.41357986e-16
x: array([1. , 3.141593, nan])
y: array([1. , 3.141593, nan])
Use assert_allclose or one of the nulp (number of floating point values) functions for these cases instead:
>>> np.testing.assert_allclose([1.0,np.pi,np.nan],
... [1, np.sqrt(np.pi)**2, np.nan],
... rtol=1e-10, atol=0)
dipy.segment.benchmarks.bench_quickbundles.
assert_equal
(actual, desired, err_msg='', verbose=True)Raises an AssertionError if two objects are not equal.
Given two objects (scalars, lists, tuples, dictionaries or numpy arrays), check that all elements of these objects are equal. An exception is raised at the first conflicting values.
This function handles NaN comparisons as if NaN was a “normal” number. That is, no assertion is raised if both objects have NaNs in the same positions. This is in contrast to the IEEE standard on NaNs, which says that NaN compared to anything must return False.
The object to check.
The expected object.
The error message to be printed in case of failure.
If True, the conflicting values are appended to the error message.
If actual and desired are not equal.
Examples
>>> np.testing.assert_equal([4,5], [4,6])
Traceback (most recent call last):
...
AssertionError:
Items are not equal:
item=1
ACTUAL: 5
DESIRED: 6
The following comparison does not raise an exception. There are NaNs in the inputs, but they are in the same positions.
>>> np.testing.assert_equal(np.array([1.0, 2.0, np.nan]), [1, 2, np.nan])
dipy.segment.benchmarks.bench_quickbundles.
get_fnames
(name='small_64D')Provide full paths to example or test datasets.
the filename(s) of the dataset to return, one of:
'small_64D': small region of interest nifti, bvecs, bvals, 64 directions
'small_101D': small region of interest nifti, bvecs, bvals, 101 directions
'aniso_vox': volume with anisotropic voxel size as Nifti
'fornix': 300 tracks in Trackvis format (from Pittsburgh Brain Competition)
of 101 directions tested on Siemens 3T Trio
'small_25': small ROI (10x8x2) DTI data (b value 2000, 25 directions)
'test_piesno': slice of N=8, K=14 diffusion data
'reg_c': small 2D image used for validating registration
'reg_o': small 2D image used for validating registration
'cb_2': two vectorized cingulum bundles
filenames for dataset
Examples
>>> import numpy as np
>>> from dipy.io.image import load_nifti
>>> from dipy.data import get_fnames
>>> fimg, fbvals, fbvecs = get_fnames('small_101D')
>>> bvals=np.loadtxt(fbvals)
>>> bvecs=np.loadtxt(fbvecs).T
>>> data, affine = load_nifti(fimg)
>>> data.shape == (6, 10, 10, 102)
True
>>> bvals.shape == (102,)
True
>>> bvecs.shape == (102, 3)
True
dipy.segment.benchmarks.bench_quickbundles.
load_tractogram
(filename, reference, to_space=<Space.RASMM: 'rasmm'>, to_origin=<Origin.NIFTI: 'center'>, bbox_valid_check=True, trk_header_check=True)Load the stateful tractogram from any format (trk, tck, vtk, fib, dpy)
Filename with valid extension
trk.header (dict), or 'same' if the input is a trk file. Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion used for streamlines generation
Space to which the streamlines will be transformed after loading
NIFTI standard, default (center of the voxel) TRACKVIS standard (corner of the voxel)
Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce valid file.
Verification that the reference has the same header as the spatial attributes of the input tractogram when a trk file is loaded
The tractogram to load (must have been saved properly)
dipy.segment.benchmarks.bench_quickbundles.
measure
(code_str, times=1, label=None)Return elapsed time for executing code in the namespace of the caller.
The supplied code string is compiled with the Python builtin compile().
The precision of the timing is 10 milliseconds. If the code will execute
fast on this timescale, it can be executed many times to get reasonable
timing accuracy.
The code to be timed.
The number of times the code is executed. Default is 1. The code is only compiled once.
A label to identify code_str with. This is passed into compile() as the second argument (for runtime error messages).
Total elapsed time in seconds for executing code_str times times.
Examples
>>> times = 10
>>> etime = np.testing.measure('for i in range(1000): np.sqrt(i**2)', times=times)
>>> print("Time for a single execution : ", etime / times, "s")
Time for a single execution : 0.005 s
dipy.segment.benchmarks.bench_quickbundles.
set_number_of_points
(streamlines, nb_points=3)Change the number of points of streamlines (either by downsampling or upsampling)
Change the number of points of streamlines in order to obtain nb_points-1 segments of equal length. Points of streamlines will be modified along the curve.
dipy.tracking.Streamlines
If ndarray, must have shape (N,3) where N is the number of points
of the streamline.
If list, each item must be ndarray shape (Ni,3) where Ni is the number
of points of streamline i.
If dipy.tracking.Streamlines, its common_shape must be 3.
integer representing number of points wanted along the curve.
dipy.tracking.Streamlines
Results of the downsampling or upsampling process.
Examples
>>> from dipy.tracking.streamline import set_number_of_points
>>> import numpy as np
One streamline, a semicircle:
>>> theta = np.pi*np.linspace(0, 1, 100)
>>> x = np.cos(theta)
>>> y = np.sin(theta)
>>> z = 0 * x
>>> streamline = np.vstack((x, y, z)).T
>>> modified_streamline = set_number_of_points(streamline, 3)
>>> len(modified_streamline)
3
Multiple streamlines:
>>> streamlines = [streamline, streamline[::2]]
>>> new_streamlines = set_number_of_points(streamlines, 10)
>>> [len(s) for s in streamlines]
[100, 50]
>>> [len(s) for s in new_streamlines]
[10, 10]
BundleMinDistanceAsymmetricMetric
dipy.segment.bundles.
BundleMinDistanceAsymmetricMetric
(num_threads=None)Bases: dipy.align.streamlinear.BundleMinDistanceMetric
Asymmetric Bundle-based Minimum distance
This is a cost function that can be used by the StreamlineLinearRegistration class.
Methods

Distance calculated from this Metric 

Setup static and moving sets of streamlines 
__init__
(self, num_threads=None)An abstract class for the metric used for streamline registration
If the two sets of streamlines match exactly then method distance
of this object should be minimum.
Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.
BundleMinDistanceMetric
dipy.segment.bundles.
BundleMinDistanceMetric
(num_threads=None)Bases: dipy.align.streamlinear.StreamlineDistanceMetric
Bundle-based Minimum Distance aka BMD
This is the cost function used by the StreamlineLinearRegistration class.
References
Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014.
Methods
setup(static, moving) 

distance(xopt) 
__init__
(self, num_threads=None)An abstract class for the metric used for streamline registration
If the two sets of streamlines match exactly then method distance
of this object should be minimum.
Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.
distance
(self, xopt)Distance calculated from this Metric
List of affine parameters as an 1D vector,
setup
(self, static, moving)Setup static and moving sets of streamlines
Fixed or reference set of streamlines.
Moving streamlines.
Number of threads. If None (default) then all available threads will be used.
Notes
Call this after the object is initiated and before distance.
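To make the BMD cost concrete, here is a small numpy sketch. It follows the formulation we take from the Garyfallidis et al. 2014 reference (combine the row-wise and column-wise minima of the pairwise MDF distance matrix, then square); the constants and details of dipy's compiled implementation may differ, so treat this as an illustration only:

```python
import numpy as np

def mdf(s1, s2):
    # Minimum average direct-flip distance between two resampled streamlines.
    direct = np.mean(np.linalg.norm(s1 - s2, axis=1))
    flipped = np.mean(np.linalg.norm(s1 - s2[::-1], axis=1))
    return min(direct, flipped)

def bmd(static, moving):
    # Bundle-based Minimum Distance sketch: pairwise MDF matrix, then
    # combine row-wise and column-wise minima. The (1/4)(...)**2 form is
    # our reading of the BMD definition; dipy's compiled code may differ.
    D = np.array([[mdf(s, m) for m in moving] for s in static])
    return 0.25 * (D.min(axis=1).mean() + D.min(axis=0).mean()) ** 2

bundle = [np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0]]),
          np.array([[0., 1, 0], [1, 1, 0], [2, 1, 0]])]
print(bmd(bundle, bundle))  # 0.0 for identical bundles
```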
BundleSumDistanceMatrixMetric
dipy.segment.bundles.
BundleSumDistanceMatrixMetric
(num_threads=None)Bases: dipy.align.streamlinear.BundleMinDistanceMatrixMetric
Bundle-based Sum Distance aka BMD
This is a cost function that can be used by the StreamlineLinearRegistration class.
Notes
The difference with BundleMinDistanceMatrixMetric is that it uses the sum of the distance matrix and not the sum of mins.
Methods
setup(static, moving) 

distance(xopt) 
__init__
(self, num_threads=None)An abstract class for the metric used for streamline registration
If the two sets of streamlines match exactly then method distance
of this object should be minimum.
Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.
RecoBundles
dipy.segment.bundles.
RecoBundles
(streamlines, greater_than=50, less_than=1000000, cluster_map=None, clust_thr=15, nb_pts=20, rng=None, verbose=False)Bases: object
Methods

Compare the similarity between two given bundles, model bundle, and extracted bundle. 

Recognize the model_bundle in self.streamlines 

Refine and recognize the model_bundle in self.streamlines This method expects once pruned streamlines as input. 
__init__
(self, streamlines, greater_than=50, less_than=1000000, cluster_map=None, clust_thr=15, nb_pts=20, rng=None, verbose=False)Recognition of bundles
Extract bundles from a participant's tractogram using model bundles segmented from a different subject or an atlas of bundles. See [Garyfallidis17] for the details.
The tractogram in which you want to recognize bundles.
Keep streamlines that have length greater than this value (default 50)
Keep streamlines have length less than this value (default 1000000)
Provide existing clustering to start RB faster (default None).
Distance threshold in mm for clustering streamlines. Default: 15.
Number of points per streamline (default 20)
If None define RandomState in initialization function. Default: None
If True, log information.
Notes
Make sure that, before creating this class, the streamlines and the model bundles are roughly in the same space. Also, the default thresholds are assumed in RAS 1mm^3 space. You may want to adjust those if your streamlines are not in world coordinates.
References
Garyfallidis et al. Recognition of white matter bundles using local and global streamlinebased registration and clustering, Neuroimage, 2017.
evaluate_results
(self, model_bundle, pruned_streamlines, slr_select)Compare the similarity between two given bundles, model bundle, and extracted bundle.
Select the number of streamlines from model to neighborhood of model to perform the local SLR.
bundle adjacency value between model bundle and pruned bundle
bundle minimum distance value between model bundle and pruned bundle
recognize
(self, model_bundle, model_clust_thr, reduction_thr=10, reduction_distance='mdf', slr=True, slr_num_threads=None, slr_metric=None, slr_x0=None, slr_bounds=None, slr_select=(400, 600), slr_method='L-BFGS-B', pruning_thr=5, pruning_distance='mdf')Recognize the model_bundle in self.streamlines
mdf or mam (default mdf)
Use Streamlinebased Linear Registration (SLR) locally (default True)
(default None)
(default None)
Select the number of streamlines from model to neighborhood of model to perform the local SLR.
Optimization method (default 'L-BFGS-B')
MDF (‘mdf’) and MAM (‘mam’)
Recognized bundle in the space of the model tractogram
Indices of recognized bundle in the original tractogram
References
Garyfallidis et al. Recognition of white matter bundles using local and global streamlinebased registration and clustering, Neuroimage, 2017.
refine
(self, model_bundle, pruned_streamlines, model_clust_thr, reduction_thr=14, reduction_distance='mdf', slr=True, slr_metric=None, slr_x0=None, slr_bounds=None, slr_select=(400, 600), slr_method='L-BFGS-B', pruning_thr=6, pruning_distance='mdf')Refine and recognize the model_bundle in self.streamlines. This method expects once pruned streamlines as input. It refines the first output of RecoBundles by applying a second local SLR (optional), and a second pruning. This method is useful when we are dealing with noisy data or when we want to extract small tracks from tractograms.
mdf or mam (default mam)
Use Streamlinebased Linear Registration (SLR) locally (default True)
(default None)
(default None)
Select the number of streamlines from model to neighborhood of model to perform the local SLR.
Optimization method (default 'L-BFGS-B')
MDF (‘mdf’) and MAM (‘mam’)
Recognized bundle in the space of the model tractogram
Indices of recognized bundle in the original tractogram
References
Garyfallidis et al. Recognition of white matter bundles using local and global streamlinebased registration and clustering, Neuroimage, 2017.
StreamlineLinearRegistration
dipy.segment.bundles.
StreamlineLinearRegistration
(metric=None, x0='rigid', method='L-BFGS-B', bounds=None, verbose=False, options=None, evolution=False, num_threads=None)Bases: object
Methods

Find the minimum of the provided metric. 
__init__
(self, metric=None, x0='rigid', method='L-BFGS-B', bounds=None, verbose=False, options=None, evolution=False, num_threads=None)Linear registration of 2 sets of streamlines [Garyfallidis15].
If None and fast is False then the BMD distance is used. If fast is True then a faster implementation of BMD is used. Otherwise, use the given distance metric.
Initial parametrization for the optimization.
a) If 6 elements, only rigid registration is performed, with the first 3 elements for translation and the next 3 for rotation. b) If 7 elements, isotropic scaling is also performed (similarity). c) If 12 elements, translation, rotation (in degrees), scaling and shearing are performed (affine).
Here is an example of x0 with 12 elements:
x0=np.array([0, 10, 0, 40, 0, 0, 2., 1.5, 1, 0.1, 0.5, 0])
This has translation (0, 10, 0), rotation (40, 0, 0) in degrees, scaling (2., 1.5, 1) and shearing (0.1, 0.5, 0).
x0 = np.array([0, 0, 0, 0, 0, 0])
x0 = np.array([0, 0, 0, 0, 0, 0, 1.])
x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])
'L-BFGS-B' or 'Powell' optimizers can be used. Default is 'L-BFGS-B'.
If method == 'L-BFGS-B' then we can use bounded optimization. For example, for the six parameters of rigid rotation we can set the bounds = [(-30, 30), (-30, 30), (-30, 30), (-45, 45), (-45, 45), (-45, 45)]
That means that we have set the bounds for the three translations and three rotation axes (in degrees).
If True, information about the optimization is shown. Default: False.
Extra options to be used with the selected method.
If True save the transformation for each iteration of the optimizer. Default is False. Supported only with Scipy >= 0.11.
Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.
References
Garyfallidis et al. “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015
Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014.
Garyfallidis et al. Recognition of white matter bundles using local and global streamlinebased registration and clustering, Neuroimage, 2017.
optimize
(self, static, moving, mat=None)Find the minimum of the provided metric.
Reference or fixed set of streamlines.
Moving set of streamlines.
Transformation (4, 4) matrix to start the registration. mat is applied to moving. Default value None, which means that the initial transformation will be generated by shifting the centers of the moving and static sets of streamlines to the origin.
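The x0 layout described above (3 translations, 3 rotations in degrees, 3 scalings, 3 shearings) can be illustrated by assembling a 4x4 affine. The composition order below is an assumption for illustration only, not necessarily dipy's convention:

```python
import numpy as np

def affine_from_x0(x0):
    # Illustrative only: build a 4x4 affine from the 12-element x0 layout
    # (tx, ty, tz, rx, ry, rz in degrees, sx, sy, sz, 3 shear terms).
    # The composition T @ Rz @ Ry @ Rx @ Shear @ Scale is an assumption.
    tx, ty, tz, rx, ry, rz, sx, sy, sz, shxy, shxz, shyz = x0
    rx, ry, rz = np.radians([rx, ry, rz])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz), np.cos(rz), 0],
                   [0, 0, 1]])
    Sh = np.array([[1, shxy, shxz], [0, 1, shyz], [0, 0, 1]])
    Sc = np.diag([sx, sy, sz])
    A = np.eye(4)
    A[:3, :3] = Rz @ Ry @ Rx @ Sh @ Sc
    A[:3, 3] = [tx, ty, tz]
    return A

# An affine x0 with zero motion, unit scale and no shear is the identity:
x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1., 0, 0, 0])
print(np.allclose(affine_from_x0(x0), np.eye(4)))  # True
```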
chain
dipy.segment.bundles.
chain
Bases: object
chain(*iterables) --> chain object
Return a chain object whose .__next__() method returns elements from the first iterable until it is exhausted, then elements from the next iterable, until all of the iterables are exhausted.
Methods
chain.from_iterable(iterable) --> chain object 
dipy.segment.bundles.
apply_affine
(aff, pts)Apply affine matrix aff to points pts
Returns result of application of aff to the right of pts. The coordinate dimension of pts should be the last.
For the 3D case, aff will be shape (4,4) and pts will have final axis length 3 - maybe it will just be N by 3. The return value is the transformed points, in this case:
res = np.dot(aff[:3,:3], pts.T) + aff[:3,3:4]
transformed_pts = res.T
This routine is more general than 3D, in that aff can have any shape (N,N), and pts can have any shape, as long as the last dimension is for the coordinates, and is therefore length N-1.
Homogeneous affine, for 3D points, will be 4 by 4. Contrary to first appearance, the affine will be applied on the left of pts.
Points, where the last dimension contains the coordinates of each point. For 3D, the last dimension will be length 3.
transformed points
Examples
>>> aff = np.array([[0,2,0,10],[3,0,0,11],[0,0,4,12],[0,0,0,1]])
>>> pts = np.array([[1,2,3],[2,3,4],[4,5,6],[6,7,8]])
>>> apply_affine(aff, pts)
array([[14, 14, 24],
[16, 17, 28],
[20, 23, 36],
[24, 29, 44]]...)
Just to show that in the simple 3D case, it is equivalent to:
>>> (np.dot(aff[:3,:3], pts.T) + aff[:3,3:4]).T
array([[14, 14, 24],
[16, 17, 28],
[20, 23, 36],
[24, 29, 44]]...)
But pts can be a more complicated shape:
>>> pts = pts.reshape((2,2,3))
>>> apply_affine(aff, pts)
array([[[14, 14, 24],
[16, 17, 28]],
[[20, 23, 36],
[24, 29, 44]]]...)
dipy.segment.bundles.
bundle_adjacency
(dtracks0, dtracks1, threshold)Find bundle adjacency between two given tracks/bundles
dtracks0, dtracks1 : Streamlines
threshold : float
Garyfallidis E. et al., QuickBundles, a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.
dipy.segment.bundles.
bundles_distances_mam
(tracksA, tracksB, metric='avg')Calculate distances between list of tracks A and list of tracks B
of tracks as arrays, shape (N1,3) .. (Nm,3)
of tracks as arrays, shape (N1,3) .. (Nm,3)
‘avg’, ‘min’, ‘max’
distances between tracksA and tracksB according to metric
dipy.segment.bundles.
bundles_distances_mdf
(tracksA, tracksB)Calculate distances between list of tracks A and list of tracks B
All tracks need to have the same number of points
of tracks as arrays, [(N,3) .. (N,3)]
of tracks as arrays, [(N,3) .. (N,3)]
distances between tracksA and tracksB according to metric
See also
dipy.metrics.downsample
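The computation described above (every track with the same number of points, minimum of direct and flipped average distances) can be sketched in plain numpy; this is an illustration of what the compiled dipy routine computes, not the routine itself:

```python
import numpy as np

def mdf_distance_matrix(tracksA, tracksB):
    # Pairwise MDF distances between two lists of tracks. Every track must
    # have the same number of points (resample first if needed).
    D = np.empty((len(tracksA), len(tracksB)))
    for i, a in enumerate(tracksA):
        for j, b in enumerate(tracksB):
            direct = np.mean(np.linalg.norm(a - b, axis=1))
            flipped = np.mean(np.linalg.norm(a - b[::-1], axis=1))
            D[i, j] = min(direct, flipped)
    return D

line = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0]])
D = mdf_distance_matrix([line, line + [0, 1, 0]], [line])
print(D)  # [[0.], [1.]]
```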
dipy.segment.bundles.
length
(streamlines)Euclidean length of streamlines
Length is in mm only if streamlines are expressed in world coordinates.
dipy.tracking.Streamlines
If ndarray, must have shape (N,3) where N is the number of points
of the streamline.
If list, each item must be ndarray shape (Ni,3) where Ni is the number
of points of streamline i.
If dipy.tracking.Streamlines, its common_shape must be 3.
If there is only one streamline, a scalar representing the length of the streamline. If there are several streamlines, ndarray containing the length of every streamline.
Examples
>>> from dipy.tracking.streamline import length
>>> import numpy as np
>>> streamline = np.array([[1, 1, 1], [2, 3, 4], [0, 0, 0]])
>>> expected_length = np.sqrt([1+2**2+3**2, 2**2+3**2+4**2]).sum()
>>> length(streamline) == expected_length
True
>>> streamlines = [streamline, np.vstack([streamline, streamline[::-1]])]
>>> expected_lengths = [expected_length, 2*expected_length]
>>> lengths = [length(streamlines[0]), length(streamlines[1])]
>>> np.allclose(lengths, expected_lengths)
True
>>> length([])
0.0
>>> length(np.array([[1, 2, 3]]))
0.0
dipy.segment.bundles.
qbx_and_merge
(streamlines, thresholds, nb_pts=20, select_randomly=None, rng=None, verbose=False)Run QuickBundlesX and then run again on the centroids of the last layer
Running QuickBundles again at a layer has the effect of merging some of the clusters that may be originally divided because of branching. This function helps obtain a result of QuickBundles quality but with QuickBundlesX speed. The merging phase has low cost because it is applied only on the centroids rather than the entire dataset.
List of distance thresholds for QuickBundlesX.
Number of points for discretizing each streamline
Randomly select a specific number of streamlines. If None all the streamlines are used.
If None then RandomState is initialized internally.
If True, log information. Default False.
Contains the clusters of the last layer of QuickBundlesX after merging.
References
Garyfallidis E. et al., QuickBundles a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.
Garyfallidis E. et al. QuickBundlesX: Sequential clustering of millions of streamlines in multiple levels of detail at record execution time. Proceedings of the, International Society of Magnetic Resonance in Medicine (ISMRM). Singapore, 4187, 2016.
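The merge idea, cluster once and then re-cluster the centroids, can be shown with a toy greedy threshold clustering standing in for QuickBundles (the real algorithm uses a multi-level tree and is far faster; this sketch only illustrates the merging phase):

```python
import numpy as np

def greedy_cluster(streamlines, threshold):
    # Toy stand-in for QuickBundles: assign each streamline to the first
    # centroid within `threshold` (MDF distance), else start a new cluster.
    centroids, members = [], []
    for idx, s in enumerate(streamlines):
        for k, c in enumerate(centroids):
            direct = np.mean(np.linalg.norm(s - c, axis=1))
            flipped = np.mean(np.linalg.norm(s - c[::-1], axis=1))
            if min(direct, flipped) < threshold:
                members[k].append(idx)
                break
        else:
            centroids.append(s)
            members.append([idx])
    return centroids, members

# Two well-separated groups of straight lines: a first pass with a small
# threshold over-divides each group; a second pass on the centroids with a
# larger threshold merges the pieces, which is the idea behind qbx_and_merge.
line = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0]])
group_a = [line + [0, 0.1 * i, 0] for i in range(4)]
group_b = [line + [0, 10 + 0.1 * i, 0] for i in range(4)]
centroids, _ = greedy_cluster(group_a + group_b, threshold=0.15)
merged, _ = greedy_cluster(centroids, threshold=1.0)
print(len(centroids), len(merged))  # 4 2
```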
dipy.segment.bundles.
select_random_set_of_streamlines
(streamlines, select, rng=None)Select a random set of streamlines
Object of 2D ndarrays of shape[1]==3
Number of streamlines to select. If there are fewer streamlines than select, then select=len(streamlines).
Default None.
Notes
The same streamline will not be selected twice.
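The selection behaviour described here (no streamline selected twice, select clamped to the number of streamlines) can be sketched as follows; select_random below is an illustrative helper, not the dipy function itself:

```python
import numpy as np

def select_random(streamlines, select, rng=None):
    # Sample without replacement, so the same streamline is never selected
    # twice, and clamp `select` to the number of available streamlines.
    rng = rng or np.random.RandomState()
    select = min(select, len(streamlines))
    idx = rng.choice(len(streamlines), size=select, replace=False)
    return [streamlines[i] for i in idx]

streamlines = [np.random.rand(10, 3) for _ in range(5)]
subset = select_random(streamlines, 3, rng=np.random.RandomState(42))
print(len(subset))  # 3
```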
dipy.segment.bundles.
set_number_of_points
(streamlines, nb_points=3)Change the number of points of streamlines (either by downsampling or upsampling)
Change the number of points of streamlines in order to obtain nb_points-1 segments of equal length. Points of streamlines will be modified along the curve.
dipy.tracking.Streamlines
If ndarray, must have shape (N,3) where N is the number of points
of the streamline.
If list, each item must be ndarray shape (Ni,3) where Ni is the number
of points of streamline i.
If dipy.tracking.Streamlines, its common_shape must be 3.
integer representing number of points wanted along the curve.
dipy.tracking.Streamlines
Results of the downsampling or upsampling process.
Examples
>>> from dipy.tracking.streamline import set_number_of_points
>>> import numpy as np
One streamline, a semicircle:
>>> theta = np.pi*np.linspace(0, 1, 100)
>>> x = np.cos(theta)
>>> y = np.sin(theta)
>>> z = 0 * x
>>> streamline = np.vstack((x, y, z)).T
>>> modified_streamline = set_number_of_points(streamline, 3)
>>> len(modified_streamline)
3
Multiple streamlines:
>>> streamlines = [streamline, streamline[::2]]
>>> new_streamlines = set_number_of_points(streamlines, 10)
>>> [len(s) for s in streamlines]
[100, 50]
>>> [len(s) for s in new_streamlines]
[10, 10]
ABCMeta
dipy.segment.clustering.
ABCMeta
Bases: type
Metaclass for defining Abstract Base Classes (ABCs).
Use this metaclass to create an ABC. An ABC can be subclassed directly, and then acts as a mixin class. You can also register unrelated concrete classes (even builtin classes) and unrelated ABCs as ‘virtual subclasses’ – these and their descendants will be considered subclasses of the registering ABC by the builtin issubclass() function, but the registering ABC won’t show up in their MRO (Method Resolution Order) nor will method implementations defined by the registering ABC be callable (not even via super()).
Methods

Call self as a function. 

Return a type’s method resolution order. 

Register a virtual subclass of an ABC. 
AveragePointwiseEuclideanMetric
dipy.segment.clustering.
AveragePointwiseEuclideanMetric
Bases: dipy.segment.metricspeed.SumPointwiseEuclideanMetric
Computes the average of pointwise Euclidean distances between two sequential data.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between the features, rather than directly between the sequential data.
It is used to extract features before computing the distance.
Notes
The distance between two 2D sequential data s1 and s2, each with three points, is equal to \((a+b+c)/3\) where \(a\) is the Euclidean distance between s1[0] and s2[0], \(b\) between s1[1] and s2[1] and \(c\) between s1[2] and s2[2].
feature
Feature object used to extract features from sequential data
is_order_invariant
Is this metric invariant to the sequence’s ordering
Methods

Checks if features can be used by metric.dist based on their shape. 

Computes a distance between two data points based on their features. 
Cluster
dipy.segment.clustering.
Cluster
(id=0, indices=None, refdata=<dipy.segment.clustering.Identity object>)Bases: object
Provides functionalities for interacting with a cluster.
Useful container to retrieve index of elements grouped together. If a reference to the data is provided to cluster_map, elements will be returned instead of their index when possible.
Reference to the set of clusters this cluster is being part of.
Id of this cluster in its associated cluster_map object.
Actual elements that clustered indices refer to.
Notes
A cluster does not contain actual data but instead knows how to retrieve them using its ClusterMap object.
Methods

Assigns indices to this cluster. 
ClusterCentroid
dipy.segment.clustering.
ClusterCentroid
(centroid, id=0, indices=None, refdata=<dipy.segment.clustering.Identity object>)Bases: dipy.segment.clustering.Cluster
Provides functionalities for interacting with a cluster.
Useful container to retrieve the indices of elements grouped together and the cluster’s centroid. If a reference to the data is provided to cluster_map, elements will be returned instead of their index when possible.
Reference to the set of clusters this cluster is being part of.
Id of this cluster in its associated cluster_map object.
Actual elements that clustered indices refer to.
Notes
A cluster does not contain actual data but instead knows how to retrieve them using its ClusterMapCentroid object.
Methods

Assigns a data point to this cluster. 

Update centroid of this cluster. 
__init__
(self, centroid, id=0, indices=None, refdata=<dipy.segment.clustering.Identity object>)Initialize self. See help(type(self)) for accurate signature.
ClusterMap
dipy.segment.clustering.
ClusterMap
(refdata=<dipy.segment.clustering.Identity object>)Bases: object
Provides functionalities for interacting with clustering outputs.
Useful container to create, remove, retrieve and filter clusters. If refdata is given, elements will be returned instead of their index when using Cluster objects.
Actual elements that clustered indices refer to.
Methods

Adds one or multiple clusters to this cluster map. 

Remove all clusters from this cluster map. 

Gets the size of every cluster contained in this cluster map. 

Gets clusters which contains at least min_size elements. 

Gets clusters which contains at most max_size elements. 

Remove one or multiple clusters from this cluster map. 

Gets number of clusters contained in this cluster map. 
__init__
(self, refdata=<dipy.segment.clustering.Identity object>)Initialize self. See help(type(self)) for accurate signature.
add_cluster
(self, *clusters)Adds one or multiple clusters to this cluster map.
Cluster(s) to be added in this cluster map.
clusters_sizes
(self)Gets the size of every cluster contained in this cluster map.
Sizes of every cluster in this cluster map.
get_large_clusters
(self, min_size)Gets clusters which contains at least min_size elements.
Minimum number of elements a cluster needs to have to be selected.
Clusters having at least min_size elements.
get_small_clusters
(self, max_size)Gets clusters which contains at most max_size elements.
Maximum number of elements a cluster can have to be selected.
Clusters having at most max_size elements.
ClusterMapCentroid
dipy.segment.clustering.
ClusterMapCentroid
(refdata=<dipy.segment.clustering.Identity object>)Bases: dipy.segment.clustering.ClusterMap
Provides functionalities for interacting with clustering outputs that have centroids.
Allows easy retrieval of the centroid of every cluster. Also, it is a useful container to create, remove, retrieve and filter clusters. If refdata is given, elements will be returned instead of their index when using ClusterCentroid objects.
Actual elements that clustered indices refer to.
Methods

Adds one or multiple clusters to this cluster map. 

Remove all clusters from this cluster map. 

Gets the size of every cluster contained in this cluster map. 

Gets clusters which contains at least min_size elements. 

Gets clusters which contains at most max_size elements. 

Remove one or multiple clusters from this cluster map. 

Gets number of clusters contained in this cluster map. 
Clustering
dipy.segment.clustering.
Clustering
Bases: object
Methods

Clusters data. 
cluster
(self, data, ordering=None)Clusters data.
Subclasses will perform their clustering algorithm here.
Each array represents a data point.
Specifies the order in which data points will be clustered.
Result of the clustering.
Identity
dipy.segment.clustering.
Identity
Bases: object
Provides identity indexing functionality.
This can replace any class supporting indexing used for referencing (e.g. list, tuple). Indexing an instance of this class will return the index provided instead of the element. It does not support slicing.
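A minimal sketch of this identity-indexing idea (IdentitySketch is a hypothetical stand-in, not the actual DIPY class):

```python
# Indexing returns the index itself instead of an element,
# so it can stand in for a data container when no refdata is set.
class IdentitySketch:
    def __getitem__(self, idx):
        if isinstance(idx, slice):
            raise TypeError("slicing is not supported")
        return idx

refdata = IdentitySketch()
print(refdata[42])  # -> 42
```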
Metric
dipy.segment.clustering.
Metric
Bases: object
Computes a distance between two sequential data.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between extracted features, rather than directly between the sequential data.
It is used to extract features before computing the distance.
Notes
When subclassing Metric, one only needs to override the dist and are_compatible methods.
feature
Feature object used to extract features from sequential data
is_order_invariant
Is this metric invariant to the sequence’s ordering
Methods
Checks if features can be used by metric.dist based on their shape. 


Computes a distance between two data points based on their features. 
are_compatible
()Checks if features can be used by metric.dist based on their shape.
Basically this method exists so we don’t have to do this check inside the metric.dist function (speedup).
shape of the first data point’s features
shape of the second data point’s features
whether or not shapes are compatible
MinimumAverageDirectFlipMetric
dipy.segment.clustering.
MinimumAverageDirectFlipMetric
Bases: dipy.segment.metricspeed.AveragePointwiseEuclideanMetric
Computes the MDF distance (minimum average direct-flip) between two sequential data.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
Notes
The distance between two 2D sequential data:
s1       s2
0*   a   *0
  \       |
   \      |
   1*     |
    |  b  *1
    |      \
    2*      \
        c    *2
is equal to \(\min((a+b+c)/3, (a'+b'+c')/3)\) where \(a\) is the Euclidean distance between s1[0] and s2[0], \(b\) between s1[1] and s2[1], \(c\) between s1[2] and s2[2], \(a'\) between s1[0] and s2[2], \(b'\) between s1[1] and s2[1] and \(c'\) between s1[2] and s2[0].
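The direct/flipped minimum can be sketched in plain NumPy (an illustration of the formula above, not DIPY's implementation; mdf_sketch is a hypothetical helper):

```python
import numpy as np

def mdf_sketch(s1, s2):
    """Minimum average direct-flip distance between two streamlines
    that already have the same number of points."""
    direct = np.linalg.norm(s1 - s2, axis=1).mean()         # (a + b + c) / 3
    flipped = np.linalg.norm(s1 - s2[::-1], axis=1).mean()  # (a' + b' + c') / 3
    return min(direct, flipped)

s1 = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
s2 = s1[::-1]  # the same curve, traversed in the opposite direction
print(mdf_sketch(s1, s2))  # -> 0.0, i.e. the metric is flip-invariant
```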
feature
Feature object used to extract features from sequential data
is_order_invariant
Is this metric invariant to the sequence’s ordering
Methods

Checks if features can be used by metric.dist based on their shape. 

Computes a distance between two data points based on their features. 
QuickBundles
dipy.segment.clustering.
QuickBundles
(threshold, metric='MDF_12points', max_nb_clusters=2147483647)Bases: dipy.segment.clustering.Clustering
Clusters streamlines using QuickBundles [Garyfallidis12].
Given a list of streamlines, the QuickBundles algorithm sequentially assigns each streamline to its closest bundle in \(\mathcal{O}(Nk)\) where \(N\) is the number of streamlines and \(k\) is the final number of bundles. If for a given streamline its closest bundle is farther than threshold, a new bundle is created and the streamline is assigned to it except if the number of bundles has already exceeded max_nb_clusters.
The maximum distance from a bundle for a streamline to be still considered as part of it.
The distance metric to use when comparing two streamlines. By default, the Minimum average Direct-Flip (MDF) distance [Garyfallidis12] is used and streamlines are automatically resampled so they have 12 points.
Limits the creation of bundles.
References
Garyfallidis E. et al., QuickBundles, a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.
Examples
>>> from dipy.segment.clustering import QuickBundles
>>> from dipy.data import get_fnames
>>> from dipy.io.streamline import load_tractogram
>>> from dipy.tracking.streamline import Streamlines
>>> fname = get_fnames('fornix')
>>> fornix = load_tractogram(fname, 'same',
... bbox_valid_check=False).streamlines
>>> streamlines = Streamlines(fornix)
>>> # Segment fornix with a threshold of 10mm and streamlines resampled
>>> # to 12 points.
>>> qb = QuickBundles(threshold=10.)
>>> clusters = qb.cluster(streamlines)
>>> len(clusters)
4
>>> list(map(len, clusters))
[61, 191, 47, 1]
>>> # Resampling streamlines differently is done explicitly as follows.
>>> # Note this has an impact on the speed and the accuracy (tradeoff).
>>> from dipy.segment.metric import ResampleFeature
>>> from dipy.segment.metric import AveragePointwiseEuclideanMetric
>>> feature = ResampleFeature(nb_points=2)
>>> metric = AveragePointwiseEuclideanMetric(feature)
>>> qb = QuickBundles(threshold=10., metric=metric)
>>> clusters = qb.cluster(streamlines)
>>> len(clusters)
4
>>> list(map(len, clusters))
[58, 142, 72, 28]
Methods

Clusters streamlines into bundles. 
__init__
(self, threshold, metric='MDF_12points', max_nb_clusters=2147483647)Initialize self. See help(type(self)) for accurate signature.
cluster
(self, streamlines, ordering=None)Clusters streamlines into bundles.
Performs quickbundles algorithm using predefined metric and threshold.
Each 2D array represents a sequence of 3D points (points, 3).
Specifies the order in which data points will be clustered.
Result of the clustering.
QuickBundlesX
dipy.segment.clustering.
QuickBundlesX
(thresholds, metric='MDF_12points')Bases: dipy.segment.clustering.Clustering
Clusters streamlines using QuickBundlesX.
Thresholds to use for each clustering layer. A threshold represents the maximum distance from a cluster for a streamline to be still considered as part of it.
The distance metric to use when comparing two streamlines. By default, the Minimum average Direct-Flip (MDF) distance [Garyfallidis12] is used and streamlines are automatically resampled so they have 12 points.
References
Garyfallidis E. et al., QuickBundles, a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.
Garyfallidis E. et al., QuickBundlesX: Sequential clustering of millions of streamlines in multiple levels of detail at record execution time. Proceedings of the International Society of Magnetic Resonance in Medicine (ISMRM), Singapore, 4187, 2016.
Methods

Clusters streamlines into bundles. 
__init__
(self, thresholds, metric='MDF_12points')Initialize self. See help(type(self)) for accurate signature.
cluster
(self, streamlines, ordering=None)Clusters streamlines into bundles.
Performs QuickBundlesX using a predefined metric and thresholds.
Each 2D array represents a sequence of 3D points (points, 3).
Specifies the order in which data points will be clustered.
Result of the clustering.
ResampleFeature
dipy.segment.clustering.
ResampleFeature
Bases: dipy.segment.featurespeed.CythonFeature
Extracts features from a sequential datum.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
The features being extracted are the points of the sequence once resampled. This is useful for metrics requiring a constant number of points for all streamlines.
is_order_invariant
Is this feature invariant to the sequence’s ordering
Methods

Extracts features from a sequential datum. 

Infers the shape of features extracted from a sequential datum. 
TreeCluster
dipy.segment.clustering.
TreeCluster
(threshold, centroid, indices=None)Bases: dipy.segment.clustering.ClusterCentroid
Methods

Assigns a data point to this cluster. 

Update centroid of this cluster. 
add 
TreeClusterMap
dipy.segment.clustering.
TreeClusterMap
(root)Bases: dipy.segment.clustering.ClusterMap
Methods

Adds one or multiple clusters to this cluster map. 

Remove all clusters from this cluster map. 

Gets the size of every cluster contained in this cluster map. 

Gets clusters which contains at least min_size elements. 

Gets clusters which contains at most max_size elements. 

Remove one or multiple clusters from this cluster map. 

Gets number of clusters contained in this cluster map. 
get_clusters 

iter_preorder 

traverse_postorder 
dipy.segment.clustering.
abstractmethod
(funcobj)A decorator indicating abstract methods.
Requires that the metaclass is ABCMeta or derived from it. A class that has a metaclass derived from ABCMeta cannot be instantiated unless all of its abstract methods are overridden. The abstract methods can be called using any of the normal ‘super’ call mechanisms.
Usage:

class C(metaclass=ABCMeta):
    @abstractmethod
    def my_abstract_method(self, ...):
        ...
dipy.segment.clustering.
qbx_and_merge
(streamlines, thresholds, nb_pts=20, select_randomly=None, rng=None, verbose=False)Run QuickBundlesX and then run QuickBundles again on the centroids of the last layer.
Running QuickBundles again at that layer has the effect of merging clusters that may originally have been divided because of branching. This function helps obtain a result of QuickBundles quality at QuickBundlesX speed. The merging phase has low cost because it is applied only to the centroids rather than the entire dataset.
List of distance thresholds for QuickBundlesX.
Number of points for discretizing each streamline
Randomly select a specific number of streamlines. If None all the streamlines are used.
If None then RandomState is initialized internally.
If True, log information. Default False.
Contains the clusters of the last layer of QuickBundlesX after merging.
References
Garyfallidis E. et al., QuickBundles, a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.
Garyfallidis E. et al., QuickBundlesX: Sequential clustering of millions of streamlines in multiple levels of detail at record execution time. Proceedings of the International Society of Magnetic Resonance in Medicine (ISMRM), Singapore, 4187, 2016.
dipy.segment.clustering.
set_number_of_points
()Change the number of points of streamlines (either by downsampling or upsampling) in order to obtain nb_points-1 segments of equal length. Points of streamlines will be modified along the curve.
If ndarray, must have shape (N, 3) where N is the number of points of the streamline. If list, each item must be an ndarray of shape (Ni, 3) where Ni is the number of points of streamline i. If dipy.tracking.Streamlines, its common_shape must be 3.
integer representing number of points wanted along the curve.
dipy.tracking.Streamlines
Results of the downsampling or upsampling process.
Examples
>>> from dipy.tracking.streamline import set_number_of_points
>>> import numpy as np
One streamline, a semicircle:
>>> theta = np.pi*np.linspace(0, 1, 100)
>>> x = np.cos(theta)
>>> y = np.sin(theta)
>>> z = 0 * x
>>> streamline = np.vstack((x, y, z)).T
>>> modified_streamline = set_number_of_points(streamline, 3)
>>> len(modified_streamline)
3
Multiple streamlines:
>>> streamlines = [streamline, streamline[::2]]
>>> new_streamlines = set_number_of_points(streamlines, 10)
>>> [len(s) for s in streamlines]
[100, 50]
>>> [len(s) for s in new_streamlines]
[10, 10]
dipy.segment.mask.
applymask
(vol, mask)Mask vol with mask.
Array with \(V\) dimensions.
Binary mask. Has \(M\) dimensions where \(M <= V\). When \(M < V\), we append \(V - M\) dimensions with axis length 1 to mask so that mask will broadcast against vol. In the typical case vol can be 4D, mask can be 3D, and we append a 1 to the mask shape, which (via numpy broadcasting) has the effect of applying the 3D mask to each 3D slice in vol (vol[..., 0] to vol[..., -1]).
vol multiplied by mask, where mask may have been extended to match extra dimensions in vol.
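The broadcasting trick described above, sketched in plain NumPy (an illustration, not DIPY's applymask itself):

```python
import numpy as np

# 4D volume (e.g. DWI: x, y, z, gradient) and a 3D binary mask.
vol = np.ones((4, 4, 4, 2))
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True

# Append a trailing axis of length 1 so the mask broadcasts over the
# last (gradient) dimension, masking each 3D slice of vol.
out = vol * mask[..., None]
print(out.sum())  # 8 masked voxels * 2 gradient volumes = 16.0
```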
dipy.segment.mask.
binary_dilation
(input, structure=None, iterations=1, mask=None, output=None, border_value=0, origin=0, brute_force=False)Multidimensional binary dilation with the given structuring element.
Binary array_like to be dilated. Nonzero (True) elements form the subset to be dilated.
Structuring element used for the dilation. Nonzero elements are considered True. If no structuring element is provided an element is generated with a square connectivity equal to one.
The dilation is repeated iterations times (one, by default). If iterations is less than 1, the dilation is repeated until the result does not change anymore. Only an integer number of iterations is accepted.
If a mask is given, only those elements with a True value at the corresponding mask element are modified at each iteration.
Array of the same shape as input, into which the output is placed. By default, a new array is created.
Value at the border in the output array.
Placement of the filter, by default 0.
Memory condition: if False, only the pixels whose value was changed in the last iteration are tracked as candidates to be updated (dilated) in the current iteration; if True all pixels are considered as candidates for dilation, regardless of what happened in the previous iteration. False by default.
Dilation of the input by the structuring element.
See also
grey_dilation
, binary_erosion
, binary_closing
, binary_opening
generate_binary_structure
Notes
Dilation [1] is a mathematical morphology operation [2] that uses a structuring element for expanding the shapes in an image. The binary dilation of an image by a structuring element is the locus of the points covered by the structuring element, when its center lies within the nonzero points of the image.
References
Examples
>>> from scipy import ndimage
>>> a = np.zeros((5, 5))
>>> a[2, 2] = 1
>>> a
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])
>>> ndimage.binary_dilation(a)
array([[False, False, False, False, False],
[False, False, True, False, False],
[False, True, True, True, False],
[False, False, True, False, False],
[False, False, False, False, False]], dtype=bool)
>>> ndimage.binary_dilation(a).astype(a.dtype)
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 0.]])
>>> # 3x3 structuring element with connectivity 1, used by default
>>> struct1 = ndimage.generate_binary_structure(2, 1)
>>> struct1
array([[False, True, False],
[ True, True, True],
[False, True, False]], dtype=bool)
>>> # 3x3 structuring element with connectivity 2
>>> struct2 = ndimage.generate_binary_structure(2, 2)
>>> struct2
array([[ True, True, True],
[ True, True, True],
[ True, True, True]], dtype=bool)
>>> ndimage.binary_dilation(a, structure=struct1).astype(a.dtype)
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 0.]])
>>> ndimage.binary_dilation(a, structure=struct2).astype(a.dtype)
array([[ 0., 0., 0., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 0., 0., 0.]])
>>> ndimage.binary_dilation(a, structure=struct1,\
... iterations=2).astype(a.dtype)
array([[ 0., 0., 1., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 1., 0., 0.]])
dipy.segment.mask.
color_fa
(fa, evecs)Color fractional anisotropy of diffusion tensor
Array of the fractional anisotropy (can be 1D, 2D or 3D)
eigen vectors from the tensor model
Colormap of the FA with red for the x value, y for the green value and z for the blue value.
\(rgb = abs(max(\vec{e})) \times fa\)
dipy.segment.mask.
fractional_anisotropy
(evals, axis=1)Fractional anisotropy (FA) of a diffusion tensor.
Eigenvalues of a diffusion tensor.
Axis of evals which contains 3 eigenvalues.
Calculated FA. Range is 0 <= FA <= 1.
Notes
FA is calculated using the following equation:
\[FA = \sqrt{\frac{1}{2}} \frac{\sqrt{(\lambda_1-\lambda_2)^2 + (\lambda_2-\lambda_3)^2 + (\lambda_3-\lambda_1)^2}}{\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}\]
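The standard FA definition can be computed directly with NumPy (a sketch, not DIPY's implementation; fa_sketch is a hypothetical helper):

```python
import numpy as np

def fa_sketch(evals):
    """FA from the three eigenvalues of a diffusion tensor,
    following the standard definition."""
    l1, l2, l3 = evals
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(0.5 * num / den)

print(fa_sketch([1.0, 1.0, 1.0]))  # isotropic tensor -> 0.0
print(fa_sketch([1.0, 0.0, 0.0]))  # fully anisotropic -> 1.0
```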
dipy.segment.mask.
generate_binary_structure
(rank, connectivity)Generate a binary structure for binary morphological operations.
Number of dimensions of the array to which the structuring element will be applied, as returned by np.ndim.
connectivity determines which elements of the output array belong to the structure, i.e. are considered as neighbors of the central element. Elements up to a squared distance of connectivity from the center are considered neighbors. connectivity may range from 1 (no diagonal elements are neighbors) to rank (all elements are neighbors).
Structuring element which may be used for binary morphological operations, with rank dimensions and all dimensions equal to 3.
See also
iterate_structure
, binary_dilation
, binary_erosion
Notes
generate_binary_structure can only create structuring elements with dimensions equal to 3, i.e. minimal dimensions. For larger structuring elements, that are useful e.g. for eroding large objects, one may either use iterate_structure, or create directly custom arrays with numpy functions such as numpy.ones.
Examples
>>> from scipy import ndimage
>>> struct = ndimage.generate_binary_structure(2, 1)
>>> struct
array([[False, True, False],
[ True, True, True],
[False, True, False]], dtype=bool)
>>> a = np.zeros((5,5))
>>> a[2, 2] = 1
>>> a
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])
>>> b = ndimage.binary_dilation(a, structure=struct).astype(a.dtype)
>>> b
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 0.]])
>>> ndimage.binary_dilation(b, structure=struct).astype(a.dtype)
array([[ 0., 0., 1., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 1., 0., 0.]])
>>> struct = ndimage.generate_binary_structure(2, 2)
>>> struct
array([[ True, True, True],
[ True, True, True],
[ True, True, True]], dtype=bool)
>>> struct = ndimage.generate_binary_structure(3, 1)
>>> struct # no diagonal elements
array([[[False, False, False],
[False, True, False],
[False, False, False]],
[[False, True, False],
[ True, True, True],
[False, True, False]],
[[False, False, False],
[False, True, False],
[False, False, False]]], dtype=bool)
dipy.segment.mask.
median_filter
(input, size=None, footprint=None, output=None, mode='reflect', cval=0.0, origin=0)Calculate a multidimensional median filter.
The input array.
See footprint, below. Ignored if footprint is given.
Either size or footprint must be defined. size gives the shape that is taken from the input array, at every element position, to define the input to the filter function. footprint is a boolean array that specifies (implicitly) a shape, but also which of the elements within this shape will get passed to the filter function. Thus size=(n, m) is equivalent to footprint=np.ones((n, m)). We adjust size to the number of dimensions of the input array, so that, if the input array is shape (10, 10, 10), and size is 2, then the actual size used is (2, 2, 2). When footprint is given, size is ignored.
The array in which to place the output, or the dtype of the returned array. By default an array of the same dtype as input will be created.
The mode parameter determines how the input array is extended when the filter overlaps a border. By passing a sequence of modes with length equal to the number of dimensions of the input array, different modes can be specified along each axis. Default value is ‘reflect’. The valid values and their behavior is as follows:
‘reflect’: The input is extended by reflecting about the edge of the last pixel.
‘constant’: The input is extended by filling all values beyond the edge with the same constant value, defined by the cval parameter.
‘nearest’: The input is extended by replicating the last pixel.
‘mirror’: The input is extended by reflecting about the center of the last pixel.
‘wrap’: The input is extended by wrapping around to the opposite edge.
Value to fill past edges of input if mode is ‘constant’. Default is 0.0.
Controls the placement of the filter on the input array’s pixels. A value of 0 (the default) centers the filter over the pixel, with positive values shifting the filter to the left, and negative ones to the right. By passing a sequence of origins with length equal to the number of dimensions of the input array, different shifts can be specified along each axis.
Filtered array. Has the same shape as input.
Examples
>>> from scipy import ndimage, misc
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> plt.gray() # show the filtered result in grayscale
>>> ax1 = fig.add_subplot(121) # left side
>>> ax2 = fig.add_subplot(122) # right side
>>> ascent = misc.ascent()
>>> result = ndimage.median_filter(ascent, size=20)
>>> ax1.imshow(ascent)
>>> ax2.imshow(result)
>>> plt.show()
dipy.segment.mask.
median_otsu
(input_volume, vol_idx=None, median_radius=4, numpass=4, autocrop=False, dilate=None)Simple brain extraction tool method for images from DWI data.
It uses a median filter smoothing of the input_volumes vol_idx and an automatic histogram Otsu thresholding technique, hence the name median_otsu.
This function is inspired by MRtrix's bet, which has default values median_radius=3, numpass=2. However, from tests on multiple 1.5T and 3T data from GE, Philips, Siemens, the most robust choice is median_radius=4, numpass=4.
3D or 4D array of the brain volume.
1D array representing indices of axis=3 of a 4D input_volume. None is only an acceptable input if input_volume is 3D.
Radius (in voxels) of the applied median filter (default: 4).
Number of pass of the median filter (default: 4).
if True, the masked input_volume will also be cropped using the bounding box defined by the masked data. Should be on if DWI is upsampled to 1x1x1 resolution. (default: False).
number of iterations for binary dilation
Masked input_volume
The binary brain mask
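The median-filter-plus-Otsu pipeline described above can be sketched in pure NumPy/SciPy (an illustration of the idea, not DIPY's implementation; otsu_threshold is a hypothetical helper):

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(image, nbins=256):
    """Histogram-based Otsu threshold: pick the bin that maximises
    the between-class variance."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    w1 = np.cumsum(hist)                    # weight of the lower class
    w2 = np.cumsum(hist[::-1])[::-1]        # weight of the upper class
    m1 = np.cumsum(hist * centers) / w1     # mean of the lower class
    m2 = (np.cumsum((hist * centers)[::-1]) / np.cumsum(hist[::-1]))[::-1]
    var_between = w1[:-1] * w2[1:] * (m1[:-1] - m2[1:]) ** 2
    return centers[np.argmax(var_between)]

# A toy "brain": a bright cube inside a dark, noisy background.
rng = np.random.default_rng(0)
vol = rng.normal(0.1, 0.02, (20, 20, 20))
vol[5:15, 5:15, 5:15] += 0.8

smoothed = ndimage.median_filter(vol, size=3)   # median smoothing pass
mask = smoothed > otsu_threshold(smoothed)      # Otsu thresholding
print(mask.sum())  # roughly the 1000 bright voxels
```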
Notes
Copyright (C) 2011, the scikit-image team. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of skimage nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS’’ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
dipy.segment.mask.
multi_median
(input, median_radius, numpass)Applies median filter multiple times on input data.
The input volume to apply filter on.
Radius (in voxels) of the applied median filter
Number of pass of the median filter
Filtered input volume.
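The repeated-median-filter idea can be sketched with SciPy (a sketch of the description above, not DIPY's code; mapping median_radius to a filter width of 2 * radius + 1 voxels is an assumption):

```python
import numpy as np
from scipy import ndimage

def multi_median_sketch(data, median_radius, numpass):
    """Apply a median filter numpass times."""
    size = median_radius * 2 + 1  # assumed filter width in voxels
    for _ in range(numpass):
        data = ndimage.median_filter(data, size=size)
    return data

vol = np.random.default_rng(42).random((10, 10, 10))
smoothed = multi_median_sketch(vol, median_radius=2, numpass=3)
print(smoothed.shape)  # (10, 10, 10)
```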
dipy.segment.mask.
otsu
(image, nbins=256)Return threshold value based on Otsu’s method.
Grayscale input image.
Number of bins used to calculate histogram. This value is ignored for integer arrays.
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.
If image only contains a single grayscale value.
Notes
The input image must be grayscale.
References
Wikipedia, https://en.wikipedia.org/wiki/Otsu’s_Method
Examples
>>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_otsu(image)
>>> binary = image <= thresh
dipy.segment.mask.
segment_from_cfa
(tensor_fit, roi, threshold, return_cfa=False)Segment the cfa inside roi using the values from threshold as bounds.
TensorFit object
A binary mask, which contains the bounding box for the segmentation.
An iterable that defines the min and max values to use for the thresholding. The values are specified as (R_min, R_max, G_min, G_max, B_min, B_max)
If True, the cfa is also returned.
Binary mask of the segmentation.
Array with shape = (…, 3), where … is the shape of tensor_fit. The color fractional anisotropy, ordered as a nd array with the last dimension of size 3 for the R, G and B channels.
ArcLengthFeature
dipy.segment.metric.
ArcLengthFeature
Bases: dipy.segment.featurespeed.CythonFeature
Extracts features from a sequential datum.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
The feature being extracted consists of one scalar representing the arc length of the sequence (i.e. the sum of the lengths of all segments).
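The arc-length feature amounts to summing segment lengths, which can be sketched in plain NumPy (an illustration, not DIPY's Cython feature):

```python
import numpy as np

# A streamline of 3 points forming two unit-length segments.
streamline = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.]])
segment_lengths = np.linalg.norm(np.diff(streamline, axis=0), axis=1)
arc_length = segment_lengths.sum()  # sum of the lengths of all segments
print(arc_length)  # -> 2.0
```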
is_order_invariant
Is this feature invariant to the sequence’s ordering
Methods

Extracts features from a sequential datum. 

Infers the shape of features extracted from a sequential datum. 
AveragePointwiseEuclideanMetric
dipy.segment.metric.
AveragePointwiseEuclideanMetric
Bases: dipy.segment.metricspeed.SumPointwiseEuclideanMetric
Computes the average of pointwise Euclidean distances between two sequential data.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between the features, rather than directly between the sequential data.
It is used to extract features before computing the distance.
Notes
The distance between two 2D sequential data:
s1       s2
0*   a   *0
  \       |
   \      |
   1*     |
    |  b  *1
    |      \
    2*      \
        c    *2
is equal to \((a+b+c)/3\) where \(a\) is the Euclidean distance between s1[0] and s2[0], \(b\) between s1[1] and s2[1] and \(c\) between s1[2] and s2[2].
feature
Feature object used to extract features from sequential data
is_order_invariant
Is this metric invariant to the sequence’s ordering
Methods

Checks if features can be used by metric.dist based on their shape. 

Computes a distance between two data points based on their features. 
CenterOfMassFeature
dipy.segment.metric.
CenterOfMassFeature
Bases: dipy.segment.featurespeed.CythonFeature
Extracts features from a sequential datum.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
The feature being extracted consists of one N-dimensional point representing the mean of the points, i.e. the center of mass.
is_order_invariant
Is this feature invariant to the sequence’s ordering
Methods

Extracts features from a sequential datum. 

Infers the shape of features extracted from a sequential datum. 
CosineMetric
dipy.segment.metric.
CosineMetric
Bases: dipy.segment.metricspeed.CythonMetric
Computes the cosine distance between two vectors.
A vector (i.e. an N-dimensional point) is represented as a 2D array with shape (1, nb_dimensions).
Notes
The distance between two vectors \(v_1\) and \(v_2\) is equal to \(\frac{1}{\pi} \arccos\left(\frac{v_1 \cdot v_2}{\|v_1\| \|v_2\|}\right)\) and is bounded within \([0,1]\).
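The cosine distance can be sketched in plain NumPy (an illustration of the formula, not DIPY's implementation; cosine_dist_sketch is a hypothetical helper):

```python
import numpy as np

def cosine_dist_sketch(v1, v2):
    """Cosine distance bounded in [0, 1]: arccos of the normalised
    dot product, divided by pi."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

print(cosine_dist_sketch(np.array([1., 0.]), np.array([1., 0.])))   # -> 0.0
print(cosine_dist_sketch(np.array([1., 0.]), np.array([-1., 0.])))  # -> 1.0
```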
feature
Feature object used to extract features from sequential data
is_order_invariant
Is this metric invariant to the sequence’s ordering
Methods

Checks if features can be used by metric.dist based on their shape. 

Computes a distance between two data points based on their features. 
Feature
dipy.segment.metric.
Feature
Bases: object
Extracts features from a sequential datum.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
tells if this feature is invariant to the sequence’s ordering. This means starting from either extremity produces the same features. (Default: True)
Notes
When subclassing Feature, one only needs to override the extract and infer_shape methods.
is_order_invariant
Is this feature invariant to the sequence’s ordering
Methods

Extracts features from a sequential datum. 
Infers the shape of features extracted from a sequential datum. 
extract
(self, datum) Extracts features from a sequential datum.
Sequence of N-dimensional points.
Features extracted from datum.
IdentityFeature
dipy.segment.metric.
IdentityFeature
Bases: dipy.segment.featurespeed.CythonFeature
Extracts features from a sequential datum.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
The features being extracted are the actual sequence's points. This is useful for metrics that do not require any preprocessing.
is_order_invariant
Is this feature invariant to the sequence’s ordering
Methods

Extracts features from a sequential datum. 

Infers the shape of features extracted from a sequential datum. 
Metric
dipy.segment.metric.
Metric
Bases: object
Computes a distance between two sequential data.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between extracted features, rather than directly between the sequential data.
It is used to extract features before computing the distance.
Notes
When subclassing Metric, one only needs to override the dist and are_compatible methods.
feature
Feature object used to extract features from sequential data
is_order_invariant
Is this metric invariant to the sequence’s ordering
Methods
Checks if features can be used by metric.dist based on their shape. 


Computes a distance between two data points based on their features. 
are_compatible
(self, shape1, shape2) Checks if features can be used by metric.dist based on their shape.
Basically this method exists so we don’t have to do this check inside the metric.dist function (speedup).
shape of the first data point’s features
shape of the second data point’s features
whether or not shapes are compatible
MidpointFeature
dipy.segment.metric.
MidpointFeature
Bases: dipy.segment.featurespeed.CythonFeature
Extracts features from a sequential datum.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
The feature being extracted consists of one N-dimensional point representing the middle point of the sequence (i.e. the `nb_points//2`-th point).
is_order_invariant
Is this feature invariant to the sequence’s ordering
Methods

Extracts features from a sequential datum. 

Infers the shape of features extracted from a sequential datum. 
MinimumAverageDirectFlipMetric
dipy.segment.metric.
MinimumAverageDirectFlipMetric
Bases: dipy.segment.metricspeed.AveragePointwiseEuclideanMetric
Computes the MDF distance (minimum average direct-flip) between two sequential data.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
Notes
The distance between two 2D sequential data:
s1       s2

0*   a   *0
  \       |
   \      |
   1*     |
    |  b  *1
    |      \
    2*      \
        c    *2
is equal to \(\min((a+b+c)/3, (a'+b'+c')/3)\) where \(a\) is the Euclidean distance between s1[0] and s2[0], \(b\) between s1[1] and s2[1], \(c\) between s1[2] and s2[2], \(a'\) between s1[0] and s2[2], \(b'\) between s1[1] and s2[1] and \(c'\) between s1[2] and s2[0].
feature
Feature object used to extract features from sequential data
is_order_invariant
Is this metric invariant to the sequence’s ordering
Methods

Checks if features can be used by metric.dist based on their shape. 

Computes a distance between two data points based on their features. 
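The \(\min((a+b+c)/3, (a'+b'+c')/3)\) formula above can be sketched as follows (a hand-rolled illustration of the formula, not the DIPY Cython implementation; the helper name is hypothetical):

```python
import numpy as np

# Hypothetical sketch of the MDF formula: the smaller of the direct
# average pointwise distance and the flipped one (s2 reversed).
def mdf_sketch(s1, s2):
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    direct = np.linalg.norm(s1 - s2, axis=1).mean()
    flipped = np.linalg.norm(s1 - s2[::-1], axis=1).mean()
    return min(direct, flipped)

s1 = [[0, 0], [1, 0], [2, 0]]
s2 = s1[::-1]  # same path, opposite orientation
print(mdf_sketch(s1, s2))  # 0.0: MDF ignores streamline orientation
```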
ResampleFeature
dipy.segment.metric.
ResampleFeature
Bases: dipy.segment.featurespeed.CythonFeature
Extracts features from a sequential datum.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
The features being extracted are the points of the sequence once resampled. This is useful for metrics requiring a constant number of points for all streamlines.
is_order_invariant
Is this feature invariant to the sequence’s ordering
Methods

Extracts features from a sequential datum. 

Infers the shape of features extracted from a sequential datum. 
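Resampling a sequence to a fixed number of points can be sketched with linear interpolation along the cumulative arc length. This is a simplified stand-in for what a resampling feature does, not DIPY's implementation:

```python
import numpy as np

# Sketch of resampling a polyline to nb_points by linear interpolation
# along its cumulative arc length (simplified illustration only).
def resample(datum, nb_points):
    datum = np.asarray(datum, dtype=float)
    seg_lengths = np.linalg.norm(np.diff(datum, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg_lengths)])  # arc length at each point
    t_new = np.linspace(0.0, t[-1], nb_points)
    return np.column_stack(
        [np.interp(t_new, t, datum[:, d]) for d in range(datum.shape[1])]
    )

line = [[0.0, 0.0], [4.0, 0.0]]
print(resample(line, 5))  # x coordinates 0, 1, 2, 3, 4 along the segment
```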
SumPointwiseEuclideanMetric
dipy.segment.metric.
SumPointwiseEuclideanMetric
Bases: dipy.segment.metricspeed.CythonMetric
Computes the sum of pointwise Euclidean distances between two sequential data.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A feature object can be specified in order to calculate the distance between the features, rather than directly between the sequential data.
It is used to extract features before computing the distance.
Notes
The distance between two 2D sequential data:
s1       s2

0*   a   *0
  \       |
   \      |
   1*     |
    |  b  *1
    |      \
    2*      \
        c    *2
is equal to \(a+b+c\) where \(a\) is the Euclidean distance between s1[0] and s2[0], \(b\) between s1[1] and s2[1] and \(c\) between s1[2] and s2[2].
feature
Feature object used to extract features from sequential data
is_order_invariant
Is this metric invariant to the sequence’s ordering
Methods

Checks if features can be used by metric.dist based on their shape. 

Computes a distance between two data points based on their features. 
VectorOfEndpointsFeature
dipy.segment.metric.
VectorOfEndpointsFeature
Bases: dipy.segment.featurespeed.CythonFeature
Extracts features from a sequential datum.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
The feature being extracted consists of one vector in the N-dimensional space pointing from one endpoint of the sequence to the other (i.e. S[-1] - S[0]).
is_order_invariant
Is this feature invariant to the sequence’s ordering
Methods

Extracts features from a sequential datum. 

Infers the shape of features extracted from a sequential datum. 
dipy.segment.metric.
dist
(metric, datum1, datum2) Computes a distance between datum1 and datum2.
A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
Tells how to compute the distance between datum1 and datum2.
Sequence of N-dimensional points.
Sequence of N-dimensional points.
Distance between two data points.
dipy.segment.metric.
distance_matrix
(metric, data1, data2=None) Computes the distance matrix between two lists of sequential data.
The distance matrix is obtained by computing the pairwise distance of all tuples spawned by the Cartesian product of data1 with data2. If data2 is not provided, the Cartesian product of data1 with itself is used instead. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions).
Tells how to compute the distance between two sequential data.
List of sequences of N-dimensional points.
List of sequences of N-dimensional points.
Distance matrix.
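The Cartesian-product behaviour described above can be sketched naively (a hypothetical helper taking a plain distance callable, not the optimized DIPY function, which takes a Metric object):

```python
import numpy as np

# Naive sketch of a distance matrix: pairwise distances over the
# Cartesian product of data1 and data2 (data1 with itself if data2
# is omitted), as described above.
def distance_matrix_sketch(dist, data1, data2=None):
    data2 = data1 if data2 is None else data2
    return np.array([[dist(s1, s2) for s2 in data2] for s1 in data1])

def avg_dist(s1, s2):  # average pointwise Euclidean distance
    return np.linalg.norm(np.asarray(s1) - np.asarray(s2), axis=1).mean()

data = [np.zeros((3, 2)), np.ones((3, 2))]
print(distance_matrix_sketch(avg_dist, data))  # 2 x 2, zero diagonal
```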
dipy.segment.metric.
mdf
(s1, s2) Computes the MDF (Minimum average Direct-Flip) distance [Garyfallidis12] between two streamlines.
Streamlines must have the same number of points.
A streamline (sequence of N-dimensional points).
A streamline (sequence of N-dimensional points).
Distance between two streamlines.
References
dipy.segment.threshold.
otsu
(image, nbins=256) Return threshold value based on Otsu's method. Copied from scikit-image to remove the dependency.
Input image.
Number of bins used to calculate histogram. This value is ignored for integer arrays.
Threshold value.
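Otsu's method picks the threshold that maximizes the between-class variance of the two resulting intensity groups. A NumPy sketch of this idea (analogous to, but not identical with, the scikit-image code this function was copied from):

```python
import numpy as np

# Sketch of Otsu's method: for every candidate threshold, compute the
# weights and means of the two classes it induces, and return the bin
# center that maximizes the between-class variance.
def otsu_sketch(image, nbins=256):
    hist, bin_edges = np.histogram(np.ravel(image), bins=nbins)
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    hist = hist.astype(float)
    weight1 = np.cumsum(hist)                      # class 1 sizes
    weight2 = np.cumsum(hist[::-1])[::-1]          # class 2 sizes
    mean1 = np.cumsum(hist * centers) / weight1
    mean2 = (np.cumsum((hist * centers)[::-1]) / np.cumsum(hist[::-1]))[::-1]
    variance12 = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return centers[:-1][np.argmax(variance12)]

bimodal = np.concatenate([np.full(50, 1.0), np.full(50, 9.0)])
print(1.0 < otsu_sketch(bimodal) < 9.0)  # threshold falls between the modes
```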
dipy.segment.threshold.
upper_bound_by_percent
(data, percent=1) Find the upper bound for visualization of medical images.
Calculate the histogram of the image and scan it from right to left until reaching the bound that contains more than the given percentage of the image.
dipy.segment.threshold.
upper_bound_by_rate
(data, rate=0.05) Adjusts the upper intensity boundary using a rate.
It calculates the image intensity histogram and, based on the given rate, decides the upper-bound value for intensity normalization (the lower bound is usually 0). The rate is the ratio between the number of pixels in each bin and the bin with the highest pixel count.
Input intensity value data.
Threshold that determines whether a specific histogram bin should be counted in the normalization range.
the upper_bound value for normalization
ConstantObservationModel
dipy.segment.tissue.
ConstantObservationModel
Bases: object
Observation model assuming that the intensity of each class is constant. The model parameters are the means \(\mu_{k}\) and variances \(\sigma_{k}\) associated with each tissue class. According to this model, the observed intensity at voxel \(x\) is given by \(I(x) = \mu_{k} + \eta_{k}\) where \(k\) is the tissue class of voxel \(x\), and \(\eta_{k}\) is a Gaussian random variable with zero mean and variance \(\sigma_{k}^{2}\). The observation model is responsible for computing the negative log-likelihood of observing any given intensity \(z\) at each voxel \(x\) assuming the voxel belongs to each class \(k\). It also provides a default parameter initialization.
Methods

Initializes the means and variances uniformly 

Computes the Gaussian negative log-likelihood of each class at each voxel of image, assuming a Gaussian distribution with means and variances given by mu and sigmasq, respectively (constant models along the full volume). 

Conditional probability of the label given the image 

Mean and standard deviation for N desired tissue classes 

Updates the means and the variances in each iteration for all the labels. 

Updates the means and the variances in each iteration for all the labels. 
initialize_param_uniform
(self, image, nclasses) Initializes the means and variances uniformly.
The means are initialized uniformly along the dynamic range of image. The variances are set to 1 for all classes.
3D structural image
number of desired classes
1 x nclasses, mean for each class
1 x nclasses, standard deviation for each class. Set up to 1.0 for all classes.
negloglikelihood
(self, image, mu, sigmasq, nclasses) Computes the Gaussian negative log-likelihood of each class at each voxel of image, assuming a Gaussian distribution with means and variances given by mu and sigmasq, respectively (constant models along the full volume). The negative log-likelihood will be written in nloglike.
3D gray scale structural image
mean of each class
variance of each class
number of classes
4D negative log-likelihood for each class at each voxel
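The Gaussian negative log-likelihood described above can be sketched with broadcasting. This is a hypothetical, simplified stand-in for the class method, which writes into a preallocated array and handles edge cases:

```python
import numpy as np

# Sketch of the per-class Gaussian negative log-likelihood used by the
# constant observation model: for each voxel intensity z and class k,
# nloglike = (z - mu_k)^2 / (2 * sigmasq_k) + 0.5 * log(2 * pi * sigmasq_k).
def neg_log_likelihood(image, mu, sigmasq):
    image = np.asarray(image, dtype=float)[..., np.newaxis]  # add class axis
    mu = np.asarray(mu, dtype=float)
    sigmasq = np.asarray(sigmasq, dtype=float)
    return (image - mu) ** 2 / (2.0 * sigmasq) + 0.5 * np.log(2.0 * np.pi * sigmasq)

image = np.array([[[10.0, 50.0]]])  # tiny 3D "volume" with two voxels
nll = neg_log_likelihood(image, mu=[10.0, 50.0], sigmasq=[1.0, 1.0])
print(nll.shape)  # (1, 1, 2, 2): one value per voxel per class
```

The voxel with intensity 10 gets a much lower negative log-likelihood for the class with mean 10 than for the class with mean 50, which is what drives the segmentation.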
prob_image
(self, img, nclasses, mu, sigmasq, P_L_N) Conditional probability of the label given the image
3D structural grayscale image
number of tissue classes
1 x nclasses, current estimate of the mean of each tissue class
1 x nclasses, current estimate of the variance of each tissue class
4D probability map of the label given the neighborhood.
4D probability of the label given the input image
seg_stats
(self, input_image, seg_image, nclass) Mean and standard deviation for N desired tissue classes
3D structural image
3D segmented image
number of classes (3 in most cases)
1 x nclasses dimension Mean and standard deviation for each class
update_param
(self, image, P_L_Y, mu, nclasses) Updates the means and the variances in each iteration for all the labels. This is for equations 25 and 26 of Zhang et al., IEEE Trans. Med. Imag., vol. 20, no. 1, Jan. 2001.
3D structural grayscale image
4D probability map of the label given the input image computed by the expectation maximization (EM) algorithm
1 x nclasses, current estimate of the mean of each tissue class.
number of tissue classes
1 x nclasses, updated mean of each tissue class
1 x nclasses, updated variance of each tissue class
update_param_new
(self, image, P_L_Y, mu, nclasses) Updates the means and the variances in each iteration for all the labels. This is for equations 25 and 26 of the Zhang et al. paper.
3D structural grayscale image
4D probability map of the label given the input image computed by the expectation maximization (EM) algorithm
1 x nclasses, current estimate of the mean of each tissue class.
number of tissue classes
1 x nclasses, updated mean of each tissue class
1 x nclasses, updated variance of each tissue class
IteratedConditionalModes
dipy.segment.tissue.
IteratedConditionalModes
Bases: object
Methods

Executes one iteration of the ICM algorithm for MRF MAP estimation. 

Initializes the segmentation of an image with given 

Conditional probability of the label given the neighborhood (Equation 2.18 of the Stan Z. Li book). 
icm_ising
(self, nloglike, beta, seg) Executes one iteration of the ICM algorithm for MRF MAP estimation. The prior distribution of the MRF is a Gibbs distribution with the Potts/Ising model with parameter beta:
https://en.wikipedia.org/wiki/Potts_model
4D shape, nloglike[x,y,z,k] is the negative log likelihood of class k at voxel (x,y,z)
positive scalar, it is the parameter of the Potts/Ising model. Determines the smoothness of the output segmentation.
3D initial segmentation. This segmentation will change by one iteration of the ICM algorithm
3D final segmentation
3D final energy
initialize_maximum_likelihood
(self, nloglike) Initializes the segmentation of an image with the negative log-likelihood field given by nloglike. The class of each voxel is selected as the one with the minimum negative log-likelihood (i.e. the maximum-likelihood segmentation).
4D shape, nloglike[x, y, z, k] is the negative log-likelihood of class k at voxel (x, y, z)
3D initial segmentation
prob_neighborhood
(self, seg, beta, nclasses) Conditional probability of the label given the neighborhood. Equation 2.18 of the Stan Z. Li book (Stan Z. Li, Markov Random Field Modeling in Image Analysis, 3rd ed., Advances in Pattern Recognition Series, Springer, 2009).
3D tissue segmentation derived from the ICM model
Scalar that determines the importance of the neighborhood and the spatial smoothness of the segmentation. Usually between 0 and 0.5.
number of tissue classes
4D probability map of the label given the neighborhood of the voxel.
TissueClassifierHMRF
dipy.segment.tissue.
TissueClassifierHMRF
(save_history=False, verbose=True) Bases: object
This class contains the methods for tissue classification using the Markov Random Fields modeling approach.
Methods

This method uses the Maximum a Posteriori - Markov Random Field (MAP-MRF) approach for segmentation, using Iterated Conditional Modes and Expectation Maximization to estimate the parameters. 
__init__
(self, save_history=False, verbose=True) Initialize self. See help(type(self)) for accurate signature.
classify
(self, image, nclasses, beta, tolerance=None, max_iter=None) This method uses the Maximum a Posteriori - Markov Random Field (MAP-MRF) approach for segmentation, using Iterated Conditional Modes and Expectation Maximization to estimate the parameters.
3D structural image.
number of desired classes.
smoothing parameter, the higher this number the smoother the output will be.
Value that defines the percentage of change below which the ICM loop stops (convergence tolerance). Default is 1e-05.
Fixed number of desired iterations. Default is 100. If the user only specifies this parameter, the tolerance value will not be considered. If neither of these two parameters is specified, the defaults are used.
3D segmented image with all tissue types specified in nclasses.
3D final refined segmentation containing all tissue types.
3D probability map of each tissue type.
dipy.segment.tissue.
add_noise
(signal, snr, S0, noise_type='rician') Add noise of specified distribution to the signal from a single voxel.
The signal in the voxel.
The desired signal-to-noise ratio. (See notes below.) If snr is None, return the signal as-is.
Reference signal for specifying snr.
The distribution of noise added. Can be either 'gaussian' for Gaussian distributed noise, 'rician' for Rice-distributed noise (default) or 'rayleigh' for a Rayleigh distribution.
Signal with added noise.
Notes
SNR is defined here, following [1], as S0 / sigma, where sigma is the standard deviation of the two Gaussian distributions forming the real and imaginary components of the Rician noise distribution (see [2]).
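This construction of Rician noise (two Gaussian components with standard deviation sigma = S0 / snr, combined as a magnitude) can be sketched as follows. The helper name and seeding are hypothetical, for illustration only:

```python
import numpy as np

# Sketch of Rician noise as described in the notes above: the noisy
# magnitude signal is built from Gaussian real and imaginary components,
# each with standard deviation sigma = S0 / snr.
def add_rician_noise(signal, snr, S0, seed=0):
    rng = np.random.default_rng(seed)
    sigma = S0 / snr
    real = signal + rng.normal(0.0, sigma, signal.shape)
    imag = rng.normal(0.0, sigma, signal.shape)
    return np.sqrt(real ** 2 + imag ** 2)

signal = np.full((2, 2, 2, 100), 100.0)
noisy = add_rician_noise(signal, snr=10.0, S0=100.0)
print(noisy.shape)  # same shape as the input signal
```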
References
[1] Descoteaux, Angelino, Fitzgibbons and Deriche (2007). Regularized, fast and robust q-ball imaging. MRM, 58: 497-510.
[2] Gudbjartsson and Patz (1995). The Rician distribution of noisy MRI data. MRM, 34: 910-914.
Examples
>>> import numpy as np
>>> signal = np.arange(800).reshape(2, 2, 2, 100)
>>> signal_w_noise = add_noise(signal, 10., 100., noise_type='rician')