workflows
workflows.align
workflows.base
workflows.combined_workflow
workflows.denoise
workflows.docstring_parser
workflows.flow_runner
workflows.io
workflows.mask
workflows.multi_io
workflows.reconst
workflows.segment
workflows.stats
workflows.tracking
workflows.viz
workflows.workflow
AffineMap
AffineRegistration
AffineTransform3D
ApplyTransformFlow
CCMetric
DiffeomorphicMap
EMMetric
ImageRegistrationFlow
MutualInformationMetric
ResliceFlow
RigidTransform3D
SSDMetric
SlrWithQbxFlow
SymmetricDiffeomorphicRegistration
SynRegistrationFlow
TranslationTransform3D
Workflow
IntrospectiveArgumentParser
NumpyDocString
CombinedWorkflow
Workflow
GibbsRingingFlow
LPCAFlow
MPPCAFlow
NLMeansFlow
Workflow
NumpyDocString
Reader
IntrospectiveArgumentParser
FetchFlow
IoInfoFlow
SplitFlow
Workflow
MaskFlow
Workflow
IOIterator
ConstrainedSphericalDeconvModel
CsaOdfModel
DiffusionKurtosisModel
ReconstCSAFlow
ReconstCSDFlow
ReconstDkiFlow
ReconstDtiFlow
ReconstIvimFlow
ReconstMAPMRIFlow
TensorModel
Workflow
LabelsBundlesFlow
MedianOtsuFlow
RecoBundles
RecoBundlesFlow
Space
StatefulTractogram
Workflow
BundleAnalysisTractometryFlow
BundleShapeAnalysis
LinearMixedModelsFlow
SNRinCCFlow
Space
StatefulTractogram
TensorModel
Workflow
BinaryStoppingCriterion
ClosestPeakDirectionGetter
CmcStoppingCriterion
DeterministicMaximumDirectionGetter
LocalFiberTrackingPAMFlow
LocalTracking
PFTrackingPAMFlow
ParticleFilteringTracking
ProbabilisticDirectionGetter
Space
StatefulTractogram
ThresholdStoppingCriterion
Workflow
HorizonFlow
Workflow
Workflow
workflows
workflows.align
Classes (documented in detail below): AffineMap, AffineRegistration, AffineTransform3D, ApplyTransformFlow, CCMetric, DiffeomorphicMap, EMMetric, ImageRegistrationFlow, MutualInformationMetric, ResliceFlow, RigidTransform3D, SSDMetric, SlrWithQbxFlow, SymmetricDiffeomorphicRegistration, SynRegistrationFlow, TranslationTransform3D, Workflow. The registration workflow (ImageRegistrationFlow) is organized as a collection of different functions.
Functions:
- Check the dimensions of the input images.
- Load data and other information from a nifti file.
- Reslice data with new voxel resolution defined by new_vox_size.
- Save a data array into a nifti file.
- Save Quality Assurance metrics.
- Utility function for registering large tractograms.
- Transformation to align the center of mass of the input images.
- Apply affine transformation to streamlines.
workflows.base
Classes: IntrospectiveArgumentParser, NumpyDocString.
workflows.combined_workflow
Classes: CombinedWorkflow, Workflow.
workflows.denoise
Classes: GibbsRingingFlow, LPCAFlow, MPPCAFlow, NLMeansFlow, Workflow.
Functions:
- Standard deviation estimation from local patches.
- Suppresses Gibbs ringing artefacts of image volumes.
- A general function for creating diffusion MR gradients.
- Load data and other information from a nifti file.
- Performs local PCA denoising according to Manjon et al.
- Performs PCA-based denoising using the Marcenko-Pastur distribution [1].
- Non-local means for denoising 3D and 4D images.
- PCA-based local noise estimation.
- Read b-values and b-vectors from disk.
- Save a data array into a nifti file.
workflows.docstring_parser
This was taken directly from the file docscrape.py of the numpydoc package.
Copyright (C) 2008 Stefan van der Walt <stefan@mentat.za.net>, Pauli Virtanen <pav@iki.fi>
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Classes and functions:
- Reader: A line-based string reader.
- Deindent a list of lines maximally.
- Issue a warning, or maybe ignore it or raise an exception.
workflows.flow_runner
Functions:
- Transforms the logging level passed on the command line into a proper logging level name.
- Wraps the process of building an argparser that reflects the workflow that we want to run, along with some generic parameters like logging, force and output strategies.
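For context, a minimal sketch of how these helpers are typically used: a DIPY workflow is exposed as a command-line script by passing an instance to run_flow, which builds the argparser from the workflow's run() method and handles the generic logging/force/output parameters. The choice of ResliceFlow here is illustrative.

    # Hypothetical CLI script; any Workflow subclass can be passed to run_flow.
    from dipy.workflows.flow_runner import run_flow
    from dipy.workflows.align import ResliceFlow

    if __name__ == "__main__":
        # Builds the parser, parses sys.argv and calls ResliceFlow.run().
        run_flow(ResliceFlow())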
workflows.io
Classes: FetchFlow, IoInfoFlow, SplitFlow, Workflow.
Functions:
- Get the names and default values of a callable object's parameters.
- Return all members of an object as (name, value) pairs sorted by name.
- Return true if the object is a user-defined function.
- Load data and other information from a nifti file.
- Save a data array into a nifti file.
workflows.mask
Classes: MaskFlow, Workflow.
Functions:
- Load data and other information from a nifti file.
- Save a data array into a nifti file.
workflows.multi_io
IOIterator: Create output filenames that work nicely with multiple input files from multiple directories (processing multiple subjects with one command).

Functions:
- Return the longest common substring from the beginning of sa and sb.
- Concatenate list of inputs.
- Generate a list of output file paths based on input files and output strategies.
- Return a list of paths matching a pathname pattern.
- Create an IOIterator from the parameters.
- Create an IOIterator using introspection.
workflows.reconst
Classes:
- ConstrainedSphericalDeconvModel
- CsaOdfModel: Implementation of the Constant Solid Angle reconstruction method.
- DiffusionKurtosisModel: Class for the Diffusion Kurtosis Model.
- ReconstCSAFlow
- ReconstCSDFlow
- ReconstDkiFlow
- ReconstDtiFlow
- ReconstIvimFlow
- ReconstMAPMRIFlow
- TensorModel: Diffusion Tensor.
- Workflow
Functions:
- Selector function to switch between the 2-stage Trust-Region Reflective based NLLS fitting method (also containing the linear fit), trr, and the Variable Projections based fitting method, varpro.
- Automatic estimation of single-shell single-tissue (ssst) response.
- Axial Diffusivity (AD) of a diffusion tensor.
- Color fractional anisotropy of a diffusion tensor.
- Return Fractional anisotropy (FA) of a diffusion tensor.
- Geodesic anisotropy (GA) of a diffusion tensor.
- Mode (MO) of a diffusion tensor [1].
- A general function for creating diffusion MR gradients.
- Safely evaluate an expression node or a string containing a Python expression.
- Load data and other information from a nifti file.
- Load only the data array from a nifti file.
- Returns the six lower triangular values of the tensor and a dummy variable if b0 is not None.
- Mean Diffusivity (MD) of a diffusion tensor.
- Returns a Nifti1Image with a symmetric matrix intent.
- Fit the model to data and compute peaks and metrics.
- Save SH, directions, indices and values of peaks to Nifti.
- Radial Diffusivity (RD) of a diffusion tensor.
- Read b-values and b-vectors from disk.
- Save a data array into a nifti file.
- Save all important attributes of object PeaksAndMetrics in a PAM5 file (HDF5).
- Extract the diffusion tensor eigenvalues, the diffusion tensor eigenvector matrix, and the 15 independent elements of the kurtosis tensor from the model parameters estimated from the DKI model.
- Issue a warning, or maybe ignore it or raise an exception.
workflows.segment
Classes:
- LabelsBundlesFlow
- MedianOtsuFlow
- RecoBundles
- RecoBundlesFlow
- Space: Enum to simplify future change to convention.
- StatefulTractogram: Class for the stateful representation of collections of streamlines; designed to be identical no matter the file format (trk, tck, vtk, fib, dpy).
- Workflow
Functions:
- Load data and other information from a nifti file.
- Load the stateful tractogram from any format (trk, tck, vtk, fib, dpy).
- Simple brain extraction tool method for images from DWI data.
- Save a data array into a nifti file.
- Save the stateful tractogram in any format (trk, tck, vtk, fib, dpy).
- Return the current time in seconds since the Epoch.
workflows.stats
Classes:
- BundleAnalysisTractometryFlow
- BundleShapeAnalysis
- LinearMixedModelsFlow
- SNRinCCFlow
- Space: Enum to simplify future change to convention.
- StatefulTractogram: Class for the stateful representation of collections of streamlines; designed to be identical no matter the file format (trk, tck, vtk, fib, dpy).
- TensorModel: Diffusion Tensor.
- Workflow
Functions:
- Calculates dti measures (e.g. FA, MD) per point on streamlines and ...
- Calculates assignment maps of the target bundle with reference to model bundle centroids.
- Multidimensional binary dilation with the given structuring element.
- Compute the bounding box of nonzero intensity voxels in the volume.
- Applies statistical analysis on bundles and saves the results in a specified directory ...
- Calculates bundle shape similarity between two given bundles using the bundle adjacency (BA) metric.
- Return a list of paths matching a pathname pattern.
- A general function for creating diffusion MR gradients.
- Load data and other information from a nifti file.
- Load a PeaksAndMetrics HDF5 file (PAM5).
- Load the stateful tractogram from any format (trk, tck, vtk, fib, dpy).
- Return package-like thing and module setup for package name.
- The peak_values function finds the generalized fractional anisotropy (gfa) ...
- Read b-values and b-vectors from disk.
- Save a data array into a nifti file.
- Save the stateful tractogram in any format (trk, tck, vtk, fib, dpy).
- Segment the cfa inside roi using the values from threshold as bounds.
- Return the current time in seconds since the Epoch.
- Apply affine transformation to streamlines.
workflows.tracking
Classes:
- BinaryStoppingCriterion
- ClosestPeakDirectionGetter: A direction getter that returns the closest odf peak to the previous tracking direction.
- CmcStoppingCriterion: Continuous map criterion (CMC) stopping criterion from [1].
- DeterministicMaximumDirectionGetter: Return direction of a sphere with the highest probability mass function (pmf).
- LocalFiberTrackingPAMFlow
- LocalTracking
- PFTrackingPAMFlow
- ParticleFilteringTracking
- ProbabilisticDirectionGetter: Randomly samples direction of a sphere based on probability mass function (pmf).
- Space: Enum to simplify future change to convention.
- StatefulTractogram: Class for the stateful representation of collections of streamlines; designed to be identical no matter the file format (trk, tck, vtk, fib, dpy).
- ThresholdStoppingCriterion
- Workflow

Functions:
- Load data and other information from a nifti file.
- Load a PeaksAndMetrics HDF5 file (PAM5).
- Save the stateful tractogram in any format (trk, tck, vtk, fib, dpy).
workflows.viz
Classes: HorizonFlow, Workflow.

Functions:
- Calculates assignment maps of the target bundle with reference to model bundle centroids.
- Write a standard nifti header from spatial attributes.
- Interactive medical visualization - Invert the Horizon!
- Create colors for streamlines to be used in actor.line.
- Load data and other information from a nifti file.
- Load a PeaksAndMetrics HDF5 file (PAM5).
- Load the stateful tractogram from any format (trk, tck, vtk, fib, dpy).
- Convert a Numpy color array to a vtk color array.
- Return package-like thing and module setup for package name.
- Join two or more pathname components, inserting '/' as needed.
workflows.workflow
Classes: Workflow.

Functions:
- Create an IOIterator using introspection.
AffineMap
class dipy.workflows.align.AffineMap(affine, domain_grid_shape=None, domain_grid2world=None, codomain_grid_shape=None, codomain_grid2world=None)
Bases: object
Methods
- get_affine: Return the value of the transformation, not a reference.
- set_affine: Set the affine transform (operating in physical space).
- transform: Transform the input image from co-domain to domain space.
- transform_inverse: Transform the input image from domain to co-domain space.
__init__(affine, domain_grid_shape=None, domain_grid2world=None, codomain_grid_shape=None, codomain_grid2world=None)
AffineMap
Implements an affine transformation whose domain is given by domain_grid and domain_grid2world, and whose co-domain is given by codomain_grid and codomain_grid2world.
The actual transform is represented by the affine matrix, which operate in world coordinates. Therefore, to transform a moving image towards a static image, we first map each voxel (i,j,k) of the static image to world coordinates (x,y,z) by applying domain_grid2world. Then we apply the affine transform to (x,y,z) obtaining (x’, y’, z’) in moving image’s world coordinates. Finally, (x’, y’, z’) is mapped to voxel coordinates (i’, j’, k’) in the moving image by multiplying (x’, y’, z’) by the inverse of codomain_grid2world. The codomain_grid_shape is used analogously to transform the static image towards the moving image when calling transform_inverse.
If the domain/co-domain information is not provided (None) then the sampling information needs to be specified each time the transform or transform_inverse is called to transform images. Note that such sampling information is not necessary to transform points defined in physical space, such as streamlines.
affine : the matrix defining the affine transform, where dim is the dimension of the space this map operates in (2 for 2D images, 3 for 3D images). If None, then self represents the identity transformation.
domain_grid_shape : the shape of the default domain sampling grid. When transform is called to transform an image, the resulting image will have this shape, unless different sampling information is provided. If None, then the sampling grid shape must be specified each time the transform method is called.
domain_grid2world : the grid-to-world transform associated with the domain grid. If None (the default), then the grid-to-world transform is assumed to be the identity.
codomain_grid_shape : the shape of the default co-domain sampling grid. When transform_inverse is called to transform an image, the resulting image will have this shape, unless different sampling information is provided. If None (the default), then the sampling grid shape must be specified each time the transform_inverse method is called.
codomain_grid2world : the grid-to-world transform associated with the co-domain grid. If None (the default), then the grid-to-world transform is assumed to be the identity.
get_affine()
Return the value of the transformation, not a reference.
Returns: a copy of the transform, not a reference.
set_affine(affine)
Set the affine transform (operating in physical space).
Also sets self.affine_inv - the inverse of affine, or None if there is no inverse.
affine : the matrix representing the affine transform operating in physical space. The domain and co-domain information remains unchanged. If None, then self represents the identity transformation.
transform(image, interpolation='linear', image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False)
Transform the input image from co-domain to domain space.
By default, the transformed image is sampled at a grid defined by self.domain_shape and self.domain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.
image : the image to be transformed.
interpolation : the type of interpolation to be used, either 'linear' (for k-linear interpolation) or 'nearest' for nearest neighbor.
image_grid2world : the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.
sampling_grid_shape : the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).
sampling_grid2world : the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.
resample_only : if False (the default) the affine transform is applied normally; if True, the affine transform is not applied and the input image is just re-sampled on the domain grid of this transform.
Returns: the transformed image, sampled at the requested grid.
transform_inverse(image, interpolation='linear', image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False)
Transform the input image from domain to co-domain space.
By default, the transformed image is sampled at a grid defined by self.codomain_shape and self.codomain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.
image : the image to be transformed.
interpolation : the type of interpolation to be used, either 'linear' (for k-linear interpolation) or 'nearest' for nearest neighbor.
image_grid2world : the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.
sampling_grid_shape : the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).
sampling_grid2world : the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.
resample_only : if False (the default) the affine transform is applied normally; if True, the affine transform is not applied and the input image is just re-sampled on the domain grid of this transform.
Returns: the transformed image, sampled at the requested grid.
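As a minimal sketch of the API described above (the synthetic volumes and identity matrices are illustrative assumptions, not part of this documentation), an AffineMap can resample a moving image onto the static image's grid:

    import numpy as np
    from dipy.align.imaffine import AffineMap

    # Synthetic stand-ins for images normally loaded with load_nifti.
    static = np.zeros((64, 64, 64))
    moving = np.random.rand(60, 60, 60)

    # Identity world affine and grid-to-world matrices keep the example simple.
    affine_map = AffineMap(np.eye(4),
                           domain_grid_shape=static.shape,
                           domain_grid2world=np.eye(4),
                           codomain_grid_shape=moving.shape,
                           codomain_grid2world=np.eye(4))

    # Resample the moving image on the static (domain) grid.
    resampled = affine_map.transform(moving, interpolation='linear')
    print(resampled.shape)  # (64, 64, 64)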
AffineRegistration
class dipy.workflows.align.AffineRegistration(metric=None, level_iters=None, sigmas=None, factors=None, method='L-BFGS-B', ss_sigma_factor=None, options=None, verbosity=1)
Bases: object
Methods
- optimize: Start the optimization process.
__init__(metric=None, level_iters=None, sigmas=None, factors=None, method='L-BFGS-B', ss_sigma_factor=None, options=None, verbosity=1)
Initialize an instance of the AffineRegistration class.
an instance of a metric. The default is None, implying the Mutual Information metric with default settings.
the number of iterations at each scale of the scale space. level_iters[0] corresponds to the coarsest scale, level_iters[-1] the finest, where n is the length of the sequence. By default, a 3-level scale space with iterations sequence equal to [10000, 1000, 100] will be used.
custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0].
custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1].
optimization method to be used. If Scipy version < 0.12, then only L-BFGS-B is available. Otherwise, method can be any gradient-based method available in dipy.core.Optimize: CG, BFGS, Newton-CG, dogleg or trust-ncg. The default is ‘L-BFGS-B’.
If None, this parameter is not used and an isotropic scale space with the given factors and sigmas will be built. If not None, an anisotropic scale space will be used by automatically selecting the smoothing sigmas along each axis according to the voxel dimensions of the given image. The ss_sigma_factor is used to scale the automatically computed sigmas. For example, in the isotropic case, the sigma of the kernel will be \(factor * (2 ^ i)\) where \(i = 1, 2, ..., n_scales - 1\) is the scale (the finest resolution image \(i=0\) is never smoothed). The default is None.
extra optimization options. The default is None, implying no extra options are passed to the optimizer.
verbosity : set the verbosity level of the algorithm: 0 - do not print anything; 1 - print information about the current status of the algorithm; 2 - print high level information of the components involved in the registration that can be used to detect a failing component; 3 - print as much information as possible to isolate the cause of a bug. Default: 1.
docstring_addendum = 'verbosity: int (one of {0, 1, 2, 3}), optional\n Set the verbosity level of the algorithm:\n 0 : do not print anything\n 1 : print information about the current status of the algorithm\n 2 : print high level information of the components involved in\n the registration that can be used to detect a failing\n component.\n 3 : print as much information as possible to isolate the cause\n of a bug.\n Default: 1\n '

optimize(static, moving, transform, params0, static_grid2world=None, moving_grid2world=None, starting_affine=None, ret_metric=False)
Start the optimization process.
the image to be used as reference during optimization.
the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix
the transformation with respect to whose parameters the gradient must be computed
parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.
the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.
the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.
starting_affine : string, matrix, or None. If a string, one of 'mass' (align centers of gravity), 'voxel-origin' (align physical coordinates of voxel (0,0,0)) or 'centers' (align physical coordinates of central voxels); if a matrix, an array of shape (dim+1, dim+1) to start from; if None, start from identity. The default is None.
ret_metric : if True, also return the parameters for measuring the similarity between the images (default 'False'), i.e. the metric containing the optimal parameters and the distance between the images.
the resulting affine transformation
the optimal parameters (translation, rotation, shear, etc.)
the value of the function at the optimal parameters.
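A sketch of a typical optimize call follows; the random volumes, identity grid-to-world matrices and small iteration counts are assumptions made to keep the example self-contained (real data would be loaded with load_nifti, and the default level_iters are [10000, 1000, 100]):

    import numpy as np
    from dipy.align.imaffine import AffineRegistration, MutualInformationMetric
    from dipy.align.transforms import TranslationTransform3D, RigidTransform3D

    static = np.random.rand(32, 32, 32)
    moving = np.random.rand(32, 32, 32)
    static_grid2world = np.eye(4)
    moving_grid2world = np.eye(4)

    affreg = AffineRegistration(metric=MutualInformationMetric(nbins=32),
                                level_iters=[100, 10, 5],
                                sigmas=[3.0, 1.0, 0.0],
                                factors=[4, 2, 1])

    # Coarse translation first, seeded by center-of-mass alignment.
    translation = affreg.optimize(static, moving, TranslationTransform3D(),
                                  None, static_grid2world, moving_grid2world,
                                  starting_affine='mass')

    # Refine with a rigid transform, starting from the translation result.
    rigid = affreg.optimize(static, moving, RigidTransform3D(), None,
                            static_grid2world, moving_grid2world,
                            starting_affine=translation.affine)
    warped = rigid.transform(moving)  # optimize returns an AffineMap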
AffineTransform3D
class dipy.workflows.align.AffineTransform3D
Bases: dipy.align.transforms.Transform
Methods
- Parameter values corresponding to the identity transform
- Jacobian function of this transform
- Matrix representation of this transform with the given parameters
- get_dim
- get_number_of_parameters
ApplyTransformFlow
class dipy.workflows.align.ApplyTransformFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- get_io_iterator: Create an iterator for IO.
- get_short_name: Return a short name for the workflow used to subdivide.
- get_sub_runs: Return no sub runs since this is a simple workflow.
- manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
- run
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
run(static_image_files, moving_image_files, transform_map_file, transform_type='affine', out_dir='', out_file='transformed.nii.gz')
static_image_files : Path of the static image file.
moving_image_files : Path of the moving image(s). It can be a single image or a folder containing multiple images.
transform_map_file : For the affine case, a text (*.txt) file containing the affine matrix; for the diffeomorphic case, a nifti file containing the mapping displacement field in each voxel, with shape (x, y, z, 3, 2).
transform_type : Select the transformation type to apply, 'affine' or 'diffeomorphic' (default 'affine').
out_dir : Directory to save the transformed files (default '').
prevent the output files from being overwritten.
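A minimal usage sketch (the file names are placeholders, not files shipped with DIPY):

    from dipy.workflows.align import ApplyTransformFlow

    # 'affine.txt' would be a matrix saved by ImageRegistrationFlow.
    flow = ApplyTransformFlow()
    flow.run('static.nii.gz', 'moving.nii.gz', 'affine.txt',
             transform_type='affine', out_dir='out')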
CCMetric
class dipy.workflows.align.CCMetric(dim, sigma_diff=2.0, radius=4)
Bases: dipy.align.metrics.SimilarityMetric
Methods
- compute_backward: Computes one step bringing the static image towards the moving.
- compute_forward: Computes one step bringing the moving image towards the static.
- free_iteration: Frees the resources allocated during initialization.
- get_energy: Numerical value assigned by this metric to the current image pair.
- initialize_iteration: Prepares the metric to compute one displacement field iteration.
- set_levels_above: Informs the metric how many pyramid levels are above the current one.
- set_levels_below: Informs the metric how many pyramid levels are below the current one.
- set_moving_image: Sets the moving image being compared against the static one.
- set_static_image: Sets the static image being compared against the moving one.
- use_moving_image_dynamics: This is called by the optimizer just after setting the moving image.
- use_static_image_dynamics: This is called by the optimizer just after setting the static image.
__init__(dim, sigma_diff=2.0, radius=4)
Normalized Cross-Correlation Similarity metric.
dim : the dimension of the image domain.
sigma_diff : the standard deviation of the Gaussian smoothing kernel to be applied to the update field at each iteration.
radius : the radius of the squared (cubic) neighborhood at each voxel to be considered to compute the cross correlation.
compute_backward()
Computes one step bringing the static image towards the moving.
Computes the update displacement field to be used for registration of the static image towards the moving image
compute_forward()
Computes one step bringing the moving image towards the static.
Computes the update displacement field to be used for registration of the moving image towards the static image
get_energy()
Numerical value assigned by this metric to the current image pair
Returns the Cross Correlation (data term) energy computed at the largest iteration
initialize_iteration()
Prepares the metric to compute one displacement field iteration.
Pre-computes the cross-correlation factors for efficient computation of the gradient of the Cross Correlation w.r.t. the displacement field. It also pre-computes the image gradients in the physical space by re-orienting the gradients in the voxel space using the corresponding affine transformations.
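In practice the metric is not called directly; it is handed to SymmetricDiffeomorphicRegistration, which drives the compute_forward/compute_backward steps. A minimal sketch, with synthetic volumes and deliberately small level_iters as assumptions:

    import numpy as np
    from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
    from dipy.align.metrics import CCMetric

    static = np.random.rand(32, 32, 32)   # stand-ins for real volumes
    moving = np.random.rand(32, 32, 32)

    metric = CCMetric(3, sigma_diff=2.0, radius=4)
    sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[10, 10, 5])
    mapping = sdr.optimize(static, moving)  # returns a DiffeomorphicMap
    warped = mapping.transform(moving)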
DiffeomorphicMap
class dipy.workflows.align.DiffeomorphicMap(dim, disp_shape, disp_grid2world=None, domain_shape=None, domain_grid2world=None, codomain_shape=None, codomain_grid2world=None, prealign=None)
Bases: object
Methods
- allocate: Creates a zero displacement field.
- compute_inversion_error: Inversion error of the displacement fields.
- expand_fields: Expands the displacement fields from current shape to new_shape.
- get_backward_field: Deformation field to transform an image in the backward direction.
- get_forward_field: Deformation field to transform an image in the forward direction.
- get_simplified_transform: Constructs a simplified version of this Diffeomorphic Map.
- interpret_matrix: Try to interpret obj as a matrix.
- inverse: Inverse of this DiffeomorphicMap instance.
- shallow_copy: Shallow copy of this DiffeomorphicMap instance.
- transform: Warps an image in the forward direction.
- transform_inverse: Warps an image in the backward direction.
- warp_endomorphism: Composition of this DiffeomorphicMap with a given endomorphism.
__init__(dim, disp_shape, disp_grid2world=None, domain_shape=None, domain_grid2world=None, codomain_shape=None, codomain_grid2world=None, prealign=None)
DiffeomorphicMap
Implements a diffeomorphic transformation on the physical space. The deformation fields encoding the direct and inverse transformations share the same domain discretization (both the discretization grid shape and voxel-to-space matrix). The input coordinates (physical coordinates) are first aligned using prealign, and then displaced using the corresponding vector field interpolated at the aligned coordinates.
dim : the transformation's dimension.
disp_shape : the number of slices (if 3D), rows and columns of the deformation field's discretization.
disp_grid2world : the voxel-to-space transformation between the deformation field's grid and space.
domain_shape : the number of slices (if 3D), rows and columns of the default discretization of this map's domain.
domain_grid2world : the default voxel-to-space transformation between this map's discretization and physical space.
codomain_shape : the number of slices (if 3D), rows and columns of the images that are 'normally' warped using this transformation in the forward direction (this will provide default transformation parameters to warp images under this transformation). By default, we assume that the inverse transformation is 'normally' used to warp images with the same discretization and voxel-to-space transformation as the deformation field grid.
codomain_grid2world : the voxel-to-space transformation of images that are 'normally' warped using this transformation (in the forward direction).
prealign : the linear transformation to be applied to align input images to the reference space before warping under the deformation field.
allocate()
Creates a zero displacement field
Creates a zero displacement field (the identity transformation).
compute_inversion_error()
Inversion error of the displacement fields
Estimates the inversion error of the displacement fields by computing statistics of the residual vectors obtained after composing the forward and backward displacement fields.
Returns:
residual : the displacement field resulting from composing the forward and backward displacement fields of this transformation (the residual should be zero for a perfect diffeomorphism).
stats : statistics from the norms of the vectors of the residual displacement field: maximum, mean and standard deviation.
Notes
Since the forward and backward displacement fields have the same discretization, the final composition is given by
comp[i] = forward[ i + Dinv * backward[i]]
where Dinv is the space-to-grid transformation of the displacement fields
expand_fields(expand_factors, new_shape)
Expands the displacement fields from current shape to new_shape
Up-samples the discretization of the displacement fields to be of new_shape shape.
the factors scaling current spacings (voxel sizes) to spacings in the expanded discretization.
the shape of the arrays holding the up-sampled discretization
get_backward_field()
Deformation field to transform an image in the backward direction
Returns the deformation field that must be used to warp an image under this transformation in the backward direction (note the ‘is_inverse’ flag).
get_forward_field()
Deformation field to transform an image in the forward direction
Returns the deformation field that must be used to warp an image under this transformation in the forward direction (note the ‘is_inverse’ flag).
get_simplified_transform()
Constructs a simplified version of this Diffeomorphic Map
The simplified version incorporates the pre-align transform, as well as the domain and codomain affine transforms, into the displacement field. The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretization. As a result, self.prealign, self.disp_grid2world, self.domain_grid2world and self.codomain_grid2world will be None (denoting Identity) in the resulting diffeomorphic map.
interpret_matrix(obj)
Try to interpret obj as a matrix
Some operations are performed faster if we know in advance if a matrix is the identity (so we can skip the actual matrix-vector multiplication). This function returns None if the given object is None or the ‘identity’ string. It returns the same object if it is a numpy array. It raises an exception otherwise.
any object
the same object given as argument if obj is None or a numpy array. None if obj is the ‘identity’ string.
inverse()
Inverse of this DiffeomorphicMap instance
Returns a diffeomorphic map object representing the inverse of this transformation. The internal arrays are not copied but just referenced.
the inverse of this diffeomorphic map.
shallow_copy()
Shallow copy of this DiffeomorphicMap instance
Creates a shallow copy of this diffeomorphic map (the arrays are not copied but just referenced)
the shallow copy of this diffeomorphic map
transform(image, interpolation='linear', image_world2grid=None, out_shape=None, out_grid2world=None)
Warps an image in the forward direction
Transforms the input image under this transformation in the forward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform(…) warps the image forwards, else it warps the image backwards).
the image to be warped under this transformation in the forward direction
the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor
the transformation bringing world (space) coordinates to voxel coordinates of the image given as input
the number of slices, rows and columns of the desired warped image
warped image to physical space
the warped image under this transformation in the forward direction
Notes
See _warp_forward and _warp_backward documentation for further information.
transform_inverse(image, interpolation='linear', image_world2grid=None, out_shape=None, out_grid2world=None)
Warps an image in the backward direction
Transforms the input image under this transformation in the backward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform_inverse(…) warps the image backwards, else it warps the image forwards)
the image to be warped under this transformation in the forward direction
the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor
the transformation bringing world (space) coordinates to voxel coordinates of the image given as input
the number of slices, rows, and columns of the desired warped image
warped image to physical space
warped image under this transformation in the backward direction
Notes
See _warp_forward and _warp_backward documentation for further information.
warp_endomorphism(phi)
Composition of this DiffeomorphicMap with a given endomorphism
Creates a new DiffeomorphicMap C with the same properties as self and composes its displacement fields with phi's corresponding fields. The resulting diffeomorphism is of the form C(x) = phi(self(x)) with inverse C^{-1}(y) = self^{-1}(phi^{-1}(y)). We assume that phi is an endomorphism with the same discretization and domain affine as self to ensure that the composition inherits self's properties (we also assume that the pre-aligning matrix of phi is None or identity).
phi : the endomorphism to be warped by this diffeomorphic map.
Returns: the composition of this diffeomorphic map with the endomorphism given as input.
Notes
The problem with our current representation of a DiffeomorphicMap is that the set of Diffeomorphism that can be represented this way (a pre-aligning matrix followed by a non-linear endomorphism given as a displacement field) is not closed under the composition operation.
Supporting a general DiffeomorphicMap class, closed under composition, may be extremely costly computationally, and the kind of transformations we actually need for Avants’ mid-point algorithm (SyN) are much simpler.
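Continuing the CCMetric sketch above, a DiffeomorphicMap returned by SymmetricDiffeomorphicRegistration.optimize can be used as follows (a sketch; 'mapping', 'static' and 'moving' are the names assumed in that example):

    warped_moving = mapping.transform(moving)          # forward warp
    warped_static = mapping.transform_inverse(static)  # backward warp

    # Residual field plus (maximum, mean, standard deviation) statistics.
    residual, stats = mapping.compute_inversion_error()

    inv_map = mapping.inverse()        # references, not copies, the fields
    copy_map = mapping.shallow_copy()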
EMMetric
class dipy.workflows.align.EMMetric(dim, smooth=1.0, inner_iter=5, q_levels=256, double_gradient=True, step_type='gauss_newton')
Bases: dipy.align.metrics.SimilarityMetric
Methods
- compute_backward: Computes one step bringing the static image towards the moving.
- compute_demons_step: Demons step for EM metric.
- compute_forward: Computes one step bringing the reference image towards the static.
- compute_gauss_newton_step: Computes the Gauss-Newton energy minimization step.
- free_iteration: Frees the resources allocated during initialization.
- get_energy: The numerical value assigned by this metric to the current image pair.
- initialize_iteration: Prepares the metric to compute one displacement field iteration.
- set_levels_above: Informs the metric how many pyramid levels are above the current one.
- set_levels_below: Informs the metric how many pyramid levels are below the current one.
- set_moving_image: Sets the moving image being compared against the static one.
- set_static_image: Sets the static image being compared against the moving one.
- use_moving_image_dynamics: This is called by the optimizer just after setting the moving image.
- use_static_image_dynamics: This is called by the optimizer just after setting the static image.
__init__(dim, smooth=1.0, inner_iter=5, q_levels=256, double_gradient=True, step_type='gauss_newton')
Expectation-Maximization Metric
Similarity metric based on the Expectation-Maximization algorithm to handle multi-modal images. The transfer function is modeled as a set of hidden random variables that are estimated at each iteration of the algorithm.
dim : the dimension of the image domain.
smooth : smoothness parameter; the larger the value the smoother the deformation field.
inner_iter : number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level, that parameter must be set for the optimizer, not the metric).
q_levels : number of quantization levels (equal to the number of hidden variables in the EM algorithm).
double_gradient : if True, the gradient of the expected static image under the moving modality will be added to the gradient of the moving image, and similarly, the gradient of the expected moving image under the static modality will be added to the gradient of the static image.
step_type : the optimization schedule to be used in the multi-resolution Gauss-Seidel optimization algorithm (not used if Demons Step is selected).
compute_backward()
Computes one step bringing the static image towards the moving.
Computes the update displacement field to be used for registration of the static image towards the moving image
compute_demons_step(forward_step=True)
Demons step for EM metric
if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
the Demons step
compute_forward()
Computes one step bringing the reference image towards the static.
Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm
compute_gauss_newton_step(forward_step=True)
Computes the Gauss-Newton energy minimization step
Computes the Newton step to minimize this energy, i.e., minimizes the linearized energy function with respect to the regularized displacement field (this step does not require post-smoothing, as opposed to the demons step, which does not include regularization). To accelerate convergence we use the multi-grid Gauss-Seidel algorithm proposed by Bruhn and Weickert [Bruhn05].
if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
the Newton step
References
[Bruhn05] A. Bruhn and J. Weickert, "Towards ultimate motion estimation: combining highest accuracy with real-time performance", 10th IEEE International Conference on Computer Vision (ICCV), 2005.
get_energy()
The numerical value assigned by this metric to the current image pair
Returns the EM (data term) energy computed at the largest iteration
initialize_iteration()
Prepares the metric to compute one displacement field iteration.
Pre-computes the transfer functions (hidden random variables) and variances of the estimators. Also pre-computes the gradient of both input images. Note that once the images are transformed to the opposite modality, the gradient of the transformed images can be used with the gradient of the corresponding modality in the same fashion as diff-demons does for mono-modality images. If the flag self.use_double_gradient is True these gradients are averaged.
use_moving_image_dynamics(original_moving_image, transformation)
This is called by the optimizer just after setting the moving image.
EMMetric takes advantage of the image dynamics by computing the current moving image mask from the original_moving_image mask (warped by nearest neighbor interpolation)
the original moving image from which the current moving image was generated, the current moving image is the one that was provided via ‘set_moving_image(…)’, which may not be the same as the original moving image but a warped version of it.
the transformation that was applied to the original_moving_image to generate the current moving image
use_static_image_dynamics(original_static_image, transformation)
This is called by the optimizer just after setting the static image.
EMMetric takes advantage of the image dynamics by computing the current static image mask from the original_static_image mask (warped by nearest neighbor interpolation)
the original static image from which the current static image was generated, the current static image is the one that was provided via ‘set_static_image(…)’, which may not be the same as the original static image but a warped version of it (even the static image changes during Symmetric Normalization, not only the moving one).
the transformation that was applied to the original_static_image to generate the current static image
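A brief sketch of constructing this metric for multi-modal SyN registration (the volume names are assumptions; compare the CCMetric example above):

    from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
    from dipy.align.metrics import EMMetric

    # EM metric for multi-modal (e.g. T1 vs. T2) nonlinear registration.
    metric = EMMetric(3, smooth=1.0, inner_iter=5, q_levels=256,
                      double_gradient=True, step_type='gauss_newton')
    sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[10, 10, 5])
    # mapping = sdr.optimize(static_t1, moving_t2)  # arrays assumed preloaded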
ImageRegistrationFlow
class dipy.workflows.align.ImageRegistrationFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
The registration workflow is organized as a collection of different functions. The user can intend to use only one type of registration (such as center of mass or rigid body registration only).
Alternatively, a registration can be done in a progressive manner. For example, using affine registration with progressive set to 'True' will involve center of mass, translation, rigid body and full affine registration. When progressive is False, the registration will include only center of mass and affine registration. Progressive registration is slower but improves the quality.
This can be controlled by using the progressive flag (True by default).
Methods
- affine: Function for full affine registration.
- center_of_mass: Function for the center of mass based image registration.
- get_io_iterator: Create an iterator for IO.
- get_short_name: Return a short name for the workflow used to subdivide.
- get_sub_runs: Return no sub runs since this is a simple workflow.
- manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
- perform_transformation: Function to apply the transformation.
- rigid: Function for rigid body based image registration.
- run
- translate: Function for translation based registration.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
affine(static, static_grid2world, moving, moving_grid2world, affreg, params0, progressive)
Function for full affine registration.
the image to be used as reference during optimization.
the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.
the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix
the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.
parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.
Flag to enable or disable the progressive registration (default True).
center_of_mass(static, static_grid2world, moving, moving_grid2world)
Function for the center of mass based image registration.
the image to be used as reference during optimization.
the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.
the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix
the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.
perform_transformation(static, static_grid2world, moving, moving_grid2world, affreg, params0, transform, affine)
Function to apply the transformation.
the image to be used as reference during optimization.
the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.
the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix
the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.
parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.
rigid(static, static_grid2world, moving, moving_grid2world, affreg, params0, progressive)
Function for rigid body based image registration.
the image to be used as reference during optimization.
the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.
the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix
the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.
parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.
Flag to enable or disable the progressive registration (default True).
run(static_img_files, moving_img_files, transform='affine', nbins=32, sampling_prop=None, metric='mi', level_iters=[10000, 1000, 100], sigmas=[3.0, 1.0, 0.0], factors=[4, 2, 1], progressive=True, save_metric=False, out_dir='', out_moved='moved.nii.gz', out_affine='affine.txt', out_quality='quality_metric.txt')
static_img_files : Path to the static image file.
moving_img_files : Path to the moving image file.
transform : 'com' (center of mass), 'trans' (translation), 'rigid' (rigid body) or 'affine' (full affine, including translation, rotation, shearing and scaling) (default 'affine').
nbins : Number of bins to discretize the joint and marginal PDF (default '32').
sampling_prop : Number of voxels for calculating the PDF; 'None' implies all voxels (default 'None').
metric : Similarity metric to be used (default 'mi', the Mutual Information metric).
level_iters : The number of iterations at each scale of the scale space; level_iters[0] corresponds to the coarsest scale, level_iters[-1] the finest. By default, a 3-level scale space with iterations sequence equal to [10000, 1000, 100] will be used.
sigmas : Custom smoothing parameters to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0].
factors : Custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1].
progressive : Enable/Disable the progressive registration (default 'True').
save_metric : If true, quality assessment metrics are saved to 'quality_metric.txt' (default 'False').
out_dir : Directory to save the transformed image and the affine matrix (default '').
out_moved : Name of the transformed image file (default 'moved.nii.gz').
out_affine : Name of the affine matrix file (default 'affine.txt').
out_quality : Name of the file containing the saved quality metric (default 'quality_metric.txt').
translate(static, static_grid2world, moving, moving_grid2world, affreg, params0)
Function for translation based registration.
the image to be used as reference during optimization.
the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.
the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix
the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.
parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.
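A minimal usage sketch of the workflow (paths are placeholders):

    from dipy.workflows.align import ImageRegistrationFlow

    # progressive=True runs center of mass, translation and rigid stages
    # before the final affine stage.
    flow = ImageRegistrationFlow()
    flow.run('static.nii.gz', 'moving.nii.gz',
             transform='affine', metric='mi', progressive=True,
             out_dir='out', out_moved='moved.nii.gz', out_affine='affine.txt')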
MutualInformationMetric
class dipy.workflows.align.MutualInformationMetric(nbins=32, sampling_proportion=None)
Bases: object
Methods
- distance: Numeric value of the negative Mutual Information.
- distance_and_gradient: Numeric value of the metric and its gradient at given parameters.
- gradient: Numeric value of the metric's gradient at the given parameters.
- setup: Prepare the metric to compute intensity densities and gradients.
__init__(nbins=32, sampling_proportion=None)
Initialize an instance of the Mutual Information metric.
This class implements the methods required by Optimizer to drive the registration process.
the number of bins to be used for computing the intensity histograms. The default is 32.
There are two types of sampling: dense and sparse. Dense sampling uses all voxels for estimating the (joint and marginal) intensity histograms, while sparse sampling uses a subset of them. If sampling_proportion is None, then dense sampling is used. If sampling_proportion is a floating point value in (0,1] then sparse sampling is used, where sampling_proportion specifies the proportion of voxels to be used. The default is None.
Notes
Since we use linear interpolation, images are not, in general, differentiable at exact voxel coordinates, but they are differentiable between voxel coordinates. When using sparse sampling, selected voxels are slightly moved by adding a small random displacement within one voxel to prevent sampling points from being located exactly at voxel coordinates. When using dense sampling, this random displacement is not applied.
distance(params)
Numeric value of the negative Mutual Information.
We need to change the sign so we can use standard minimization algorithms.
the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform
the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters
distance_and_gradient(params)
Numeric value of the metric and its gradient at given parameters.
the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform
the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters
the gradient of the negative Mutual Information
gradient(params)
Numeric value of the metric's gradient at the given parameters.
the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform
the gradient of the negative Mutual Information
setup(transform, static, moving, static_grid2world=None, moving_grid2world=None, starting_affine=None)
Prepare the metric to compute intensity densities and gradients.
The histograms will be set up to compute probability densities of intensities within the minimum and maximum values of static and moving.
the transformation with respect to whose parameters the gradient must be computed
static image
moving image. The dimensions of the static (S, R, C) and moving (S’, R’, C’) images do not need to be the same.
the grid-to-space transform of the static image. The default is None, implying the transform is the identity.
the grid-to-space transform of the moving image. The default is None, implying the spacing along all axes is 1.
the pre-aligning matrix (an affine transform) that roughly aligns the moving image towards the static image. If None, no pre-alignment is performed. If a pre-alignment matrix is available, it is recommended to provide this matrix as starting_affine instead of manually transforming the moving image to reduce interpolation artifacts. The default is None, implying no pre-alignment is performed.
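A sketch of driving the metric directly, outside of AffineRegistration (the synthetic volumes and the zero parameter vector are assumptions):

    import numpy as np
    from dipy.align.imaffine import MutualInformationMetric
    from dipy.align.transforms import TranslationTransform3D

    static = np.random.rand(32, 32, 32)
    moving = np.random.rand(32, 32, 32)

    metric = MutualInformationMetric(nbins=32, sampling_proportion=0.5)
    metric.setup(TranslationTransform3D(), static, moving)

    params = np.zeros(3)                    # identity translation
    val = metric.distance(params)           # negative mutual information
    val, grad = metric.distance_and_gradient(params)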
ResliceFlow
class dipy.workflows.align.ResliceFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- get_io_iterator: Create an iterator for IO.
- get_short_name: Return a short name for the workflow used to subdivide.
- get_sub_runs: Return no sub runs since this is a simple workflow.
- manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
- run: Reslice data with new voxel resolution defined by new_vox_size.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters avoiding the trouble of having subworkflows parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns class name by default but it is strongly advised to set it to something shorter and easier to write on commandline.
run(input_files, new_vox_size, order=1, mode='constant', cval=0, num_processes=1, out_dir='', out_resliced='resliced.nii.gz')
Reslice data with new voxel resolution defined by new_vox_size.
input_files : Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
new_vox_size : New voxel size.
order : Order of interpolation, from 0 to 5, for resampling/reslicing; 0 is nearest-neighbor interpolation, 1 is trilinear, etc. If you don't want any smoothing, 0 is the option you need (default 1).
mode : Points outside the boundaries of the input are filled according to the given mode: 'constant', 'nearest', 'reflect' or 'wrap' (default 'constant').
cval : Value used for points outside the boundaries of the input if mode='constant' (default 0).
num_processes : Split the calculation to a pool of children processes. This only applies to 4D data arrays. If a positive integer then it defines the size of the multiprocessing pool that will be used. If 0, then the size of the pool will equal the number of cores available. (default 1)
out_dir : Output directory (default input file directory).
out_resliced : Name of the resliced dataset to be saved (default 'resliced.nii.gz').
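A minimal usage sketch (the input path is a placeholder and may contain wildcards for batch processing):

    from dipy.workflows.align import ResliceFlow

    # Reslice to isotropic 2 mm voxels with trilinear interpolation.
    ResliceFlow().run('dwi.nii.gz', [2.0, 2.0, 2.0],
                      order=1, num_processes=4, out_dir='out')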
RigidTransform3D
class dipy.workflows.align.RigidTransform3D
Bases: dipy.align.transforms.Transform
Methods
- Parameter values corresponding to the identity transform
- Jacobian function of this transform
- Matrix representation of this transform with the given parameters
- get_dim
- get_number_of_parameters
__init__()
Rigid transform in 3D (rotation + translation). The parameter vector theta of length 6 is interpreted as follows:
theta[0] : rotation about the x axis
theta[1] : rotation about the y axis
theta[2] : rotation about the z axis
theta[3] : translation along the x axis
theta[4] : translation along the y axis
theta[5] : translation along the z axis
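A small sketch of the Transform API summarized in the methods table above (param_to_matrix and get_number_of_parameters are the assumed names behind the "Matrix representation" and parameter-count entries):

    import numpy as np
    from dipy.align.transforms import RigidTransform3D

    rigid = RigidTransform3D()
    n = rigid.get_number_of_parameters()   # 6

    # The all-zero parameter vector is the identity transform, so its
    # matrix representation is the 4x4 identity.
    theta = np.zeros(n)
    mat = rigid.param_to_matrix(theta)
    assert np.allclose(mat, np.eye(4))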
SSDMetric
class dipy.workflows.align.SSDMetric(dim, smooth=4, inner_iter=10, step_type='demons')
Bases: dipy.align.metrics.SimilarityMetric
Methods
- compute_backward: Computes one step bringing the static image towards the moving.
- compute_demons_step: Demons step for SSD metric.
- compute_forward: Computes one step bringing the reference image towards the static.
- compute_gauss_newton_step: Computes the Gauss-Newton energy minimization step.
- free_iteration: Nothing to free for the SSD metric.
- get_energy: The numerical value assigned by this metric to the current image pair.
- initialize_iteration: Prepares the metric to compute one displacement field iteration.
- set_levels_above: Informs the metric how many pyramid levels are above the current one.
- set_levels_below: Informs the metric how many pyramid levels are below the current one.
- set_moving_image: Sets the moving image being compared against the static one.
- set_static_image: Sets the static image being compared against the moving one.
- use_moving_image_dynamics: This is called by the optimizer just after setting the moving image.
- use_static_image_dynamics: This is called by the optimizer just after setting the static image.
__init__(dim, smooth=4, inner_iter=10, step_type='demons')
Sum of Squared Differences (SSD) Metric
Similarity metric for (mono-modal) nonlinear image registration defined by the sum of squared differences (SSD)
dim : the dimension of the image domain.
smooth : smoothness parameter; the larger the value the smoother the deformation field.
inner_iter : number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level, that parameter must be set for the optimizer, not the metric).
step_type : the displacement field step to be computed when 'compute_forward' and 'compute_backward' are called; either 'demons' or 'gauss_newton'.
compute_backward()
Computes one step bringing the static image towards the moving.
Computes the updated displacement field to be used for registration of the static image towards the moving image
compute_demons_step(forward_step=True)
Demons step for SSD metric
Computes the demons step proposed by Vercauteren et al. [Vercauteren09] for the SSD metric.
if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
the Demons step
References
[Vercauteren09] Tom Vercauteren, Xavier Pennec, Aude Perchant, Nicholas Ayache, "Diffeomorphic Demons: Efficient Non-parametric Image Registration", NeuroImage, 2009.
compute_forward()
Computes one step bringing the reference image towards the static.
Computes the update displacement field to be used for registration of the moving image towards the static image
compute_gauss_newton_step(forward_step=True)
Computes the Gauss-Newton energy minimization step
Minimizes the linearized energy function (Newton step) defined by the sum of squared differences of corresponding pixels of the input images with respect to the displacement field.
if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
if forward_step==True, the forward SSD Gauss-Newton step, else, the backward step
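A minimal usage sketch (not part of the original docstring; the 3D arrays static_img and moving_img and the pyramid schedule are illustrative assumptions):
>>> from dipy.align.metrics import SSDMetric
>>> from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
>>> metric = SSDMetric(dim=3, smooth=4, inner_iter=10, step_type='demons')
>>> sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[10, 10, 5])
>>> mapping = sdr.optimize(static_img, moving_img)  # doctest: +SKIP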
SlrWithQbxFlow
class dipy.workflows.align.SlrWithQbxFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Streamline-based linear registration.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(static_files, moving_files, x0='affine', rm_small_clusters=50, qbx_thr=[40, 30, 20, 15], num_threads=None, greater_than=50, less_than=250, nb_pts=20, progressive=True, out_dir='', out_moved='moved.trk', out_affine='affine.txt', out_stat_centroids='static_centroids.trk', out_moving_centroids='moving_centroids.trk', out_moved_centroids='moved_centroids.trk')
Streamline-based linear registration.
For efficiency we apply the registration on cluster centroids and remove small clusters.
rigid, similarity or affine transformation model (default affine)
Remove clusters that have less than rm_small_clusters (default 50)
Thresholds for QuickBundlesX (default [40, 30, 20, 15])
Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.
Keep streamlines that have length greater than this value (default 50)
Keep streamlines that have length less than this value (default 250)
Number of points for discretizing each streamline (default 20)
(default True)
Output directory (default input file directory)
Filename of moved tractogram (default ‘moved.trk’)
Filename of affine for SLR transformation (default ‘affine.txt’)
Filename of static centroids (default ‘static_centroids.trk’)
Filename of moving centroids (default ‘moving_centroids.trk’)
Filename of moved centroids (default ‘moved_centroids.trk’)
Notes
The order of operations is the following. First, short or long streamlines are removed. Second, the tractogram or a random selection of the tractogram is clustered with QuickBundlesX. Then SLR [Garyfallidis15] is applied.
References
[Garyfallidis15] Garyfallidis et al., “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015.
[Garyfallidis14] Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014.
[Garyfallidis17] Garyfallidis et al., “Recognition of white matter bundles using local and global streamline-based registration and clustering”, NeuroImage, 2017.
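A minimal usage sketch (illustrative; the tractogram paths are hypothetical placeholders):
>>> from dipy.workflows.align import SlrWithQbxFlow
>>> flow = SlrWithQbxFlow()
>>> flow.run('static.trk', 'moving.trk', x0='affine', out_dir='out')  # doctest: +SKIP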
SymmetricDiffeomorphicRegistration
class dipy.workflows.align.SymmetricDiffeomorphicRegistration(metric, level_iters=None, step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, callback=None)
Bases: dipy.align.imwarp.DiffeomorphicRegistration
Methods
- Return the resulting diffeomorphic map.
- Starts the optimization
- Sets the number of iterations at each pyramid level
- Composition of the current displacement field with the given field
__init__(metric, level_iters=None, step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, callback=None)
Symmetric Diffeomorphic Registration (SyN) Algorithm
Performs the multi-resolution optimization algorithm for non-linear registration using a given similarity metric.
the metric to be optimized
the number of iterations at each level of the Gaussian Pyramid (the length of the list defines the number of pyramid levels to be used)
the optimization will stop when the estimated derivative of the energy profile w.r.t. time falls below this threshold
the number of iterations to be performed by the displacement field inversion algorithm
the length of the maximum displacement vector of the update displacement field at each iteration
parameter of the scale-space smoothing kernel. For example, the std. dev. of the kernel will be factor*(2^i) in the isotropic case where i = 0, 1, …, n_scales is the scale
the displacement field inversion algorithm will stop iterating when the inversion error falls below this threshold
a function receiving a SymmetricDiffeomorphicRegistration object to be called after each iteration (this optimizer will call this function passing self as parameter)
get_map()
Return the resulting diffeomorphic map.
Returns the DiffeomorphicMap registering the moving image towards the static image.
optimize(static, moving, static_grid2world=None, moving_grid2world=None, prealign=None)
Starts the optimization
the image to be used as reference during optimization. The displacement fields will have the same discretization as the static image.
the image to be used as “moving” during optimization. Since the deformation fields’ discretization is the same as the static image, it is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘prealign’ matrix
the voxel-to-space transformation associated to the static image
the voxel-to-space transformation associated to the moving image
the affine transformation (operating on the physical space) pre-aligning the moving image towards the static
the diffeomorphic map that brings the moving image towards the static one in the forward direction (i.e. by calling static_to_ref.transform) and the static image towards the moving one in the backward direction (i.e. by calling static_to_ref.transform_inverse).
update(current_displacement, new_displacement, disp_world2grid, time_scaling)
Composition of the current displacement field with the given field
Interpolates new displacement at the locations defined by current_displacement. Equivalently, computes the composition C of the given displacement fields as C(x) = B(A(x)), where A is current_displacement and B is new_displacement. This function is intended to be used with deformation fields of the same sampling (e.g. to be called by a registration algorithm).
the displacement field defining where to interpolate new_displacement
the displacement field to be warped by current_displacement
the space-to-grid transform associated with the displacements’ grid (we assume that both displacements are discretized over the same grid)
scaling factor applied to d2. The effect may be interpreted as moving d1 displacements along a factor (time_scaling) of d2.
the warped displacement field
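A minimal usage sketch (illustrative; static, moving and the two grid2world affines are assumed to come from previously loaded images):
>>> from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
>>> from dipy.align.metrics import CCMetric
>>> metric = CCMetric(3)  # 3D cross-correlation similarity metric
>>> sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[10, 10, 5])
>>> mapping = sdr.optimize(static, moving, static_grid2world, moving_grid2world)  # doctest: +SKIP
>>> warped_moving = mapping.transform(moving)  # doctest: +SKIP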
SynRegistrationFlow
class dipy.workflows.align.SynRegistrationFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
run(static_image_files, moving_image_files, prealign_file='', inv_static=False, level_iters=[10, 10, 5], metric='cc', mopt_sigma_diff=2.0, mopt_radius=4, mopt_smooth=0.0, mopt_inner_iter=0, mopt_q_levels=256, mopt_double_gradient=True, mopt_step_type='', step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, out_dir='', out_warped='warped_moved.nii.gz', out_inv_static='inc_static.nii.gz', out_field='displacement_field.nii.gz')
Path of the static image file.
Path to the moving image file.
The text file containing pre-alignment information via an affine matrix.
Apply the inverse mapping to the static image (default ‘False’).
By default, a 3-level scale space with iterations sequence equal to [10, 10, 5] will be used. The 0-th level corresponds to the finest resolution.
The metric to be used (Default cc, ‘Cross Correlation metric’). metric available: cc (Cross Correlation), ssd (Sum Squared Difference), em (Expectation-Maximization).
Metric option applied on Cross correlation (CC). The standard deviation of the Gaussian smoothing kernel to be applied to the update field at each iteration (default 2.0)
Metric option applied on Cross correlation (CC). the radius of the squared (cubic) neighborhood at each voxel to be considered to compute the cross correlation. (default 4)
Metric option applied on Sum Squared Difference (SSD) and Expectation Maximization (EM). Smoothness parameter, the larger the value the smoother the deformation field. (default 1.0 for EM, 4.0 for SSD)
Metric option applied on Sum Squared Difference (SSD) and Expectation Maximization (EM). This is number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level, that parameter must be set for the optimizer, not the metric). Default 5 for EM, 10 for SSD.
Metric option applied on Expectation Maximization (EM). Number of quantization levels (Default: 256 for EM)
Metric option applied on Expectation Maximization (EM). if True, the gradient of the expected static image under the moving modality will be added to the gradient of the moving image, similarly, the gradient of the expected moving image under the static modality will be added to the gradient of the static image.
Metric option applied on Sum Squared Difference (SSD) and Expectation Maximization (EM). The optimization schedule to be used in the multi-resolution Gauss-Seidel optimization algorithm (not used if Demons Step is selected). Possible value: (‘gauss_newton’, ‘demons’). default: ‘gauss_newton’ for EM, ‘demons’ for SSD.
The length of the maximum displacement vector of the update displacement field at each iteration.
Parameter of the scale-space smoothing kernel. For example, the std. dev. of the kernel will be factor*(2^i) in the isotropic case where i = 0, 1, …, n_scales is the scale.
The optimization will stop when the estimated derivative of the energy profile w.r.t. time falls below this threshold.
The number of iterations to be performed by the displacement field inversion algorithm.
The displacement field inversion algorithm will stop iterating when the inversion error falls below this threshold.
Directory to save the transformed files (default ‘’).
Name of the warped file. (default ‘warped_moved.nii.gz’).
Name of the file to save the static image after applying the inverse mapping (default ‘inv_static.nii.gz’).
Name of the file to save the diffeomorphic map. (default ‘displacement_field.nii.gz’)
TranslationTransform3D
class dipy.workflows.align.TranslationTransform3D
Bases: dipy.align.transforms.Transform
Methods
- Parameter values corresponding to the identity transform
- Jacobian function of this transform
- Matrix representation of this transform with the given parameters
- get_dim
- get_number_of_parameters
Workflow
class dipy.workflows.align.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: object
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator()
Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
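A minimal sketch of a custom workflow (illustrative only; AppendTextFlow and its parameters are hypothetical names, not part of the DIPY API):
>>> from dipy.workflows.workflow import Workflow
>>> class AppendTextFlow(Workflow):
...     @classmethod
...     def get_short_name(cls):
...         return 'append'
...     def run(self, input_files, out_dir='', out_file='append.txt'):
...         # get_io_iterator pairs each globbed input with its output path
...         for in_path, out_path in self.get_io_iterator():
...             pass  # process in_path and write out_path here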
dipy.workflows.align.check_dimensions(static, moving)
Check the dimensions of the input images.
the image to be used as reference during optimization.
the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix
dipy.workflows.align.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True)
Load data and other information from a nifti file.
Full path to a nifti file.
Whether to return the nibabel nifti img object. Default: False
Whether to return the nifti header zooms. Default: False
Whether to return the nifti header aff2axcodes. Default: False
convert nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, just turn this option to False (default: True)
See also
load_nifti_data
dipy.workflows.align.reslice(data, affine, zooms, new_zooms, order=1, mode='constant', cval=0, num_processes=1)
Reslice data with new voxel resolution defined by new_zooms.
3d volume or 4d volume with datasets
mapping from voxel coordinates to world coordinates
voxel size for (i,j,k) dimensions
new voxel size for (i,j,k) after resampling
order of interpolation for resampling/reslicing: 0 is nearest-neighbour interpolation, 1 is trilinear, and so on. If you don’t want any smoothing, 0 is the option you need.
Points outside the boundaries of the input are filled according to the given mode.
Value used for points outside the boundaries of the input if mode=’constant’.
Split the calculation over a pool of child processes. This only applies to 4D data arrays. If a positive integer, it defines the size of the multiprocessing pool that will be used. If 0, the pool size will equal the number of available cores.
datasets resampled into isotropic voxel size
new affine for the resampled image
Examples
>>> from dipy.io.image import load_nifti
>>> from dipy.align.reslice import reslice
>>> from dipy.data import get_fnames
>>> f_name = get_fnames('aniso_vox')
>>> data, affine, zooms = load_nifti(f_name, return_voxsize=True)
>>> data.shape == (58, 58, 24)
True
>>> zooms
(4.0, 4.0, 5.0)
>>> new_zooms = (3.,3.,3.)
>>> new_zooms
(3.0, 3.0, 3.0)
>>> data2, affine2 = reslice(data, affine, zooms, new_zooms)
>>> data2.shape == (77, 77, 40)
True
dipy.workflows.align.save_nifti(fname, data, affine, hdr=None)
Save a data array into a nifti file.
The full path to the file to be saved.
The array with the data to save.
The affine transform associated with the file.
May contain additional information to store in the file header.
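A minimal round-trip sketch (illustrative; the file paths are placeholders):
>>> from dipy.io.image import load_nifti, save_nifti
>>> data, affine = load_nifti('dwi.nii.gz')  # doctest: +SKIP
>>> save_nifti('dwi_copy.nii.gz', data, affine)  # doctest: +SKIP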
dipy.workflows.align.slr_with_qbx(static, moving, x0='affine', rm_small_clusters=50, maxiter=100, select_random=None, verbose=False, greater_than=50, less_than=250, qbx_thr=[40, 30, 20, 15], nb_pts=20, progressive=True, rng=None, num_threads=None)
Utility function for registering large tractograms.
For efficiency, we apply the registration on cluster centroids and remove small clusters.
rigid, similarity or affine transformation model (default affine)
Remove clusters that have less than rm_small_clusters (default 50)
If not None, selects a random number of streamlines on which to apply clustering. Default None.
If True, logs information about optimization. Default: False
Keep streamlines that have length greater than this value (default 50)
Keep streamlines that have length less than this value (default 250)
Thresholds for QuickBundlesX (default [40, 30, 20, 15])
Number of points for discretizing each streamline (default 20)
(default True)
If None, creates a RandomState inside the function.
Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.
Notes
The order of operations is the following. First short or long streamlines are removed. Second, the tractogram or a random selection of the tractogram is clustered with QuickBundles. Then SLR [Garyfallidis15] is applied.
References
[Garyfallidis15] Garyfallidis et al., “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015.
[Garyfallidis14] Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014.
[Garyfallidis17] Garyfallidis et al., “Recognition of white matter bundles using local and global streamline-based registration and clustering”, NeuroImage, 2017.
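A minimal usage sketch (illustrative; static and moving are assumed to be previously loaded streamline sequences, and the four return values follow the order documented in the DIPY source):
>>> from dipy.align.streamlinear import slr_with_qbx
>>> moved, transform, qb_centroids1, qb_centroids2 = slr_with_qbx(
...     static, moving, x0='affine', rm_small_clusters=50)  # doctest: +SKIP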
dipy.workflows.align.transform_centers_of_mass(static, static_grid2world, moving, moving_grid2world)
Transformation to align the center of mass of the input images.
static image
the voxel-to-space transformation of the static image
moving image
the voxel-to-space transformation of the moving image
the affine transformation (translation only, in this case) aligning the center of mass of the moving image towards the one of the static image
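A minimal usage sketch (illustrative; the images and their grid2world affines are assumed to be pre-loaded):
>>> from dipy.align.imaffine import transform_centers_of_mass
>>> c_of_mass = transform_centers_of_mass(static, static_grid2world,
...                                       moving, moving_grid2world)  # doctest: +SKIP
>>> transformed = c_of_mass.transform(moving)  # doctest: +SKIP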
dipy.workflows.align.transform_streamlines(streamlines, mat, in_place=False)
Apply affine transformation to streamlines
Streamlines object
transformation matrix
If True, the data is changed in place. Be careful: this modifies the input streamlines.
Sequence of transformed 2D ndarrays with shape[-1] == 3
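A minimal usage sketch (illustrative; streamlines is assumed to be a pre-loaded streamline sequence):
>>> import numpy as np
>>> from dipy.tracking.streamline import transform_streamlines
>>> mat = np.eye(4)
>>> mat[:3, 3] = [10, 0, 0]  # translate by 10 mm along x
>>> moved = transform_streamlines(streamlines, mat)  # doctest: +SKIP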
IntrospectiveArgumentParser
class dipy.workflows.base.IntrospectiveArgumentParser(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.RawTextHelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='resolve', add_help=True)
Bases: argparse.ArgumentParser
Methods
- Take an array of workflow objects and use introspection to extract the parameters, types and docstrings of their run method.
- Take a workflow object and use introspection to extract the parameters, types and docstrings of its run method.
- Prints a usage message incorporating the message to stderr and exits.
- Returns the parsed arguments as a dictionary that will be used as a workflow’s run method arguments.
- add_argument_group
- add_description
- add_epilogue
- add_mutually_exclusive_group
- convert_arg_line_to_args
- format_help
- get_default
- parse_intermixed_args
- parse_known_args
- parse_known_intermixed_args
- print_help
- show_argument
- update_argument
__init__(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.RawTextHelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='resolve', add_help=True)
Augmenting the argument parser to allow automatic creation of arguments from workflows
The name of the program (default: sys.argv[0])
A usage message (default: auto-generated from arguments)
A description of what the program does
Text following the argument descriptions
Parsers whose arguments should be copied into this one
HelpFormatter class for printing help messages
Characters that prefix optional arguments
Characters that prefix files containing additional arguments
The default value for all arguments
String indicating how to handle conflicts
Add a -h/--help option
add_sub_flow_args(sub_flows)
Take an array of workflow objects and use introspection to extract the parameters, types and docstrings of their run method. Only the optional input parameters are extracted for these as they are treated as sub workflows.
Workflows to inspect.
add_workflow(workflow)
Take a workflow object and use introspection to extract the parameters, types and docstrings of its run method. Then add these parameters to the current argparser’s own params to parse. If the workflow is of type combined_workflow, the optional input parameters of its sub workflows will also be added.
Workflow from which to infer parameters.
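A minimal usage sketch (illustrative; get_flow_args is assumed to be the method summarized above as returning the parsed arguments as a dictionary):
>>> from dipy.workflows.base import IntrospectiveArgumentParser
>>> from dipy.workflows.denoise import NLMeansFlow
>>> parser = IntrospectiveArgumentParser()
>>> flow = NLMeansFlow()
>>> parser.add_workflow(flow)  # doctest: +SKIP
>>> args = parser.get_flow_args()  # doctest: +SKIP
>>> flow.run(**args)  # doctest: +SKIP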
NumpyDocString
class dipy.workflows.base.NumpyDocString(docstring, config={})
Bases: object
CombinedWorkflow
class dipy.workflows.combined_workflow.CombinedWorkflow(output_strategy='append', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- Create an iterator for IO.
- Returns the sub flow’s optional arguments merged with those passed as params in kwargs.
- Return a short name for the workflow used to subdivide.
- Returns a list of tuples (sub flow name, sub flow run method, sub flow short name) to be used in the sub flow parameters extraction.
- Check if a file will be overwritten upon processing the inputs.
- Execute the workflow.
- Runs the sub flow with the optional parameters passed via the command line.
- Sets the self._optionals variable with all sub flow arguments that were passed in the commandline.
__init__(output_strategy='append', mix_names=False, force=False, skip=False)
Workflow that combines multiple workflows. The workflows combined together are referred to as sub flows in this class.
get_optionals(flow, **kwargs)
Returns the sub flow’s optional arguments merged with those passed as params in kwargs.
get_sub_runs()
Returns a list of tuples (sub flow name, sub flow run method, sub flow short name) to be used in the sub flow parameters extraction.
Workflow
class dipy.workflows.combined_workflow.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: object
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator()
Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
GibbsRingingFlow
class dipy.workflows.denoise.GibbsRingingFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Workflow for applying Gibbs Ringing method.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(input_files, slice_axis=2, n_points=3, num_threads=1, out_dir='', out_unring='dwi_unrig.nii.gz')
Workflow for applying Gibbs Ringing method.
Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
Data axis corresponding to the number of acquired slices. Default is set to the third axis (2). Could be (0, 1, or 2).
Number of neighbour points to access local TV (see note). Default is set to 3.
Number of threads. Only applies to 3D or 4D data arrays. If None then all available threads will be used. Otherwise, must be a positive integer. Default is set to 1.
Output directory (default input file directory)
Name of the resulting denoised volume (default: dwi_unrig.nii.gz)
References
Neto Henriques, R., 2018. Advanced Methods for Diffusion MRI Data Analysis and their Application to the Healthy Ageing Brain (Doctoral thesis). https://doi.org/10.17863/CAM.29356
Kellner E, Dhital B, Kiselev VG, Reisert M. Gibbs-ringing artifact removal based on local subvoxel-shifts. Magn Reson Med. 2016. doi: 10.1002/mrm.26054.
LPCAFlow
class dipy.workflows.denoise.LPCAFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Workflow wrapping LPCA denoising method.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(input_files, bvalues_files, bvectors_files, sigma=0, b0_threshold=50, bvecs_tol=0.01, patch_radius=2, pca_method='eig', tau_factor=2.3, out_dir='', out_denoised='dwi_lpca.nii.gz')
Workflow wrapping LPCA denoising method.
Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.
Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.
Standard deviation of the noise estimated from the data. Default 0: it means sigma value estimation with the Manjon2013 algorithm [3].
Threshold used to find b=0 directions (default 50)
Threshold used to check that norm(bvec) = 1 +/- bvecs_tol b-vectors are unit vectors (default 0.01)
The radius of the local patch to be taken around each voxel (in voxels). Default: 2 (denoise in blocks of 5x5x5 voxels).
Use either eigenvalue decomposition (‘eig’) or singular value decomposition (‘svd’) for principal component analysis. The default method is ‘eig’ which is faster. However, occasionally ‘svd’ might be more accurate.
Thresholding of PCA eigenvalues is done by nulling out eigenvalues that are smaller than tau = (tau_factor * sigma)^2.
tau_factor can be changed to adjust the relationship between the noise standard deviation and the threshold tau. If tau_factor is set to None, it will be automatically calculated using the Marcenko-Pastur distribution [2]. Default: 2.3 (according to [1]).
Output directory (default input file directory)
Name of the resulting denoised volume (default: dwi_lpca.nii.gz)
References
[1] Veraart J, Novikov DS, Christiaens D, Ades-aron B, Sijbers J, Fieremans E, 2016. Denoising of Diffusion MRI using random matrix theory. NeuroImage 142:394-406. doi: 10.1016/j.neuroimage.2016.08.016
[2] Veraart J, Fieremans E, Novikov DS. 2016. Diffusion MRI noise mapping using random matrix theory. Magnetic Resonance in Medicine. doi: 10.1002/mrm.26059.
[3] Manjon JV, Coupe P, Concha L, Buades A, Collins DL (2013) Diffusion Weighted Image Denoising Using Overcomplete Local PCA. PLoS ONE 8(9): e73021. https://doi.org/10.1371/journal.pone.0073021
MPPCAFlow
class dipy.workflows.denoise.MPPCAFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Workflow wrapping Marcenko-Pastur PCA denoising method.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(input_files, patch_radius=2, pca_method='eig', return_sigma=False, out_dir='', out_denoised='dwi_mppca.nii.gz', out_sigma='dwi_sigma.nii.gz')
Workflow wrapping Marcenko-Pastur PCA denoising method.
Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
The radius of the local patch to be taken around each voxel (in voxels). Default: 2 (denoise in blocks of 5x5x5 voxels).
Use either eigenvalue decomposition (‘eig’) or singular value decomposition (‘svd’) for principal component analysis. The default method is ‘eig’ which is faster. However, occasionally ‘svd’ might be more accurate.
If true, a noise standard deviation estimate based on the Marcenko-Pastur distribution is returned [2]. Default: False.
Output directory (default input file directory)
Name of the resulting denoised volume (default: dwi_mppca.nii.gz)
Name of the resulting sigma volume (default: dwi_sigma.nii.gz)
References
[1] Veraart J, Novikov DS, Christiaens D, Ades-aron B, Sijbers J, Fieremans E, 2016. Denoising of Diffusion MRI using random matrix theory. NeuroImage 142:394-406. doi: 10.1016/j.neuroimage.2016.08.016
[2] Veraart J, Fieremans E, Novikov DS. 2016. Diffusion MRI noise mapping using random matrix theory. Magnetic Resonance in Medicine. doi: 10.1002/mrm.26059.
NLMeansFlow
class dipy.workflows.denoise.NLMeansFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Workflow wrapping the nlmeans denoising method.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(input_files, sigma=0, patch_radius=1, block_radius=5, rician=True, out_dir='', out_denoised='dwi_nlmeans.nii.gz')
Workflow wrapping the nlmeans denoising method.
It applies nlmeans denoising on each file found by ‘globbing’ input_files and saves the results in a directory specified by out_dir.
Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
Sigma parameter to pass to the nlmeans algorithm (default: auto estimation).
The patch size is 2 x patch_radius + 1. Default is 1.
The block size is 2 x block_radius + 1. Default is 5.
If True the noise is estimated as Rician, otherwise Gaussian noise is assumed.
Output directory (default input file directory)
Name of the resulting denoised volume (default: dwi_nlmeans.nii.gz)
References
Descoteaux, Maxime and Wiest-Daesslé, Nicolas and Prima, Sylvain and Barillot, Christian and Deriche, Rachid. Impact of Rician Adapted Non-Local Means Filtering on HARDI, MICCAI 2008.
Workflow
class dipy.workflows.denoise.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: object
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator()
Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
dipy.workflows.denoise.estimate_sigma(arr, disable_background_masking=False, N=0)
Standard deviation estimation from local patches
The array to be estimated
If True, uses all voxels for the estimation, otherwise, only non-zeros voxels are used. Useful if the background is masked by the scanner.
Number of coils of the receiver array. Use N = 1 in case of a SENSE reconstruction (Philips scanners) or the number of coils for a GRAPPA reconstruction (Siemens and GE). Use 0 to disable the correction factor, as for example if the noise is Gaussian distributed. See [1] for more information.
standard deviation of the noise, one estimation per volume.
Notes
This function is the same as manually taking the standard deviation of the background and gives one value for the whole 3D array. It also includes the coil-dependent correction factor of Koay 2006 (see [1], equation 18) with theta = 0.
Since this function was introduced in [2] for T1 imaging, it is expected to perform ok on diffusion MRI data, but might oversmooth some regions and leave others un-denoised for spatially varying noise profiles. Consider using piesno() to estimate sigma instead if visual inaccuracies are apparent in the denoised result.
References
[1] Koay, C. G., & Basser, P. J. (2006). Analytically exact correction scheme for signal extraction from noisy magnitude MR signals. Journal of Magnetic Resonance, 179(2), 317-22.
[2] Coupe, P., Yger, P., Prima, S., Hellier, P., Kervrann, C., Barillot, C., 2008. An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images, IEEE Trans. Med. Imaging 27, 425-41.
dipy.workflows.denoise.gibbs_removal(vol, slice_axis=2, n_points=3, inplace=True, num_threads=1)
Suppresses Gibbs ringing artefacts of image volumes.
Matrix containing one volume (3D) or multiple (4D) volumes of images.
Data axis corresponding to the number of acquired slices. Default is set to the third axis.
Number of neighbour points to access local TV (see note). Default is set to 3.
If True, the input data is replaced with results. Otherwise, returns a new array. Default is set to True.
Number of threads. Only applies to 3D or 4D data arrays. If None then all available threads will be used. Otherwise, must be a positive integer. Default is set to 1.
Matrix containing one volume (3D) or multiple (4D) volumes of corrected images.
Notes
For a 4D matrix, the last dimension should always correspond to the number of diffusion gradient directions.
References
Please cite the following articles:
Neto Henriques, R., 2018. Advanced Methods for Diffusion MRI Data Analysis and their Application to the Healthy Ageing Brain (Doctoral thesis). https://doi.org/10.17863/CAM.29356
Kellner E, Dhital B, Kiselev VG, Reisert M. Gibbs-ringing artifact removal based on local subvoxel-shifts. Magn Reson Med. 2016. doi: 10.1002/mrm.26054.
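A minimal usage sketch (illustrative; data is assumed to be a pre-loaded 3D or 4D array):
>>> from dipy.denoise.gibbs import gibbs_removal
>>> corrected = gibbs_removal(data, slice_axis=2, n_points=3)  # doctest: +SKIP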
dipy.workflows.denoise.gradient_table(bvals, bvecs=None, big_delta=None, small_delta=None, b0_threshold=50, atol=0.01, btens=None)
A general function for creating diffusion MR gradients.
It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.
an array of shape (N,) or (1, N) or (N, 1) with the b-values.
a path for the file which contains an array like the above (1).
an array of shape (N, 4) or (4, N). Then this parameter is considered to be a b-table which contains both bvals and bvecs. In this case the next parameter is skipped.
a path for the file which contains an array like the one at (3).
an array of shape (N, 3) or (3, N) with the b-vectors.
a path for the file which contains an array like the previous.
acquisition pulse separation time in seconds (default None)
acquisition pulse duration time in seconds (default None)
All b-values with values less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.
All b-vectors need to be unit vectors up to a tolerance.
a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of strings of shape (N,), (N, 1), or (1, N) specifying encoding tensor shape for each volume separately. N corresponds to the number volumes in data. Options for elements in array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of shape (N,3,3) specifying the b-tensor of each volume exactly. N corresponds to the number volumes in data. No rotation or scaling is performed.
A GradientTable with all the gradient information.
Notes
Often b0s (b-values which correspond to images without diffusion weighting) have 0 values however in some cases the scanner cannot provide b0s of an exact 0 value and it gives a bit higher values e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
We assume that the minimum number of b-values is 7.
B-vectors should be unit vectors.
Examples
>>> import numpy as np
>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
... [1, 0, 0],
... [0, 1, 0],
... [0, 0, 1],
... [sq2, sq2, 0],
... [sq2, 0, sq2],
... [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs)
>>> gt.bvecs.shape == bvecs.shape
True
>>> gt = gradient_table(bvals, bvecs.T)
>>> gt.bvecs.shape == bvecs.T.shape
False
dipy.workflows.denoise.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True)
Load data and other information from a nifti file.
Full path to a nifti file.
Whether to return the nibabel nifti img object. Default: False
Whether to return the nifti header zooms. Default: False
Whether to return the nifti header aff2axcodes. Default: False
convert nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, just turn this option to False (default: True)
See also
load_nifti_data
dipy.workflows.denoise.localpca(arr, sigma, mask=None, patch_radius=2, pca_method='eig', tau_factor=2.3, out_dtype=None)
Performs local PCA denoising according to Manjon et al. [1].
Array of data to be denoised. The dimensions are (X, Y, Z, N), where N are the diffusion gradient directions.
Standard deviation of the noise estimated from the data.
A mask with voxels that are true inside the brain and false outside of it. The function denoises within the true part and returns zeros outside of those voxels.
The radius of the local patch to be taken around each voxel (in voxels). Default: 2 (denoise in blocks of 5x5x5 voxels).
Use either eigenvalue decomposition (eig) or singular value decomposition (svd) for principal component analysis. The default method is ‘eig’ which is faster. However, occasionally ‘svd’ might be more accurate.
Thresholding of PCA eigenvalues is done by nulling out eigenvalues that are smaller than tau = (tau_factor * sigma)^2.
tau_factor can be changed to adjust the relationship between the noise standard deviation and the threshold tau. If tau_factor is set to None, it will be automatically calculated using the Marcenko-Pastur distribution [2]. Default: 2.3 (according to [1]).
The dtype for the output array. Default: output has the same dtype as the input.
This is the denoised array of the same size as that of the input data, clipped to non-negative values
References
[1] Manjon JV, Coupe P, Concha L, Buades A, Collins DL (2013) Diffusion Weighted Image Denoising Using Overcomplete Local PCA. PLoS ONE 8(9): e73021. https://doi.org/10.1371/journal.pone.0073021
[2] Veraart J, Novikov DS, Christiaens D, Ades-aron B, Sijbers J, Fieremans E, 2016. Denoising of Diffusion MRI using random matrix theory. NeuroImage 142:394-406. doi: 10.1016/j.neuroimage.2016.08.016
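A minimal usage sketch (illustrative; data and gtab are assumed to be a pre-loaded 4D array and its gradient table):
>>> from dipy.denoise.localpca import localpca
>>> from dipy.denoise.pca_noise_estimate import pca_noise_estimate
>>> sigma = pca_noise_estimate(data, gtab, correct_bias=True, smooth=3)  # doctest: +SKIP
>>> denoised = localpca(data, sigma, patch_radius=2)  # doctest: +SKIP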
dipy.workflows.denoise.mppca(arr, mask=None, patch_radius=2, pca_method='eig', return_sigma=False, out_dtype=None)
Performs PCA-based denoising using the Marcenko-Pastur distribution [1].
Array of data to be denoised. The dimensions are (X, Y, Z, N), where N are the diffusion gradient directions.
A mask with voxels that are true inside the brain and false outside of it. The function denoises within the true part and returns zeros outside of those voxels.
The radius of the local patch to be taken around each voxel (in voxels). Default: 2 (denoise in blocks of 5x5x5 voxels).
Use either eigenvalue decomposition (eig) or singular value decomposition (svd) for principal component analysis. The default method is ‘eig’ which is faster. However, occasionally ‘svd’ might be more accurate.
If true, a noise standard deviation estimate based on the Marcenko-Pastur distribution is returned [2]. Default: False.
The dtype for the output array. Default: output has the same dtype as the input.
This is the denoised array of the same size as that of the input data, clipped to non-negative values
Estimate of the spatial varying standard deviation of the noise
References
[1] Veraart J, Novikov DS, Christiaens D, Ades-aron B, Sijbers J, Fieremans E, 2016. Denoising of Diffusion MRI using random matrix theory. NeuroImage 142:394-406. doi: 10.1016/j.neuroimage.2016.08.016
[2] Veraart J, Fieremans E, Novikov DS. 2016. Diffusion MRI noise mapping using random matrix theory. Magnetic Resonance in Medicine. doi: 10.1002/mrm.26059.
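A minimal usage sketch (illustrative; data is assumed to be a pre-loaded 4D array):
>>> from dipy.denoise.localpca import mppca
>>> denoised, sigma = mppca(data, patch_radius=2, return_sigma=True)  # doctest: +SKIP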
dipy.workflows.denoise.nlmeans(arr, sigma, mask=None, patch_radius=1, block_radius=5, rician=True, num_threads=None)
Non-local means for denoising 3D and 4D images
The array to be denoised
standard deviation of the noise estimated from the data
The patch size is 2 x patch_radius + 1. Default is 1.
The block size is 2 x block_radius + 1. Default is 5.
If True the noise is estimated as Rician, otherwise Gaussian noise is assumed.
Number of threads. If None (default) then all available threads will be used (all CPU cores).
The denoised arr, which has the same shape as arr.
References
Descoteaux, Maxime and Wiest-Daesslé, Nicolas and Prima, Sylvain and Barillot, Christian and Deriche, Rachid Impact of Rician Adapted Non-Local Means Filtering on HARDI, MICCAI 2008
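A minimal usage sketch (illustrative; data is assumed to be a pre-loaded array, and N=4 is a hypothetical coil count for the sigma estimate):
>>> from dipy.denoise.nlmeans import nlmeans
>>> from dipy.denoise.noise_estimate import estimate_sigma
>>> sigma = estimate_sigma(data, N=4)  # doctest: +SKIP
>>> denoised = nlmeans(data, sigma=sigma, patch_radius=1, block_radius=5, rician=True)  # doctest: +SKIP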
dipy.workflows.denoise.pca_noise_estimate()
PCA based local noise estimation.
the input dMRI data.
gradient information for the data gives us the bvals and bvecs of diffusion data, which is needed here to select between the noise estimation methods.
The radius of the local patch to be taken around each voxel (in voxels). Default: 1 (estimate noise in blocks of 3x3x3 voxels).
Whether to correct for bias due to Rician noise. This is an implementation of equation 8 in [1].
Radius of a Gaussian smoothing filter to apply to the noise estimate before returning. Default: 2.
The local noise standard deviation estimate.
References
[1] Manjon JV, Coupe P, Concha L, Buades A, Collins DL “Diffusion Weighted Image Denoising Using Overcomplete Local PCA”. PLoS ONE 8(9): e73021. doi:10.1371/journal.pone.0073021.
dipy.workflows.denoise.read_bvals_bvecs(fbvals, fbvecs)
Read b-values and b-vectors from disk.
Full path to file with b-values. None to not read bvals.
Full path of file with b-vectors. None to not read bvecs.
Notes
Files can be either ‘.bvals’/’.bvecs’ or ‘.txt’ or ‘.npy’ (containing arrays stored with the appropriate values).
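A minimal usage sketch (illustrative; the file paths are placeholders):
>>> from dipy.io.gradients import read_bvals_bvecs
>>> bvals, bvecs = read_bvals_bvecs('dwi.bval', 'dwi.bvec')  # doctest: +SKIP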
dipy.workflows.denoise.save_nifti(fname, data, affine, hdr=None)
Save a data array into a nifti file.
The full path to the file to be saved.
The array with the data to save.
The affine transform associated with the file.
May contain additional information to store in the file header.
NumpyDocString
class dipy.workflows.docstring_parser.NumpyDocString(docstring, config={})
Bases: object
Reader
class dipy.workflows.docstring_parser.Reader(data)
Bases: object
A line-based string reader.
Methods
- eof
- is_empty
- peek
- read
- read_to_condition
- read_to_next_empty_line
- read_to_next_unindented_line
- reset
- seek_next_non_empty_line
IntrospectiveArgumentParser
class dipy.workflows.flow_runner.IntrospectiveArgumentParser(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.RawTextHelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='resolve', add_help=True)
Bases: argparse.ArgumentParser
Methods
- Take an array of workflow objects and use introspection to extract the parameters, types and docstrings of their run method.
- Take a workflow object and use introspection to extract the parameters, types and docstrings of its run method.
- Prints a usage message incorporating the message to stderr and exits.
- Returns the parsed arguments as a dictionary that will be used as a workflow’s run method arguments.
- add_argument_group
- add_description
- add_epilogue
- add_mutually_exclusive_group
- convert_arg_line_to_args
- format_help
- get_default
- parse_intermixed_args
- parse_known_args
- parse_known_intermixed_args
- print_help
- show_argument
- update_argument
__init__(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.RawTextHelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='resolve', add_help=True)
Augmenting the argument parser to allow automatic creation of arguments from workflows
The name of the program (default: sys.argv[0])
A usage message (default: auto-generated from arguments)
A description of what the program does
Text following the argument descriptions
Parsers whose arguments should be copied into this one
HelpFormatter class for printing help messages
Characters that prefix optional arguments
Characters that prefix files containing additional arguments
The default value for all arguments
String indicating how to handle conflicts
Add a -h/-help option
add_sub_flow_args(sub_flows)
Take an array of workflow objects and use introspection to extract the parameters, types and docstrings of their run method. Only the optional input parameters are extracted for these as they are treated as sub workflows.
Workflows to inspect.
add_workflow(workflow)
Take a workflow object and use introspection to extract the parameters, types and docstrings of its run method. Then add these parameters to the current argparser’s own params to parse. If the workflow is of type combined_workflow, the optional input parameters of its sub workflows will also be added.
Workflow from which to infer parameters.
FetchFlow
class dipy.workflows.io.FetchFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- Gets available dataset and function names.
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Load / reload an external module.
- Check if a file will be overwritten upon processing the inputs.
- Download files to folder and check their md5 checksums.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_fetcher_datanames()
Gets available dataset and function names.
Available dataset and function names.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
load_module(module_path)
Load / reload an external module.
the path to the module relative to the main script
run(data_names, out_dir='')
Download files to folder and check their md5 checksums.
To see all available datasets, please type “list” in data_names.
Any number of Nifti1, bvals or bvecs files.
Output directory. Default: dipy home folder (~/.dipy)
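A minimal usage sketch (illustrative; passing “list” prints all available dataset names, and 'stanford_hardi' is one of the fetchable datasets):
>>> from dipy.workflows.io import FetchFlow
>>> FetchFlow().run(['list'])  # doctest: +SKIP
>>> FetchFlow().run(['stanford_hardi'])  # doctest: +SKIP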
IoInfoFlow
class dipy.workflows.io.IoInfoFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Provides useful information about different files used in medical imaging.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(input_files, b0_threshold=50, bvecs_tol=0.01, bshell_thr=100)
Provides useful information about different files used in medical imaging. Any number of input files can be provided. The program identifies the type of file by its extension.
Any number of Nifti1, bvals or bvecs files.
(default 50)
Threshold used to check that norm(bvec) = 1 +/- bvecs_tol b-vectors are unit vectors (default 0.01)
Threshold for distinguishing b-values in different shells (default 100)
SplitFlow
class dipy.workflows.io.SplitFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Splits the input 4D file and extracts the required 3D volume.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(input_files, vol_idx=0, out_dir='', out_split='split.nii.gz')
Splits the input 4D file and extracts the required 3D volume.
Any number of Nifti1 files
(default 0)
Output directory. Default: dipy home folder (~/.dipy)
Name of the resulting split volume (default: split.nii.gz)
Workflow
class dipy.workflows.io.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: object
Methods
- Create an iterator for IO.
- Return a short name for the workflow used to subdivide.
- Return no sub runs since this is a simple workflow.
- Check if a file will be overwritten upon processing the inputs.
- Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator()
Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
dipy.workflows.io.
getfullargspec
(func)Get the names and default values of a callable object’s parameters.
A tuple of seven things is returned: (args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations). ‘args’ is a list of the parameter names. ‘varargs’ and ‘varkw’ are the names of the * and ** parameters or None. ‘defaults’ is an n-tuple of the default values of the last n parameters. ‘kwonlyargs’ is a list of keyword-only parameter names. ‘kwonlydefaults’ is a dictionary mapping names from kwonlyargs to defaults. ‘annotations’ is a dictionary mapping parameter names to annotations.
The “self” parameter is always reported, even for bound methods. Wrapper chains defined by __wrapped__ are not unwrapped automatically.
dipy.workflows.io.
isfunction
(object)Return true if the object is a user-defined function.
__doc__: documentation string
__name__: name with which this function was defined
__code__: code object containing compiled function bytecode
__defaults__: tuple of any default values for arguments
__globals__: global namespace in which this function was defined
__annotations__: dict of parameter annotations
__kwdefaults__: dict of keyword-only parameters with defaults
dipy.workflows.io.
load_nifti
(fname, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True)Load data and other information from a nifti file.
Full path to a nifti file.
Whether to return the nibabel nifti img object. Default: False
Whether to return the nifti header zooms. Default: False
Whether to return the nifti header aff2axcodes. Default: False
convert nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, just turn this option to False (default: True)
See also
load_nifti_data
dipy.workflows.io.
save_nifti
(fname, data, affine, hdr=None)Save a data array into a nifti file.
The full path to the file to be saved.
The array with the data to save.
The affine transform associated with the file.
May contain additional information to store in the file header.
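Examples
A minimal sketch of a load/save round trip with these two functions; the paths are placeholders.
>>> from dipy.io.image import load_nifti, save_nifti
>>> data, affine = load_nifti('dwi.nii.gz')
>>> save_nifti('dwi_copy.nii.gz', data, affine)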
MaskFlow
dipy.workflows.mask.
MaskFlow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Workflow for creating a binary mask.
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run
(input_files, lb, ub=inf, out_dir='', out_mask='mask.nii.gz')Workflow for creating a binary mask
Path to image to be masked.
Lower bound value.
Upper bound value (default Inf)
Output directory (default input file directory)
Name of the masked file (default ‘mask.nii.gz’)
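Examples
A minimal sketch that keeps voxels with intensity of at least 50; the input path is a placeholder.
>>> from dipy.workflows.mask import MaskFlow
>>> mask_flow = MaskFlow()
>>> mask_flow.run(['b0.nii.gz'], lb=50, out_dir='out')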
Workflow
dipy.workflows.mask.
Workflow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: object
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Execute the workflow.
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator
()Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
dipy.workflows.mask.
load_nifti
(fname, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True)Load data and other information from a nifti file.
Full path to a nifti file.
Whether to return the nibabel nifti img object. Default: False
Whether to return the nifti header zooms. Default: False
Whether to return the nifti header aff2axcodes. Default: False
convert nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, just turn this option to False (default: True)
See also
load_nifti_data
dipy.workflows.mask.
save_nifti
(fname, data, affine, hdr=None)Save a data array into a nifti file.
The full path to the file to be saved.
The array with the data to save.
The affine transform associated with the file.
May contain additional information to store in the file header.
IOIterator
dipy.workflows.multi_io.
IOIterator
(output_strategy='absolute', mix_names=False)Bases: object
Create output filenames that work nicely with multiple input files from multiple directories (processing multiple subjects with one command)
Use information from input files, out_dir and out_fnames to generate correct outputs which can come from long lists of multiple or single inputs.
Methods
create_directories
create_outputs
file_existence_check
set_inputs
set_out_dir
set_out_fnames
set_output_keys
dipy.workflows.multi_io.
connect_output_paths
(inputs, out_dir, out_files, output_strategy='absolute', mix_names=True)Generate a list of output files paths based on input files and output strategies.
List of input paths.
The output directory.
List of output files.
‘append’: Add out_dir to the path of the input. ‘prepend’: Add the input path directory tree to out_dir. ‘absolute’: Put directly in out_dir.
Whether or not to prepend a string composed of a mix of the input names to the final output name.
dipy.workflows.multi_io.
glob
(pathname, *, recursive=False)Return a list of paths matching a pathname pattern.
The pattern may contain simple shell-style wildcards a la fnmatch. However, unlike fnmatch, filenames starting with a dot are special cases that are not matched by ‘*’ and ‘?’ patterns.
If recursive is true, the pattern ‘**’ will match any files and zero or more directories and subdirectories.
dipy.workflows.multi_io.
io_iterator
(inputs, out_dir, fnames, output_strategy='absolute', mix_names=False, out_keys=None)Create an IOIterator from the parameters.
List of input files.
Output directory.
File names of all outputs to be created.
Controls the behavior of the IOIterator for output paths.
Whether or not to append a mix of input names at the beginning.
dipy.workflows.multi_io.
io_iterator_
(frame, fnc, output_strategy='absolute', mix_names=False)Create an IOIterator using introspection.
Contains the info about the current local variables values.
The function to inspect
Controls the behavior of the IOIterator for output paths.
Whether or not to append a mix of input names at the beginning.
ConstrainedSphericalDeconvModel
dipy.workflows.reconst.
ConstrainedSphericalDeconvModel
(gtab, response, reg_sphere=None, sh_order=8, lambda_=1, tau=0.1, convergence=50)Bases: dipy.reconst.shm.SphHarmModel
Methods
cache_clear: Clear the cache.
cache_get: Retrieve a value from the cache.
cache_set: Store a value in the cache.
fit: Fit method for every voxel in data.
predict: Compute a signal prediction given spherical harmonic coefficients for the provided GradientTable class instance.
sampling_matrix: The matrix needed to sample ODFs from coefficients of the model.
__init__
(gtab, response, reg_sphere=None, sh_order=8, lambda_=1, tau=0.1, convergence=50)Constrained Spherical Deconvolution (CSD) [1].
Spherical deconvolution computes a fiber orientation distribution (FOD), also called fiber ODF (fODF) [2], as opposed to a diffusion ODF as the QballModel or the CsaOdfModel. This results in a sharper angular profile with better angular resolution, which makes it the preferred object for later deterministic and probabilistic tractography [3].
A sharp fODF is obtained because a single fiber response function is injected as a priori knowledge. The response function is often data-driven and is thus provided as input to the ConstrainedSphericalDeconvModel. It will be used as deconvolution kernel, as described in [1].
A tuple with two elements. The first is the eigen-values as an (3,) ndarray and the second is the signal value for the response function without diffusion weighting (i.e. S0). This is to be able to generate a single fiber synthetic signal. The response function will be used as deconvolution kernel ([1]).
sphere used to build the regularization B matrix. Default: ‘symmetric362’.
maximal spherical harmonics order. Default: 8
weight given to the constrained-positivity regularization part of the deconvolution equation (see [1]). Default: 1
threshold controlling the amplitude below which the corresponding fODF is assumed to be zero. Ideally, tau should be set to zero. However, to improve the stability of the algorithm, tau is set to tau*100 % of the mean fODF amplitude (here, 10% by default) (see [1]). Default: 0.1
Maximum number of iterations to allow the deconvolution to converge.
References
Tournier, J.D., et al. NeuroImage 2007. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution
Descoteaux, M., et al. IEEE TMI 2009. Deterministic and Probabilistic Tractography Based on Complex Fibre Orientation Distributions
Côté, M-A., et al. Medical Image Analysis 2013. Tractometer: Towards validation of tractography pipelines
Tournier, J.D, et al. Imaging Systems and Technology 2012. MRtrix: Diffusion Tractography in Crossing Fiber Regions
predict
(sh_coeff, gtab=None, S0=1.0)Compute a signal prediction given spherical harmonic coefficients for the provided GradientTable class instance.
The spherical harmonic representation of the FOD from which to make the signal prediction.
The gradients for which the signal will be predicted. Use the model’s gradient table by default.
The non diffusion-weighted signal value.
The predicted signal.
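Examples
A minimal sketch of fitting the model, assuming placeholder file paths; the response is estimated with auto_response_ssst, documented further below.
>>> from dipy.core.gradients import gradient_table
>>> from dipy.io.image import load_nifti
>>> from dipy.io.gradients import read_bvals_bvecs
>>> from dipy.reconst.csdeconv import (ConstrainedSphericalDeconvModel,
...                                    auto_response_ssst)
>>> data, affine = load_nifti('dwi.nii.gz')
>>> bvals, bvecs = read_bvals_bvecs('dwi.bval', 'dwi.bvec')
>>> gtab = gradient_table(bvals, bvecs)
>>> response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)
>>> csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order=8)
>>> csd_fit = csd_model.fit(data)
>>> fodf_sh = csd_fit.shm_coeff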
CsaOdfModel
dipy.workflows.reconst.
CsaOdfModel
(gtab, sh_order, smooth=0.006, min_signal=1e-05, assume_normed=False)Bases: dipy.reconst.shm.QballBaseModel
Implementation of Constant Solid Angle reconstruction method.
References
Aganj, I., et al. 2009. ODF Reconstruction in Q-Ball Imaging With Solid Angle Consideration.
Methods
cache_clear: Clear the cache.
cache_get: Retrieve a value from the cache.
cache_set: Store a value in the cache.
fit: Fits the model to diffusion data and returns the model fit.
sampling_matrix: The matrix needed to sample ODFs from coefficients of the model.
__init__
(gtab, sh_order, smooth=0.006, min_signal=1e-05, assume_normed=False)Creates a model that can be used to fit or sample diffusion data
Diffusion gradients used to acquire data
the spherical harmonic order of the model
The regularization parameter of the model
During fitting, all signal values less than min_signal are clipped to min_signal. This is done primarily to avoid values less than or equal to zero when taking logs.
If True, clipping and normalization of the data with respect to the mean B0 signal are skipped during model fitting. This is an advanced feature and should be used with care.
See also
normalize_data
DiffusionKurtosisModel
dipy.workflows.reconst.
DiffusionKurtosisModel
(gtab, fit_method='WLS', *args, **kwargs)Bases: dipy.reconst.base.ReconstModel
Class for the Diffusion Kurtosis Model
Methods
fit: Fit method of the DKI model class.
predict: Predict a signal for this DKI model class instance given parameters.
__init__
(gtab, fit_method='WLS', *args, **kwargs)Diffusion Kurtosis Tensor Model [1]
str can be one of the following: ‘OLS’ or ‘ULLS’ for ordinary least squares (dki.ols_fit_dki); ‘WLS’ or ‘UWLLS’ for weighted ordinary least squares (dki.wls_fit_dki). Additional args and kwargs are passed to the fit_method; see dki.ols_fit_dki and dki.wls_fit_dki for details.
References
Tabesh, A., Jensen, J.H., Ardekani, B.A., Helpern, J.A., 2011. Estimation of tensors and tensor-derived measures in diffusional kurtosis imaging. Magn Reson Med. 65(3), 823-836
fit
(data, mask=None)Fit method of the DKI model class
The measured signal from one voxel.
A boolean array used to mark the coordinates in the data that should be analyzed; it has the shape data.shape[:-1]
predict
(dki_params, S0=1.0)Predict a signal for this DKI model class instance given parameters.
All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows:
Three diffusion tensor’s eigenvalues
Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector
Fifteen elements of the kurtosis tensor
The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1
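Examples
A minimal sketch of fitting the model to a multi-shell dataset at placeholder paths; mk(0, 3) clips mean kurtosis to the [0, 3] range.
>>> from dipy.core.gradients import gradient_table
>>> from dipy.io.image import load_nifti
>>> from dipy.io.gradients import read_bvals_bvecs
>>> from dipy.reconst.dki import DiffusionKurtosisModel
>>> data, affine = load_nifti('dwi_multishell.nii.gz')
>>> bvals, bvecs = read_bvals_bvecs('dwi.bval', 'dwi.bvec')
>>> gtab = gradient_table(bvals, bvecs)
>>> dki_model = DiffusionKurtosisModel(gtab, fit_method='WLS')
>>> dki_fit = dki_model.fit(data)
>>> mk = dki_fit.mk(0, 3)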
ReconstCSAFlow
dipy.workflows.reconst.
ReconstCSAFlow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Constant Solid Angle.
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run
(input_files, bvalues_files, bvectors_files, mask_files, sh_order=6, odf_to_sh_order=8, b0_threshold=50.0, bvecs_tol=0.01, extract_pam_values=False, parallel=False, nbr_processes=None, out_dir='', out_pam='peaks.pam5', out_shm='shm.nii.gz', out_peaks_dir='peaks_dirs.nii.gz', out_peaks_values='peaks_values.nii.gz', out_peaks_indices='peaks_indices.nii.gz', out_gfa='gfa.nii.gz')Constant Solid Angle.
Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.
Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.
Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)
Spherical harmonics order (default 6) used in the CSA fit.
Spherical harmonics order used for peak_from_model to compress the ODF to spherical harmonics coefficients (default 8)
Threshold used to find b=0 directions
Threshold used so that norm(bvec)=1 (default 0.01)
Whether or not to save pam volumes as single nifti files.
Whether to use parallelization in peak-finding during the calibration procedure. Default: False
If parallel is True, the number of subprocesses to use (default multiprocessing.cpu_count()).
Output directory (default input file directory)
Name of the peaks volume to be saved (default ‘peaks.pam5’)
Name of the spherical harmonics volume to be saved (default ‘shm.nii.gz’)
Name of the peaks directions volume to be saved (default ‘peaks_dirs.nii.gz’)
Name of the peaks values volume to be saved (default ‘peaks_values.nii.gz’)
Name of the peaks indices volume to be saved (default ‘peaks_indices.nii.gz’)
Name of the generalized FA volume to be saved (default ‘gfa.nii.gz’)
References
Aganj, I., et al. 2009. ODF Reconstruction in Q-Ball Imaging with Solid Angle Consideration.
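Examples
A minimal sketch of running the flow; the wildcard paths are placeholders illustrating multi-subject processing.
>>> from dipy.workflows.reconst import ReconstCSAFlow
>>> csa_flow = ReconstCSAFlow()
>>> csa_flow.run('subj*/dwi.nii.gz', 'subj*/dwi.bval', 'subj*/dwi.bvec',
...              'subj*/mask.nii.gz', sh_order=6, out_dir='out')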
ReconstCSDFlow
dipy.workflows.reconst.
ReconstCSDFlow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Constrained spherical deconvolution.
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run
(input_files, bvalues_files, bvectors_files, mask_files, b0_threshold=50.0, bvecs_tol=0.01, roi_center=None, roi_radii=10, fa_thr=0.7, frf=None, extract_pam_values=False, sh_order=8, odf_to_sh_order=8, parallel=False, nbr_processes=None, out_dir='', out_pam='peaks.pam5', out_shm='shm.nii.gz', out_peaks_dir='peaks_dirs.nii.gz', out_peaks_values='peaks_values.nii.gz', out_peaks_indices='peaks_indices.nii.gz', out_gfa='gfa.nii.gz')Constrained spherical deconvolution
Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.
Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.
Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)
Threshold used to find b=0 directions
Bvecs should be unit vectors (default: 0.01)
Center of ROI in data. If center is None, it is assumed that it is the center of the volume with shape data.shape[:3] (default None)
radii of cuboid ROI in voxels (default 10)
FA threshold for calculating the response function (default 0.7)
Fiber response function can, for example, be input as 15 4 4 (from the command line) or [15, 4, 4] from a Python script, to be converted to float and multiplied by 10**-4. If None, the fiber response function will be computed automatically (default: None).
Whether or not to save pam volumes as single nifti files.
Spherical harmonics order (default 8) used in the CSD fit.
Spherical harmonics order used for peak_from_model to compress the ODF to spherical harmonics coefficients (default 8)
Whether to use parallelization in peak-finding during the calibration procedure. Default: False
If parallel is True, the number of subprocesses to use (default multiprocessing.cpu_count()).
Output directory (default input file directory)
Name of the peaks volume to be saved (default ‘peaks.pam5’)
Name of the spherical harmonics volume to be saved (default ‘shm.nii.gz’)
Name of the peaks directions volume to be saved (default ‘peaks_dirs.nii.gz’)
Name of the peaks values volume to be saved (default ‘peaks_values.nii.gz’)
Name of the peaks indices volume to be saved (default ‘peaks_indices.nii.gz’)
Name of the generalized FA volume to be saved (default ‘gfa.nii.gz’)
References
Tournier, J.D., et al. NeuroImage 2007. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution.
ReconstDkiFlow
dipy.workflows.reconst.
ReconstDkiFlow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Workflow for Diffusion Kurtosis reconstruction and for computing DKI metrics.
get_dki_model
get_fitted_tensor
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run
(input_files, bvalues_files, bvectors_files, mask_files, b0_threshold=50.0, save_metrics=[], out_dir='', out_dt_tensor='dti_tensors.nii.gz', out_fa='fa.nii.gz', out_ga='ga.nii.gz', out_rgb='rgb.nii.gz', out_md='md.nii.gz', out_ad='ad.nii.gz', out_rd='rd.nii.gz', out_mode='mode.nii.gz', out_evec='evecs.nii.gz', out_eval='evals.nii.gz', out_dk_tensor='dki_tensors.nii.gz', out_mk='mk.nii.gz', out_ak='ak.nii.gz', out_rk='rk.nii.gz')Workflow for Diffusion Kurtosis reconstruction and for computing DKI metrics. Performs a DKI reconstruction on the files by ‘globbing’ input_files and saves the DKI metrics in a directory specified by out_dir.
Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.
Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.
Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)
Threshold used to find b=0 directions (default 50.0)
List of metrics to save. Possible values: fa, ga, rgb, md, ad, rd, mode, tensor, evec, eval (default [] (all))
Output directory (default input file directory)
Name of the diffusion tensor volume to be saved (default: ‘dti_tensors.nii.gz’)
Name of the kurtosis tensor volume to be saved (default ‘dki_tensors.nii.gz’)
Name of the fractional anisotropy volume to be saved (default ‘fa.nii.gz’)
Name of the geodesic anisotropy volume to be saved (default ‘ga.nii.gz’)
Name of the color fa volume to be saved (default ‘rgb.nii.gz’)
Name of the mean diffusivity volume to be saved (default ‘md.nii.gz’)
Name of the axial diffusivity volume to be saved (default ‘ad.nii.gz’)
Name of the radial diffusivity volume to be saved (default ‘rd.nii.gz’)
Name of the mode volume to be saved (default ‘mode.nii.gz’)
Name of the eigenvectors volume to be saved (default ‘evecs.nii.gz’)
Name of the eigenvalues to be saved (default ‘evals.nii.gz’)
Name of the mean kurtosis to be saved (default: ‘mk.nii.gz’)
Name of the axial kurtosis to be saved (default: ‘ak.nii.gz’)
Name of the radial kurtosis to be saved (default: ‘rk.nii.gz’)
References
Tabesh, A., Jensen, J.H., Ardekani, B.A., Helpern, J.A., 2011. Estimation of tensors and tensor-derived measures in diffusional kurtosis imaging. Magn Reson Med. 65(3), 823-836
Jensen, Jens H., Joseph A. Helpern, Anita Ramani, Hanzhang Lu, and Kyle Kaczynski. 2005. Diffusional Kurtosis Imaging: The Quantification of Non-Gaussian Water Diffusion by Means of Magnetic Resonance Imaging. MRM 53 (6):1432-40.
ReconstDtiFlow
dipy.workflows.reconst.
ReconstDtiFlow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Workflow for tensor reconstruction and for computing DTI metrics.
get_fitted_tensor
get_tensor_model
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run
(input_files, bvalues_files, bvectors_files, mask_files, b0_threshold=50, bvecs_tol=0.01, save_metrics=[], out_dir='', out_tensor='tensors.nii.gz', out_fa='fa.nii.gz', out_ga='ga.nii.gz', out_rgb='rgb.nii.gz', out_md='md.nii.gz', out_ad='ad.nii.gz', out_rd='rd.nii.gz', out_mode='mode.nii.gz', out_evec='evecs.nii.gz', out_eval='evals.nii.gz', nifti_tensor=True)Workflow for tensor reconstruction and for computing DTI metrics using Weighted Least-Squares. Performs a tensor reconstruction on the files by ‘globbing’ input_files and saves the DTI metrics in a directory specified by out_dir.
Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.
Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.
Path to the input masks. This path may contain wildcards to use multiple masks at once.
Threshold used to find b=0 directions (default 50)
Threshold used to check that norm(bvec) = 1 +/- bvecs_tol, i.e. that the b-vectors are unit vectors (default 0.01)
List of metrics to save. Possible values: fa, ga, rgb, md, ad, rd, mode, tensor, evec, eval (default [] (all))
Output directory (default input file directory)
Name of the tensors volume to be saved (default ‘tensors.nii.gz’). Per default, this will be saved following the nifti standard: with the tensor elements as Dxx, Dxy, Dyy, Dxz, Dyz, Dzz on the last (5th) dimension of the volume (shape: (i, j, k, 1, 6)). If nifti_tensor is False, this will be saved in an alternate format that is used by other software (e.g., FSL): a 4-dimensional volume (shape (i, j, k, 6)) with Dxx, Dxy, Dxz, Dyy, Dyz, Dzz on the last dimension.
Name of the fractional anisotropy volume to be saved (default ‘fa.nii.gz’)
Name of the geodesic anisotropy volume to be saved (default ‘ga.nii.gz’)
Name of the color fa volume to be saved (default ‘rgb.nii.gz’)
Name of the mean diffusivity volume to be saved (default ‘md.nii.gz’)
Name of the axial diffusivity volume to be saved (default ‘ad.nii.gz’)
Name of the radial diffusivity volume to be saved (default ‘rd.nii.gz’)
Name of the mode volume to be saved (default ‘mode.nii.gz’)
Name of the eigenvectors volume to be saved (default ‘evecs.nii.gz’)
Name of the eigenvalues to be saved (default ‘evals.nii.gz’)
Whether the tensor is saved in the standard Nifti format or in an alternate format that is used by other software (e.g., FSL): a 4-dimensional volume (shape (i, j, k, 6)) with Dxx, Dxy, Dxz, Dyy, Dyz, Dzz on the last dimension. Default: True
References
Basser, P.J., Mattiello, J., LeBihan, D., 1994. Estimation of the effective self-diffusion tensor from the NMR spin echo. J Magn Reson B 103, 247-254.
Basser, P., Pierpaoli, C., 1996. Microstructural and physiological features of tissues elucidated by quantitative diffusion-tensor MRI. Journal of Magnetic Resonance 111, 209-219.
Lin-Ching C., Jones D.K., Pierpaoli, C. 2005. RESTORE: Robust estimation of tensors by outlier rejection. MRM 53: 1088-1095
Chung, S.W., Lu, Y., Henry, R.G., 2006. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters. NeuroImage 33, 531-541.
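Examples
A minimal sketch of running the flow on placeholder paths, saving only FA and MD.
>>> from dipy.workflows.reconst import ReconstDtiFlow
>>> dti_flow = ReconstDtiFlow()
>>> dti_flow.run('dwi.nii.gz', 'dwi.bval', 'dwi.bvec', 'mask.nii.gz',
...              save_metrics=['fa', 'md'], out_dir='out')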
ReconstIvimFlow
dipy.workflows.reconst.
ReconstIvimFlow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Workflow for Intra-voxel Incoherent Motion reconstruction and for computing IVIM metrics.
get_fitted_ivim
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run
(input_files, bvalues_files, bvectors_files, mask_files, split_b_D=400, split_b_S0=200, b0_threshold=0, save_metrics=[], out_dir='', out_S0_predicted='S0_predicted.nii.gz', out_perfusion_fraction='perfusion_fraction.nii.gz', out_D_star='D_star.nii.gz', out_D='D.nii.gz')Workflow for Intra-voxel Incoherent Motion reconstruction and for computing IVIM metrics. Performs an IVIM reconstruction on the files by ‘globbing’ input_files and saves the IVIM metrics in a directory specified by out_dir.
Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.
Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.
Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)
Value to split the bvals to estimate D for the two-stage process of fitting (default 400)
Value to split the bvals to estimate S0 for the two-stage process of fitting. (default 200)
Threshold value for the b0 bval. (default 0)
List of metrics to save. Possible values: S0_predicted, perfusion_fraction, D_star, D (default [] (all))
Output directory (default input file directory)
Name of the S0 signal estimated to be saved (default: ‘S0_predicted.nii.gz’)
Name of the estimated volume fractions to be saved (default ‘perfusion_fraction.nii.gz’)
Name of the estimated pseudo-diffusion parameter to be saved (default ‘D_star.nii.gz’)
Name of the estimated diffusion parameter to be saved (default ‘D.nii.gz’)
References
Stejskal, E. O.; Tanner, J. E. (1 January 1965). “Spin Diffusion Measurements: Spin Echoes in the Presence of a Time-Dependent Field Gradient”. The Journal of Chemical Physics 42 (1): 288. Bibcode: 1965JChPh..42..288S. doi:10.1063/1.1695690.
Le Bihan, Denis, et al. “Separation of diffusion and perfusion in intravoxel incoherent motion MR imaging.” Radiology 168.2 (1988): 497-505.
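Examples
A minimal sketch of running the flow on placeholder paths with the default two-stage split values.
>>> from dipy.workflows.reconst import ReconstIvimFlow
>>> ivim_flow = ReconstIvimFlow()
>>> ivim_flow.run('dwi_ivim.nii.gz', 'dwi.bval', 'dwi.bvec', 'mask.nii.gz',
...               out_dir='out')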
ReconstMAPMRIFlow
dipy.workflows.reconst.
ReconstMAPMRIFlow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Workflow for fitting the MAPMRI model (with optional Laplacian regularization).
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run
(data_files, bvals_files, bvecs_files, small_delta, big_delta, b0_threshold=50.0, laplacian=True, positivity=True, bval_threshold=2000, save_metrics=[], laplacian_weighting=0.05, radial_order=6, out_dir='', out_rtop='rtop.nii.gz', out_lapnorm='lapnorm.nii.gz', out_msd='msd.nii.gz', out_qiv='qiv.nii.gz', out_rtap='rtap.nii.gz', out_rtpp='rtpp.nii.gz', out_ng='ng.nii.gz', out_perng='perng.nii.gz', out_parng='parng.nii.gz')Workflow for fitting the MAPMRI model (with optional Laplacian regularization). Generates rtop, lapnorm, msd, qiv, rtap, rtpp, non-gaussian (ng), parallel ng, perpendicular ng saved in a nifti format in input files provided by data_files and saves the nifti files to an output directory specified by out_dir.
For the MAPMRI workflow to work as intended, either laplacian or positivity (or both) must be set to True.
Path to the input volume.
Path to the bval files.
Path to the bvec files.
Small delta value used in generation of gradient table of provided bval and bvec.
Big delta value used in generation of gradient table of provided bval and bvec.
Threshold used to find b=0 directions (default 50.0)
Regularize using the Laplacian of the MAP-MRI basis (default True)
Constrain the propagator to be positive. (default True)
Sets the b-value threshold to be used in the scale factor estimation. In order for the estimated non-Gaussianity to have meaning, this value should be set to a lower value (b < 2000 s/mm^2) such that the scale factors are estimated on signal points that reasonably represent the spins at Gaussian diffusion. (default: 2000)
List of metrics to save. Possible values: rtop, laplacian_signal, msd, qiv, rtap, rtpp, ng, perng, parng (default: [] (all))
Weighting value used in fitting the MAPMRI model in the Laplacian and both model types. (default: 0.05)
Even value used to set the order of the basis (default: 6)
Output directory (default: input file directory)
Name of the rtop to be saved
Name of the norm of Laplacian signal to be saved
Name of the msd to be saved
Name of the qiv to be saved
Name of the rtap to be saved
Name of the rtpp to be saved
Name of the Non-Gaussianity to be saved
Name of the Non-Gaussianity perpendicular to be saved
Name of the Non-Gaussianity parallel to be saved
TensorModel
dipy.workflows.reconst.
TensorModel
(gtab, fit_method='WLS', return_S0_hat=False, *args, **kwargs)Bases: dipy.reconst.base.ReconstModel
Diffusion Tensor
Methods
fit: Fit method of the DTI model class.
predict: Predict a signal for this TensorModel class instance given parameters.
__init__
(gtab, fit_method='WLS', return_S0_hat=False, *args, **kwargs)A Diffusion Tensor Model [1], [2].
str can be one of the following: ‘WLS’ for weighted least squares (dti.wls_fit_tensor); ‘LS’ or ‘OLS’ for ordinary least squares (dti.ols_fit_tensor); ‘NLLS’ for non-linear least-squares (dti.nlls_fit_tensor); ‘RT’ or ‘restore’ or ‘RESTORE’ for RESTORE robust tensor fitting [3] (dti.restore_fit_tensor).
Boolean to return (True) or not (False) the S0 values for the fit.
Additional args and kwargs are passed to the fit_method; see dti.wls_fit_tensor and dti.ols_fit_tensor for details.
The minimum signal value. Needs to be a strictly positive number. Default: minimal signal in the data provided to fit.
Notes
In order to increase speed of processing, tensor fitting is done simultaneously over many voxels. Many fit_methods use the ‘step’ parameter to set the number of voxels that will be fit at once in each iteration. This is the chunk size as a number of voxels. A larger step value should speed things up, but it will also take up more memory. It is advisable to keep an eye on memory consumption as this value is increased.
E.g., in iter_fit_tensor() we have a default step value of 1e4.
References
Basser, P.J., Mattiello, J., LeBihan, D., 1994. Estimation of the effective self-diffusion tensor from the NMR spin echo. J Magn Reson B 103, 247-254.
Basser, P., Pierpaoli, C., 1996. Microstructural and physiological features of tissues elucidated by quantitative diffusion-tensor MRI. Journal of Magnetic Resonance 111, 209-219.
Lin-Ching C., Jones D.K., Pierpaoli, C. 2005. RESTORE: Robust estimation of tensors by outlier rejection. MRM 53: 1088-1095
fit
(data, mask=None)Fit method of the DTI model class
The measured signal from one voxel.
A boolean array used to mark the coordinates in the data that should be analyzed that has the shape data.shape[:-1]
predict
(dti_params, S0=1.0)Predict a signal for this TensorModel class instance given parameters.
The last dimension should have 12 tensor parameters: 3 eigenvalues, followed by the 3 eigenvectors
The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1
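Examples
A minimal sketch of fitting the model and deriving FA, assuming placeholder file paths.
>>> from dipy.core.gradients import gradient_table
>>> from dipy.io.image import load_nifti
>>> from dipy.io.gradients import read_bvals_bvecs
>>> from dipy.reconst.dti import TensorModel, fractional_anisotropy
>>> data, affine = load_nifti('dwi.nii.gz')
>>> bvals, bvecs = read_bvals_bvecs('dwi.bval', 'dwi.bvec')
>>> gtab = gradient_table(bvals, bvecs)
>>> tensor_fit = TensorModel(gtab, fit_method='WLS').fit(data)
>>> fa = fractional_anisotropy(tensor_fit.evals)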
Workflow
dipy.workflows.reconst.
Workflow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: object
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Execute the workflow.
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator
()Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
dipy.workflows.reconst.
IvimModel
(gtab, fit_method='trr', **kwargs)Selector function to switch between the 2-stage Trust-Region Reflective based NLLS fitting method (also containing the linear fit): trr and the Variable Projections based fitting method: varpro.
The value of fit_method can be either ‘trr’ or ‘varpro’ (default: ‘trr’).
dipy.workflows.reconst.
auto_response_ssst
(gtab, data, roi_center=None, roi_radii=10, fa_thr=0.7)Automatic estimation of the single-shell single-tissue (ssst) response function using FA.
diffusion data
Center of ROI in data. If center is None, it is assumed that it is the center of the volume with shape data.shape[:3].
radii of cuboid ROI
FA threshold
(evals, S0)
The ratio between the smallest and largest eigenvalue of the response.
Notes
In CSD there is an important pre-processing step: the estimation of the fiber response function. In order to do this, we look for voxels with very anisotropic configurations. We get this information from csdeconv.mask_for_response_ssst(), which returns a mask of selected voxels (more details are available in the description of the function).
With the mask, we compute the response function by using csdeconv.response_from_mask_ssst(), which returns the response and the ratio (more details are available in the description of the function).
dipy.workflows.reconst.
axial_diffusivity
(evals, axis=-1)Axial Diffusivity (AD) of a diffusion tensor. Also called parallel diffusivity.
Eigenvalues of a diffusion tensor, must be sorted in descending order along axis.
Axis of evals which contains 3 eigenvalues.
Calculated AD.
Notes
AD is calculated with the following equation: \(AD = \lambda_1\), where \(\lambda_1\) is the largest eigenvalue of the diffusion tensor.
dipy.workflows.reconst.
color_fa
(fa, evecs)Color fractional anisotropy of diffusion tensor
Array of the fractional anisotropy (can be 1D, 2D or 3D)
eigen vectors from the tensor model
Colormap of the FA with red for the x value, green for the y value and blue for the z value.
Notes
It is computed from the FA, clipped between 0 and 1, using the following formula: \(RGB = |e_1| \cdot FA\), where \(e_1\) is the principal eigenvector of the tensor.
dipy.workflows.reconst.
fractional_anisotropy
(evals, axis=-1)Return Fractional anisotropy (FA) of a diffusion tensor.
Eigenvalues of a diffusion tensor.
Axis of evals which contains 3 eigenvalues.
Calculated FA. Range is 0 <= FA <= 1.
Notes
FA is calculated using the following equation: \(FA = \sqrt{\frac{1}{2}}\,\frac{\sqrt{(\lambda_1-\lambda_2)^2 + (\lambda_2-\lambda_3)^2 + (\lambda_3-\lambda_1)^2}}{\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}\).
dipy.workflows.reconst.
geodesic_anisotropy
(evals, axis=-1)Geodesic anisotropy (GA) of a diffusion tensor.
Eigenvalues of a diffusion tensor.
Axis of evals which contains 3 eigenvalues.
Calculated GA. In the range 0 to +infinity
Notes
GA is calculated using the following equation given in [1]: \(GA = \sqrt{\sum_{i=1}^{3} \log^2(\lambda_i / \langle D \rangle)}\), where \(\langle D \rangle = (\lambda_1 \lambda_2 \lambda_3)^{1/3}\).
Note that the notation, \(<D>\), is often used as the mean diffusivity (MD) of the diffusion tensor and can lead to confusions in the literature (see [1] versus [2] versus [3] for example). Reference [2] defines geodesic anisotropy (GA) with \(<D>\) as the MD in the denominator of the sum. This is wrong. The original paper [1] defines GA with \(<D> = det(D)^{1/3}\), as the isotropic part of the distance. This might be an explanation for the confusion. The isotropic part of the diffusion tensor in Euclidean space is the MD whereas the isotropic part of the tensor in log-Euclidean space is \(det(D)^{1/3}\). The Appendix of [1] and log-Euclidean derivations from [3] are clear on this. Hence, all that to say that \(<D> = det(D)^{1/3}\) here for the GA definition and not MD.
References
P. G. Batchelor, M. Moakher, D. Atkinson, F. Calamante, A. Connelly, “A rigorous framework for diffusion tensor calculus”, Magnetic Resonance in Medicine, vol. 53, pp. 221-225, 2005.
M. M. Correia, V. F. Newcombe, G.B. Williams. “Contrast-to-noise ratios for indices of anisotropy obtained from diffusion MRI: a study with standard clinical b-values at 3T”. NeuroImage, vol. 57, pp. 1103-1115, 2011.
A. D. Lee, etal, P. M. Thompson. “Comparison of fractional and geodesic anisotropy in diffusion tensor images of 90 monozygotic and dizygotic twins”. 5th IEEE International Symposium on Biomedical Imaging (ISBI), pp. 943-946, May 2008.
V. Arsigny, P. Fillard, X. Pennec, N. Ayache. “Log-Euclidean metrics for fast and simple calculus on diffusion tensors.” Magnetic Resonance in Medecine, vol 56, pp. 411-421, 2006.
dipy.workflows.reconst.
get_mode
(q_form)Mode (MO) of a diffusion tensor [1].
The quadratic form of a tensor, or an array with quadratic forms of tensors. Should be of shape (x, y, z, 3, 3) or (n, 3, 3) or (3, 3).
Calculated tensor mode in each spatial coordinate.
Notes
Mode ranges between -1 (planar anisotropy) and +1 (linear anisotropy) with 0 representing orthotropy. Mode is calculated with the following equation (equation 9 in [1]): \(MO = 3\sqrt{6}\,\det(\widetilde{A}/\|\widetilde{A}\|)\).
Where \(\widetilde{A}\) is the deviatoric part of the tensor quadratic form.
References
Ennis, D.B., Kindlmann, G., 2006. Orthogonal tensor invariants and the analysis of diffusion tensor magnetic resonance images. Magnetic Resonance in Medicine 55(1), 136-146.
dipy.workflows.reconst.
gradient_table
(bvals, bvecs=None, big_delta=None, small_delta=None, b0_threshold=50, atol=0.01, btens=None)A general function for creating diffusion MR gradients.
It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.
bvals can be any of the following:
1. an array of shape (N,) or (1, N) or (N, 1) with the b-values.
2. a path for a file which contains an array like the above (1).
3. an array of shape (N, 4) or (4, N). Then this parameter is considered to be a b-table which contains both bvals and bvecs. In this case the next parameter is skipped.
4. a path for a file which contains an array like the one at (3).
bvecs can be any of the following:
1. an array of shape (N, 3) or (3, N) with the b-vectors.
2. a path for a file which contains an array like the previous one.
acquisition pulse separation time in seconds (default None)
acquisition pulse duration time in seconds (default None)
All b-values with values less than or equal to b0_threshold are considered as b0s, i.e. without diffusion weighting.
All b-vectors need to be unit vectors up to a tolerance.
a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of strings of shape (N,), (N, 1), or (1, N) specifying encoding tensor shape for each volume separately. N corresponds to the number volumes in data. Options for elements in array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of shape (N,3,3) specifying the b-tensor of each volume exactly. N corresponds to the number volumes in data. No rotation or scaling is performed.
A GradientTable with all the gradient information.
Notes
Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
We assume that the minimum number of b-values is 7.
B-vectors should be unit vectors.
Examples
>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
... [1, 0, 0],
... [0, 1, 0],
... [0, 0, 1],
... [sq2, sq2, 0],
... [sq2, 0, sq2],
... [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs)
>>> gt.bvecs.shape == bvecs.shape
True
>>> gt = gradient_table(bvals, bvecs.T)
>>> gt.bvecs.shape == bvecs.T.shape
False
dipy.workflows.reconst.
load_nifti
(fname, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True)Load data and other information from a nifti file.
Full path to a nifti file.
Whether to return the nibabel nifti img object. Default: False
Whether to return the nifti header zooms. Default: False
Whether to return the nifti header aff2axcodes. Default: False
convert nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, just turn this option to False (default: True)
See also
dipy.workflows.reconst.
load_nifti_data
(fname, as_ndarray=True)Load only the data array from a nifti file.
Full path to the file.
convert nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, just turn this option to False (default: True)
See also
dipy.workflows.reconst.
lower_triangular
(tensor, b0=None)Returns the six lower triangular values of the tensor and a dummy variable if b0 is not None
a collection of 3, 3 diffusion tensors
If b0 is not None, log(b0) is returned as the dummy variable.
If b0 is None, then the shape will be (…, 6); otherwise (…, 7).
dipy.workflows.reconst.
mean_diffusivity
(evals, axis=-1)Mean Diffusivity (MD) of a diffusion tensor.
Eigenvalues of a diffusion tensor.
Axis of evals which contains 3 eigenvalues.
Calculated MD.
Notes
MD is calculated with the following equation: \(MD = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}\).
dipy.workflows.reconst.
nifti1_symmat
(image_data, *args, **kwargs)Returns a Nifti1Image with a symmetric matrix intent
should have lower triangular elements of a symmetric matrix along the last dimension
5d, with extra dimensions added before the last. Has symmetric matrix intent code.
dipy.workflows.reconst.
peaks_from_model
(model, data, sphere, relative_peak_threshold, min_separation_angle, mask=None, return_odf=False, return_sh=True, gfa_thr=0, normalize_peaks=False, sh_order=8, sh_basis_type=None, npeaks=5, B=None, invB=None, parallel=False, nbr_processes=None)Fit the model to data and computes peaks and metrics
model will be used to fit the data.
The Sphere providing discrete directions for evaluation.
Only return peaks greater than relative_peak_threshold * m where m is the largest peak.
The minimum angle between peak directions. If two peaks are too close, only the larger of the two is returned.
If mask is provided, voxels that are False in mask are skipped and no peaks are returned.
If True, the odfs are returned.
If True, the odf as spherical harmonics coefficients is returned
Voxels with gfa less than gfa_thr are skipped, no peaks are returned.
If true, all peak values are calculated relative to max(odf).
Maximum SH order in the SH fit. For sh_order, there will be (sh_order + 1) * (sh_order + 2) / 2 SH coefficients (default 8).
None for the default DIPY basis, tournier07 for the Tournier 2007 [2] basis, and descoteaux07 for the Descoteaux 2007 [1] basis (None defaults to descoteaux07).
Lambda-regularization in the SH fit (default 0.0).
Maximum number of peaks found (default 5 peaks).
Matrix that transforms spherical harmonics to spherical function: sf = np.dot(sh, B).
Inverse of B.
If True, use multiprocessing to compute peaks and metrics (default False). Temporary files are saved in the default temporary directory of the system. It can be changed using import tempfile and tempfile.tempdir = '/path/to/tempdir'.
If parallel is True, the number of subprocesses to use (default multiprocessing.cpu_count()).
An object with gfa, peak_directions, peak_values, peak_indices, odf, shm_coeffs as attributes.
References
Descoteaux, M., Angelino, E., Fitzgibbons, S. and Deriche, R. Regularized, Fast, and Robust Analytical Q-ball Imaging. Magn. Reson. Med. 2007;58:497-510.
Tournier J.D., Calamante F. and Connelly A. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution. NeuroImage. 2007;35(4):1459-1472.
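Examples
A minimal sketch, reusing the csd_model, data and gtab from the CSD example above.
>>> from dipy.data import default_sphere
>>> from dipy.direction import peaks_from_model
>>> pam = peaks_from_model(model=csd_model, data=data, sphere=default_sphere,
...                        relative_peak_threshold=0.5,
...                        min_separation_angle=25)
>>> gfa, dirs = pam.gfa, pam.peak_dirs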
dipy.workflows.reconst.
radial_diffusivity
(evals, axis=-1)Radial Diffusivity (RD) of a diffusion tensor. Also called perpendicular diffusivity.
Eigenvalues of a diffusion tensor, must be sorted in descending order along axis.
Axis of evals which contains 3 eigenvalues.
Calculated RD.
Notes
RD is calculated with the following equation: \(RD = \frac{\lambda_2 + \lambda_3}{2}\).
dipy.workflows.reconst.
read_bvals_bvecs
(fbvals, fbvecs)Read b-values and b-vectors from disk.
Full path to file with b-values. None to not read bvals.
Full path of file with b-vectors. None to not read bvecs.
Notes
Files can be either ‘.bvals’/’.bvecs’ or ‘.txt’ or ‘.npy’ (containing arrays stored with the appropriate values).
dipy.workflows.reconst.
save_nifti
(fname, data, affine, hdr=None)Save a data array into a nifti file.
The full path to the file to be saved.
The array with the data to save.
The affine transform associated with the file.
May contain additional information to store in the file header.
dipy.workflows.reconst.
save_peaks
(fname, pam, affine=None, verbose=False)Save all important attributes of object PeaksAndMetrics in a PAM5 file (HDF5).
Filename of PAM5 file
Object holding peak_dirs, shm_coeffs and other attributes
The 4x4 matrix transforming the data from native to world coordinates. PeaksAndMetrics should have that attribute, but if not it can be provided here. Default None.
Print summary information about the saved file.
dipy.workflows.reconst.
split_dki_param
(dki_params)Extract the diffusion tensor eigenvalues, the diffusion tensor eigenvector matrix, and the 15 independent elements of the kurtosis tensor from the model parameters estimated from the DKI model
All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows:
Three diffusion tensor’s eigenvalues
Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector
Fifteen elements of the kurtosis tensor
Eigenvalues from eigen decomposition of the tensor.
Associated eigenvectors from eigen decomposition of the tensor. Eigenvectors are columnar (e.g. eigvecs[:,j] is associated with eigvals[j])
Fifteen elements of the kurtosis tensor
LabelsBundlesFlow
dipy.workflows.segment.
LabelsBundlesFlow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Extract bundles using existing indices (labels).
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run
(streamline_files, labels_files, out_dir='', out_bundle='recognized_orig.trk')Extract bundles using existing indices (labels)
The path of streamline files where you want to recognize bundles
The path of model bundle files
Output directory (default input file directory)
Recognized bundle in the space of the model bundle (default ‘recognized_orig.trk’)
References
Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
MedianOtsuFlow
dipy.workflows.segment.
MedianOtsuFlow
(output_strategy='absolute', mix_names=False, force=False, skip=False)Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Workflow wrapping the median_otsu segmentation method.
__init__
(output_strategy='absolute', mix_names=False, force=False, skip=False)Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name
()Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run
(input_files, save_masked=False, median_radius=2, numpass=5, autocrop=False, vol_idx=None, dilate=None, out_dir='', out_mask='brain_mask.nii.gz', out_masked='dwi_masked.nii.gz')Workflow wrapping the median_otsu segmentation method. Applies median_otsu segmentation on each file found by ‘globbing’ input_files and saves the results in a directory specified by out_dir.
Path to the input volumes. This path may contain wildcards to process multiple inputs at once.
Save mask
Radius (in voxels) of the applied median filter (default 2)
Number of pass of the median filter (default 5)
If True, the masked input_volumes will also be cropped using the bounding box defined by the masked data. For example, if diffusion images are of 1x1x1 (mm^3) or higher resolution auto-cropping could reduce their size in memory and speed up some of the analysis. (default False)
1D array representing indices of axis=-1
of a 4D
input_volume. From the command line use something like
3 4 5 6. From script use something like [3, 4, 5, 6]. This
input is required for 4D volumes.
number of iterations for binary dilation (default ‘None’)
Output directory (default input file directory)
Name of the mask volume to be saved (default ‘brain_mask.nii.gz’)
Name of the masked volume to be saved (default ‘dwi_masked.nii.gz’)
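Examples
A rough sketch of a programmatic run; the file names are placeholders, and for a 4D DWI the vol_idx argument is required:
>>> from dipy.workflows.segment import MedianOtsuFlow
>>> MedianOtsuFlow().run('dwi.nii.gz', vol_idx=[0, 1],
...                      out_dir='out')  # doctest: +SKIP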
RecoBundles
dipy.workflows.segment.RecoBundles(streamlines, greater_than=50, less_than=1000000, cluster_map=None, clust_thr=15, nb_pts=20, rng=None, verbose=False)
Bases: object
Methods
evaluate_results(...): Compare the similarity between two given bundles: the model bundle and the extracted bundle.
recognize(...): Recognize the model_bundle in self.streamlines.
refine(...): Refine and recognize the model_bundle in self.streamlines; this method expects already pruned streamlines as input.
__init__(streamlines, greater_than=50, less_than=1000000, cluster_map=None, clust_thr=15, nb_pts=20, rng=None, verbose=False)
Recognition of bundles.
Extract bundles from a participant's tractogram using model bundles segmented from a different subject or an atlas of bundles. See [Garyfallidis17] for the details.
streamlines: The tractogram in which you want to recognize bundles.
greater_than: Keep streamlines that have length greater than this value (default 50).
less_than: Keep streamlines that have length less than this value (default 1000000).
cluster_map: Provide an existing clustering to start RecoBundles faster (default None).
clust_thr: Distance threshold in mm for clustering streamlines (default 15).
nb_pts: Number of points per streamline (default 20).
rng: If None, define a RandomState in the initialization function (default None).
verbose: If True, log information.
Notes
Before creating this class, make sure that the streamlines and the model bundles are roughly in the same space. Also, the default thresholds assume RAS 1mm^3 space; you may want to adjust them if your streamlines are not in world coordinates.
References
Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
evaluate_results(model_bundle, pruned_streamlines, slr_select)
Compare the similarity between two given bundles: the model bundle and the extracted bundle.
slr_select: Select the number of streamlines from the model and from the neighborhood of the model to perform the local SLR.
Returns the bundle adjacency value between the model bundle and the pruned bundle, and the bundle minimum distance value between the model bundle and the pruned bundle.
recognize(model_bundle, model_clust_thr, reduction_thr=10, reduction_distance='mdf', slr=True, slr_num_threads=None, slr_metric=None, slr_x0=None, slr_bounds=None, slr_select=(400, 600), slr_method='L-BFGS-B', pruning_thr=5, pruning_distance='mdf')
Recognize the model_bundle in self.streamlines.
reduction_distance: mdf or mam (default mdf).
slr: Use Streamline-based Linear Registration (SLR) locally (default True).
(default None)
(default None)
slr_select: Select the number of streamlines from the model and from the neighborhood of the model to perform the local SLR.
slr_method: Optimization method (default 'L-BFGS-B').
pruning_distance: MDF ('mdf') or MAM ('mam').
Returns the recognized bundle in the space of the model tractogram, and the indices of the recognized bundle in the original tractogram.
References
Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
refine(model_bundle, pruned_streamlines, model_clust_thr, reduction_thr=14, reduction_distance='mdf', slr=True, slr_metric=None, slr_x0=None, slr_bounds=None, slr_select=(400, 600), slr_method='L-BFGS-B', pruning_thr=6, pruning_distance='mdf')
Refine and recognize the model_bundle in self.streamlines. This method expects already pruned streamlines as input. It refines the first output of RecoBundles by applying a second local SLR (optional) and a second pruning. This method is useful when we are dealing with noisy data or when we want to extract small tracks from tractograms.
reduction_distance: mdf or mam (default mdf).
slr: Use Streamline-based Linear Registration (SLR) locally (default True).
(default None)
(default None)
slr_select: Select the number of streamlines from the model and from the neighborhood of the model to perform the local SLR.
slr_method: Optimization method (default 'L-BFGS-B').
pruning_distance: MDF ('mdf') or MAM ('mam').
Returns the recognized bundle in the space of the model tractogram, and the indices of the recognized bundle in the original tractogram.
References
Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, F.C., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).
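Examples
A minimal usage sketch, assuming streamlines and model_bundle are already loaded and roughly co-registered (both names are placeholders):
>>> from dipy.workflows.segment import RecoBundles
>>> rb = RecoBundles(streamlines, clust_thr=15)  # doctest: +SKIP
>>> recognized, labels = rb.recognize(model_bundle,
...                                   model_clust_thr=5.0)  # doctest: +SKIP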
RecoBundlesFlow
dipy.workflows.segment.RecoBundlesFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator(): Create an iterator for IO.
get_short_name(): Return a short name for the workflow, used to subdivide.
get_sub_runs(): Return no sub runs since this is a simple workflow.
manage_output_overwrite(): Check if a file will be overwritten upon processing the inputs.
run(...): Recognize bundles.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow, used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(streamline_files, model_bundle_files, greater_than=50, less_than=1000000, no_slr=False, clust_thr=15.0, reduction_thr=15.0, reduction_distance='mdf', model_clust_thr=2.5, pruning_thr=8.0, pruning_distance='mdf', slr_metric='symmetric', slr_transform='similarity', slr_matrix='small', refine=False, r_reduction_thr=12.0, r_pruning_thr=6.0, no_r_slr=False, out_dir='', out_recognized_transf='recognized.trk', out_recognized_labels='labels.npy')
Recognize bundles.
streamline_files: The path of streamline files where you want to recognize bundles.
model_bundle_files: The path of model bundle files.
greater_than: Keep streamlines that have length greater than this value, in mm (default 50).
less_than: Keep streamlines that have length less than this value, in mm (default 1000000).
no_slr: Don't enable local Streamline-based Linear Registration (default False).
clust_thr: MDF distance threshold for all streamlines (default 15).
reduction_thr: Reduce the search space by this amount, in mm (default 15).
reduction_distance: Reduction distance type; can be mdf or mam (default mdf).
model_clust_thr: MDF distance threshold for the model bundles (default 2.5).
pruning_thr: Pruning after matching (default 8).
pruning_distance: Pruning distance type; can be mdf or mam (default mdf).
slr_metric: Options are None, symmetric, asymmetric or diagonal (default symmetric).
slr_transform: Transformation allowed: translation, rigid, similarity or scaling (default 'similarity').
slr_matrix: Options are 'nano', 'tiny', 'small', 'medium', 'large', 'huge' (default 'small').
refine: Enable refinement of the recognized bundle (default False).
r_reduction_thr: In refinement, reduce the search space by this amount, in mm (default 12).
r_pruning_thr: In refinement, pruning after matching (default 6).
no_r_slr: Don't enable local Streamline-based Linear Registration in refinement (default False).
out_dir: Output directory (default: input file directory).
out_recognized_transf: Recognized bundle in the space of the model bundle (default 'recognized.trk').
out_recognized_labels: Indices of the recognized bundle in the original tractogram (default 'labels.npy').
References
Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, F.C., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).
Space
dipy.workflows.segment.Space
Bases: enum.Enum
Enum to simplify future changes to the convention.
StatefulTractogram
dipy.workflows.segment.StatefulTractogram(streamlines, reference, space, origin=<Origin.NIFTI: 'center'>, data_per_point=None, data_per_streamline=None)
Bases: object
Class for the stateful representation of collections of streamlines. The object is designed to be identical no matter the file format (trk, tck, vtk, fib, dpy). It facilitates transformation between spaces and data manipulation for each streamline / point.
Attributes
affine: Getter for the reference affine.
data_per_point: Getter for data_per_point.
data_per_streamline: Getter for data_per_streamline.
dimensions: Getter for the reference dimensions.
origin: Getter for the origin standard.
space: Getter for the current space.
space_attributes: Getter for the spatial attributes.
streamlines: Partially safe getter for streamlines.
voxel_order: Getter for the reference voxel order.
voxel_sizes: Getter for the reference voxel sizes.
Methods
are_compatible(sft_1, sft_2): Compatibility verification of two StatefulTractogram objects, to ensure space, origin, data_per_point and data_per_streamline consistency.
compute_bounding_box(): Compute the bounding box of the streamlines in their current state.
from_sft(...): Create an instance of StatefulTractogram from another instance of StatefulTractogram.
get_data_per_point_keys(): Return a list of the data_per_point attribute names.
get_data_per_streamline_keys(): Return a list of the data_per_streamline attribute names.
get_streamlines_copy(): Safe getter for streamlines (for slicing).
is_bbox_in_vox_valid(): Verify that the bounding box is valid in voxel space.
remove_invalid_streamlines(): Remove streamlines with invalid coordinates from the object.
to_center(): Safe function to shift streamlines so the center of the voxel is the origin.
to_corner(): Safe function to shift streamlines so the corner of the voxel is the origin.
to_origin(target_origin): Safe function to change streamlines to a particular origin standard; False means NIFTI (center) and True means TrackVis (corner).
to_rasmm(): Safe function to transform streamlines and update state.
to_space(target_space): Safe function to transform streamlines to a particular space using an enum and update state.
to_vox(): Safe function to transform streamlines and update state.
to_voxmm(): Safe function to transform streamlines and update state.
__init__(streamlines, reference, space, origin=<Origin.NIFTI: 'center'>, data_per_point=None, data_per_streamline=None)
Create a strict, state-aware, robust tractogram.
streamlines: Streamlines of the tractogram.
reference: Nifti1Header, trk.header (dict) or another StatefulTractogram. Reference that provides the spatial attributes; typically a nifti-related object from the native diffusion used for streamlines generation.
space: Current space in which the streamlines are (vox, voxmm or rasmm). After tracking the space is VOX; after loading with nibabel the space is RASMM.
origin: Current origin in which the streamlines are (center or corner). After loading with nibabel the origin is CENTER.
data_per_point: Dictionary in which each key has X items, and each item has Y_i items, X being the number of streamlines and Y_i being the number of points on streamline #i.
data_per_streamline: Dictionary in which each key has X items, X being the number of streamlines.
Notes
It is very important to respect the convention; verify that the streamlines match the reference and are effectively in the right space.
Any change to the number of streamlines, data_per_point or data_per_streamline requires particular verification.
In the case of a manipulation not allowed by this object, use Nibabel directly and be careful.
are_compatible(sft_1, sft_2)
Compatibility verification of two StatefulTractogram objects, to ensure space, origin, data_per_point and data_per_streamline consistency.
compute_bounding_box()
Compute the bounding box of the streamlines in their current state.
Returns the 8 corners of the XYZ-aligned box; all zeros if there are no streamlines.
from_sft(streamlines, sft, data_per_point=None, data_per_streamline=None)
Create an instance of StatefulTractogram from another instance of StatefulTractogram.
streamlines: Streamlines of the tractogram.
sft: The other StatefulTractogram to copy the space attributes AND state from.
data_per_point: Dictionary in which each key has X items, and each item has Y_i items, X being the number of streamlines and Y_i being the number of points on streamline #i.
data_per_streamline: Dictionary in which each key has X items, X being the number of streamlines.
is_bbox_in_vox_valid()
Verify that the bounding box is valid in voxel space. Negative coordinates or coordinates above the volume dimensions are considered invalid in voxel space.
Returns whether the streamlines are within the volume of the associated reference.
remove_invalid_streamlines(epsilon=0.001)
Remove streamlines with invalid coordinates from the object. This will also remove the corresponding data_per_point and data_per_streamline. Invalid coordinates are any X, Y, Z values above the reference dimensions or below zero.
epsilon: Epsilon value for the bounding box verification (default 0.001).
Returns a tuple of two lists: indices_to_remove, indices_to_keep.
to_origin(target_origin)
Safe function to change streamlines to a particular origin standard; False means NIFTI (center) and True means TrackVis (corner).
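Examples
A small sketch of typical use, loading a tractogram as a StatefulTractogram, moving it to voxel space and validating its bounding box (the file names are placeholders):
>>> from dipy.io.streamline import load_tractogram
>>> from dipy.io.stateful_tractogram import Space
>>> sft = load_tractogram('bundle.trk', 'dwi.nii.gz',
...                       to_space=Space.RASMM)  # doctest: +SKIP
>>> sft.to_vox()  # doctest: +SKIP
>>> sft.is_bbox_in_vox_valid()  # doctest: +SKIP
True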
Workflow
dipy.workflows.segment.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: object
Methods
get_io_iterator(): Create an iterator for IO.
get_short_name(): Return a short name for the workflow, used to subdivide.
get_sub_runs(): Return no sub runs since this is a simple workflow.
manage_output_overwrite(): Check if a file will be overwritten upon processing the inputs.
run(): Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator()
Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other context) and the run method's docstring.
get_short_name()
Return a short name for the workflow, used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
dipy.workflows.segment.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True)
Load data and other information from a nifti file.
fname: Full path to a nifti file.
return_img: Whether to return the nibabel nifti img object (default False).
return_voxsize: Whether to return the nifti header zooms (default False).
return_coords: Whether to return the nifti header aff2axcodes (default False).
as_ndarray: Convert the nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, turn this option to False (default True).
See also
load_nifti_data
dipy.workflows.segment.load_tractogram(filename, reference, to_space=<Space.RASMM: 'rasmm'>, to_origin=<Origin.NIFTI: 'center'>, bbox_valid_check=True, trk_header_check=True)
Load the stateful tractogram from any format (trk, tck, vtk, fib, dpy).
filename: Filename with a valid extension.
reference: trk.header (dict), or 'same' if the input is a trk file; a reference that provides the spatial attributes, typically a nifti-related object from the native diffusion used for streamlines generation.
to_space: Space to which the streamlines will be transformed after loading.
to_origin: Origin to which the streamlines will be transformed after loading; NIFTI standard, the default (center of the voxel), or TRACKVIS standard (corner of the voxel).
bbox_valid_check: Verification for negative voxel coordinates or values above the volume dimensions (default True, to enforce a valid file).
trk_header_check: Verification that the reference has the same header and spatial attributes as the input tractogram when a trk file is loaded.
Returns the loaded tractogram (the file must have been saved properly).
dipy.workflows.segment.median_otsu(input_volume, vol_idx=None, median_radius=4, numpass=4, autocrop=False, dilate=None)
Simple brain extraction tool for images from DWI data.
It uses a median filter smoothing of the input_volume's vol_idx volumes and an automatic histogram Otsu thresholding technique, hence the name median_otsu.
This function is inspired by Mrtrix's bet, which has default values median_radius=3, numpass=2. However, from tests on multiple 1.5T and 3T data from GE, Philips and Siemens, the most robust choice is median_radius=4, numpass=4.
input_volume: 3D or 4D array of the brain volume.
vol_idx: 1D array representing indices of axis=3 of a 4D input_volume. None is only an acceptable input if input_volume is 3D.
median_radius: Radius (in voxels) of the applied median filter (default 4).
numpass: Number of passes of the median filter (default 4).
autocrop: If True, the masked input_volume will also be cropped using the bounding box defined by the masked data. Should be on if the DWI is upsampled to 1x1x1 resolution (default False).
dilate: Number of iterations for binary dilation.
Returns the masked input_volume and the binary brain mask.
Notes
Copyright (C) 2011, the scikit-image team All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of skimage nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS’’ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
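Examples
A brief sketch, assuming data is a 4D DWI array already in memory (e.g. loaded with load_nifti):
>>> from dipy.segment.mask import median_otsu
>>> b0_masked, mask = median_otsu(data, vol_idx=[0],
...                               median_radius=4, numpass=4)  # doctest: +SKIP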
dipy.workflows.segment.save_nifti(fname, data, affine, hdr=None)
Save a data array into a nifti file.
fname: The full path to the file to be saved.
data: The array with the data to save.
affine: The affine transform associated with the file.
hdr: May contain additional information to store in the file header.
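Examples
A round-trip sketch using the two helpers above; the file names are placeholders:
>>> from dipy.io.image import load_nifti, save_nifti
>>> data, affine = load_nifti('dwi.nii.gz')  # doctest: +SKIP
>>> save_nifti('dwi_copy.nii.gz', data, affine)  # doctest: +SKIP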
dipy.workflows.segment.save_tractogram(sft, filename, bbox_valid_check=True)
Save the stateful tractogram in any format (trk, tck, vtk, fib, dpy).
sft: The stateful tractogram to save.
filename: Filename with a valid extension.
bbox_valid_check: Verification for negative voxel coordinates or values above the volume dimensions (default True, to enforce a valid file).
Returns True if the saving operation was successful.
BundleAnalysisTractometryFlow
dipy.workflows.stats.BundleAnalysisTractometryFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator(): Create an iterator for IO.
get_short_name(): Return a short name for the workflow, used to subdivide.
get_sub_runs(): Return no sub runs since this is a simple workflow.
manage_output_overwrite(): Check if a file will be overwritten upon processing the inputs.
run(...): Workflow of bundle analytics.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow, used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(model_bundle_folder, subject_folder, no_disks=100, out_dir='')
Workflow of bundle analytics.
Applies statistical analysis on bundles of subjects and saves the results in a directory specified by out_dir.
model_bundle_folder: Path to the input model bundle files. This path may contain wildcards to process multiple inputs at once.
subject_folder: Path to the input subject folder. This path may contain wildcards to process multiple inputs at once.
no_disks: Number of disks used for dividing a bundle into disks (default 100).
out_dir: Output directory (default: input file directory).
References
Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, F.C., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).
BundleShapeAnalysis
dipy.workflows.stats.BundleShapeAnalysis(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator(): Create an iterator for IO.
get_short_name(): Return a short name for the workflow, used to subdivide.
get_sub_runs(): Return no sub runs since this is a simple workflow.
manage_output_overwrite(): Check if a file will be overwritten upon processing the inputs.
run(...): Workflow of bundle analytics.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow, used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(subject_folder, clust_thr=[5, 3, 1.5], threshold=6, out_dir='')
Workflow of bundle analytics.
Applies bundle shape similarity analysis on bundles of subjects and saves the results in a directory specified by out_dir.
subject_folder: Path to the input subject folder. This path may contain wildcards to process multiple inputs at once.
clust_thr: List of bundle clustering thresholds used in QuickBundlesX.
threshold: Bundle shape similarity threshold.
out_dir: Output directory (default: input file directory).
References
Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, F.C., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).
LinearMixedModelsFlow
dipy.workflows.stats.LinearMixedModelsFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator(): Create an iterator for IO.
get_metric_name(path): Split the path string and return the name of the anatomical measure (e.g. fa), the bundle name (e.g. AF_L), and the bundle name with the metric name (e.g. AF_L_fa).
get_short_name(): Return a short name for the workflow, used to subdivide.
get_sub_runs(): Return no sub runs since this is a simple workflow.
manage_output_overwrite(): Check if a file will be overwritten upon processing the inputs.
run(...): Workflow of linear mixed models.
save_lmm_plot(...): Save the LMM plot, with the segment/disk number on the x-axis and -log10(pvalues) on the y-axis, in the out_dir folder.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_metric_name(path)
Split the path string and return the name of the anatomical measure (e.g. fa), the bundle name (e.g. AF_L), and the bundle name with the metric name (e.g. AF_L_fa).
path: Path to the input metric files. This path may contain wildcards to process multiple inputs at once.
get_short_name()
Return a short name for the workflow, used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(h5_files, no_disks=100, out_dir='')
Workflow of linear mixed models.
Applies linear mixed models on bundles of subjects and saves the results in a directory specified by out_dir.
h5_files: Path to the input metric files. This path may contain wildcards to process multiple inputs at once.
no_disks: Number of disks used for dividing a bundle into disks (default 100).
out_dir: Output directory (default: input file directory).
save_lmm_plot(plot_file, title, bundle_name, x, y)
Save the LMM plot, with the segment/disk number on the x-axis and -log10(pvalues) on the y-axis, in the out_dir folder.
plot_file: Path to the plot file. This path may contain wildcards to process multiple inputs at once.
title: Title for the plot.
x: List containing the segment/disk number for the x-axis.
y: List containing -log10(pvalues) per segment/disk number for the y-axis.
SNRinCCFlow
dipy.workflows.stats.SNRinCCFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator(): Create an iterator for IO.
get_short_name(): Return a short name for the workflow, used to subdivide.
get_sub_runs(): Return no sub runs since this is a simple workflow.
manage_output_overwrite(): Check if a file will be overwritten upon processing the inputs.
run(...): Compute the signal-to-noise ratio in the corpus callosum.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow, used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(data_files, bvals_files, bvecs_files, mask_file, bbox_threshold=[0.6, 1, 0, 0.1, 0, 0.1], out_dir='', out_file='product.json', out_mask_cc='cc.nii.gz', out_mask_noise='mask_noise.nii.gz')
Compute the signal-to-noise ratio in the corpus callosum.
data_files: Path to the dwi.nii.gz file. This path may contain wildcards to process multiple inputs at once.
bvals_files: Path of bvals.
bvecs_files: Path of bvecs.
mask_file: Path of a brain mask file.
bbox_threshold: Threshold for the bounding box, with values separated by commas, e.g. [0.6,1,0,0.1,0,0.1] (default (0.6, 1, 0, 0.1, 0, 0.1)).
out_dir: Where the resulting file will be saved (default '').
out_file: Name of the result file to be saved (default 'product.json').
out_mask_cc: Name of the CC mask volume to be saved (default 'cc.nii.gz').
out_mask_noise: Name of the noise mask volume to be saved (default 'mask_noise.nii.gz').
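Examples
A sketch of a programmatic invocation; the file names are placeholders:
>>> from dipy.workflows.stats import SNRinCCFlow
>>> SNRinCCFlow().run('dwi.nii.gz', 'dwi.bval', 'dwi.bvec',
...                   'brain_mask.nii.gz', out_dir='out')  # doctest: +SKIP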
Space
dipy.workflows.stats.Space
Bases: enum.Enum
Enum to simplify future changes to the convention.
StatefulTractogram
dipy.workflows.stats.StatefulTractogram(streamlines, reference, space, origin=<Origin.NIFTI: 'center'>, data_per_point=None, data_per_streamline=None)
Bases: object
Class for the stateful representation of collections of streamlines. The object is designed to be identical no matter the file format (trk, tck, vtk, fib, dpy). It facilitates transformation between spaces and data manipulation for each streamline / point.
Attributes
affine: Getter for the reference affine.
data_per_point: Getter for data_per_point.
data_per_streamline: Getter for data_per_streamline.
dimensions: Getter for the reference dimensions.
origin: Getter for the origin standard.
space: Getter for the current space.
space_attributes: Getter for the spatial attributes.
streamlines: Partially safe getter for streamlines.
voxel_order: Getter for the reference voxel order.
voxel_sizes: Getter for the reference voxel sizes.
Methods
are_compatible(sft_1, sft_2): Compatibility verification of two StatefulTractogram objects, to ensure space, origin, data_per_point and data_per_streamline consistency.
compute_bounding_box(): Compute the bounding box of the streamlines in their current state.
from_sft(...): Create an instance of StatefulTractogram from another instance of StatefulTractogram.
get_data_per_point_keys(): Return a list of the data_per_point attribute names.
get_data_per_streamline_keys(): Return a list of the data_per_streamline attribute names.
get_streamlines_copy(): Safe getter for streamlines (for slicing).
is_bbox_in_vox_valid(): Verify that the bounding box is valid in voxel space.
remove_invalid_streamlines(): Remove streamlines with invalid coordinates from the object.
to_center(): Safe function to shift streamlines so the center of the voxel is the origin.
to_corner(): Safe function to shift streamlines so the corner of the voxel is the origin.
to_origin(target_origin): Safe function to change streamlines to a particular origin standard; False means NIFTI (center) and True means TrackVis (corner).
to_rasmm(): Safe function to transform streamlines and update state.
to_space(target_space): Safe function to transform streamlines to a particular space using an enum and update state.
to_vox(): Safe function to transform streamlines and update state.
to_voxmm(): Safe function to transform streamlines and update state.
__init__(streamlines, reference, space, origin=<Origin.NIFTI: 'center'>, data_per_point=None, data_per_streamline=None)
Create a strict, state-aware, robust tractogram.
streamlines: Streamlines of the tractogram.
reference: Nifti1Header, trk.header (dict) or another StatefulTractogram. Reference that provides the spatial attributes; typically a nifti-related object from the native diffusion used for streamlines generation.
space: Current space in which the streamlines are (vox, voxmm or rasmm). After tracking the space is VOX; after loading with nibabel the space is RASMM.
origin: Current origin in which the streamlines are (center or corner). After loading with nibabel the origin is CENTER.
data_per_point: Dictionary in which each key has X items, and each item has Y_i items, X being the number of streamlines and Y_i being the number of points on streamline #i.
data_per_streamline: Dictionary in which each key has X items, X being the number of streamlines.
Notes
It is very important to respect the convention; verify that the streamlines match the reference and are effectively in the right space.
Any change to the number of streamlines, data_per_point or data_per_streamline requires particular verification.
In the case of a manipulation not allowed by this object, use Nibabel directly and be careful.
are_compatible(sft_1, sft_2)
Compatibility verification of two StatefulTractogram objects, to ensure space, origin, data_per_point and data_per_streamline consistency.
compute_bounding_box()
Compute the bounding box of the streamlines in their current state.
Returns the 8 corners of the XYZ-aligned box; all zeros if there are no streamlines.
from_sft(streamlines, sft, data_per_point=None, data_per_streamline=None)
Create an instance of StatefulTractogram from another instance of StatefulTractogram.
streamlines: Streamlines of the tractogram.
sft: The other StatefulTractogram to copy the space attributes AND state from.
data_per_point: Dictionary in which each key has X items, and each item has Y_i items, X being the number of streamlines and Y_i being the number of points on streamline #i.
data_per_streamline: Dictionary in which each key has X items, X being the number of streamlines.
is_bbox_in_vox_valid()
Verify that the bounding box is valid in voxel space. Negative coordinates or coordinates above the volume dimensions are considered invalid in voxel space.
Returns whether the streamlines are within the volume of the associated reference.
remove_invalid_streamlines(epsilon=0.001)
Remove streamlines with invalid coordinates from the object. This will also remove the corresponding data_per_point and data_per_streamline. Invalid coordinates are any X, Y, Z values above the reference dimensions or below zero.
epsilon: Epsilon value for the bounding box verification (default 0.001).
Returns a tuple of two lists: indices_to_remove, indices_to_keep.
to_origin(target_origin)
Safe function to change streamlines to a particular origin standard; False means NIFTI (center) and True means TrackVis (corner).
TensorModel
dipy.workflows.stats.TensorModel(gtab, fit_method='WLS', return_S0_hat=False, *args, **kwargs)
Bases: dipy.reconst.base.ReconstModel
Diffusion Tensor
Methods
fit(data[, mask]): Fit method of the DTI model class.
predict(dti_params[, S0]): Predict a signal for this TensorModel class instance given parameters.
__init__(gtab, fit_method='WLS', return_S0_hat=False, *args, **kwargs)
A Diffusion Tensor Model [1], [2].
gtab: A GradientTable class instance.
fit_method: str or callable; a str can be one of the following:
'WLS' for weighted least squares, dti.wls_fit_tensor();
'LS' or 'OLS' for ordinary least squares, dti.ols_fit_tensor();
'NLLS' for non-linear least squares, dti.nlls_fit_tensor();
'RT' or 'restore' or 'RESTORE' for RESTORE robust tensor fitting [3], dti.restore_fit_tensor().
return_S0_hat: Boolean to return (True) or not (False) the S0 values for the fit.
args, kwargs: Passed on to the fit_method. See dti.wls_fit_tensor and dti.ols_fit_tensor for details.
min_signal: The minimum signal value. Needs to be a strictly positive number. Default: minimal signal in the data provided to fit.
Notes
In order to increase speed of processing, tensor fitting is done simultaneously over many voxels. Many fit_methods use the ‘step’ parameter to set the number of voxels that will be fit at once in each iteration. This is the chunk size as a number of voxels. A larger step value should speed things up, but it will also take up more memory. It is advisable to keep an eye on memory consumption as this value is increased.
E.g., in iter_fit_tensor() we have a default step value of 1e4.
References
Basser, P.J., Mattiello, J., LeBihan, D., 1994. Estimation of the effective self-diffusion tensor from the NMR spin echo. J Magn Reson B 103, 247-254.
Basser, P., Pierpaoli, C., 1996. Microstructural and physiological features of tissues elucidated by quantitative diffusion-tensor MRI. Journal of Magnetic Resonance 111, 209-219.
Lin-Ching C., Jones D.K., Pierpaoli, C. 2005. RESTORE: Robust estimation of tensors by outlier rejection. MRM 53: 1088-1095
fit(data, mask=None)
Fit method of the DTI model class.
data: The measured signal from one voxel.
mask: A boolean array used to mark the coordinates in the data that should be analyzed; has the shape data.shape[:-1].
predict(dti_params, S0=1.0)
Predict a signal for this TensorModel class instance given parameters.
dti_params: The last dimension should have 12 tensor parameters: 3 eigenvalues, followed by the 3 eigenvectors.
S0: The non diffusion-weighted signal in every voxel, or across all voxels (default 1).
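Examples
A short fitting sketch, assuming data, bvals, bvecs and mask come from a loaded DWI dataset (all placeholders):
>>> from dipy.core.gradients import gradient_table
>>> from dipy.reconst.dti import TensorModel
>>> gtab = gradient_table(bvals, bvecs)  # doctest: +SKIP
>>> tenfit = TensorModel(gtab).fit(data, mask=mask)  # doctest: +SKIP
>>> fa = tenfit.fa  # fractional anisotropy map  # doctest: +SKIP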
Workflow
dipy.workflows.stats.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: object
Methods
get_io_iterator(): Create an iterator for IO.
get_short_name(): Return a short name for the workflow, used to subdivide.
get_sub_runs(): Return no sub runs since this is a simple workflow.
manage_output_overwrite(): Check if a file will be overwritten upon processing the inputs.
run(): Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator()
Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other context) and the run method's docstring.
get_short_name()
Return a short name for the workflow, used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
dipy.workflows.stats.anatomical_measures(bundle, metric, dt, pname, bname, subject, group_id, ind, dir)
Calculates a dti measure (e.g. FA, MD) per point on streamlines and saves it in an HDF5 file.
bundle: Name of the bundle being analyzed.
metric: dti metric, e.g. FA, MD.
dt: DataFrame to be populated.
pname: Name of the dti metric.
bname: Name of the bundle being analyzed.
subject: Subject number as a string (e.g. 10001).
group_id: Which group the subject belongs to; 1 for patient and 0 for control.
ind: Tells which disk number a point belongs to.
dir: Path of the output directory.
dipy.workflows.stats.assignment_map(target_bundle, model_bundle, no_disks)
Calculates assignment maps of the target bundle with reference to the model bundle centroids.
target_bundle: Target bundle extracted from subject data in common space.
model_bundle: Atlas bundle used as reference.
no_disks: Number of disks used for dividing a bundle into disks (default 100).
References
Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, F.C., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).
dipy.workflows.stats.binary_dilation(input, structure=None, iterations=1, mask=None, output=None, border_value=0, origin=0, brute_force=False)
Multidimensional binary dilation with the given structuring element.
input: Binary array_like to be dilated. Non-zero (True) elements form the subset to be dilated.
structure: Structuring element used for the dilation. Non-zero elements are considered True. If no structuring element is provided, an element is generated with a square connectivity equal to one.
iterations: The dilation is repeated iterations times (one, by default). If iterations is less than 1, the dilation is repeated until the result does not change anymore. Only an integer number of iterations is accepted.
mask: If a mask is given, only those elements with a True value at the corresponding mask element are modified at each iteration.
output: Array of the same shape as input, into which the output is placed. By default, a new array is created.
border_value: Value at the border in the output array.
origin: Placement of the filter (default 0).
brute_force: Memory condition: if False, only the pixels whose value was changed in the last iteration are tracked as candidates to be updated (dilated) in the current iteration; if True, all pixels are considered as candidates for dilation, regardless of what happened in the previous iteration (default False).
Returns the dilation of the input by the structuring element.
See also
grey_dilation, binary_erosion, binary_closing, binary_opening, generate_binary_structure
Notes
Dilation [1] is a mathematical morphology operation [2] that uses a structuring element for expanding the shapes in an image. The binary dilation of an image by a structuring element is the locus of the points covered by the structuring element, when its center lies within the non-zero points of the image.
References
[1] https://en.wikipedia.org/wiki/Dilation_%28morphology%29
[2] https://en.wikipedia.org/wiki/Mathematical_morphology
Examples
>>> import numpy as np
>>> from scipy import ndimage
>>> a = np.zeros((5, 5))
>>> a[2, 2] = 1
>>> a
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])
>>> ndimage.binary_dilation(a)
array([[False, False, False, False, False],
[False, False, True, False, False],
[False, True, True, True, False],
[False, False, True, False, False],
[False, False, False, False, False]], dtype=bool)
>>> ndimage.binary_dilation(a).astype(a.dtype)
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 0.]])
>>> # 3x3 structuring element with connectivity 1, used by default
>>> struct1 = ndimage.generate_binary_structure(2, 1)
>>> struct1
array([[False, True, False],
[ True, True, True],
[False, True, False]], dtype=bool)
>>> # 3x3 structuring element with connectivity 2
>>> struct2 = ndimage.generate_binary_structure(2, 2)
>>> struct2
array([[ True, True, True],
[ True, True, True],
[ True, True, True]], dtype=bool)
>>> ndimage.binary_dilation(a, structure=struct1).astype(a.dtype)
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 0.]])
>>> ndimage.binary_dilation(a, structure=struct2).astype(a.dtype)
array([[ 0., 0., 0., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 0., 0., 0.]])
>>> ndimage.binary_dilation(a, structure=struct1,\
... iterations=2).astype(a.dtype)
array([[ 0., 0., 1., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 1., 0., 0.]])
dipy.workflows.stats.buan_bundle_profiles(model_bundle_folder, bundle_folder, orig_bundle_folder, metric_folder, group_id, subject, no_disks=100, out_dir='')
Applies statistical analysis on bundles and saves the results in a directory specified by out_dir.
model_bundle_folder: Path to the input model bundle files. This path may contain wildcards to process multiple inputs at once.
bundle_folder: Path to the input bundle files in common space. This path may contain wildcards to process multiple inputs at once.
orig_bundle_folder: Path to the input bundle files in native space. This path may contain wildcards to process multiple inputs at once.
metric_folder: Path to the input dti metric or/and peak files. It will be used as the metric for statistical analysis of bundles.
group_id: What group the subject belongs to; either 0 for control or 1 for patient.
subject: Subject id, e.g. 10001.
no_disks: Number of disks used for dividing a bundle into disks (default 100).
out_dir: Output directory (default: input file directory).
References
Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, F.C., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).
dipy.workflows.stats.bundle_shape_similarity(bundle1, bundle2, rng, clust_thr=[5, 3, 1.5], threshold=6)
Calculates bundle shape similarity between two given bundles using the bundle adjacency (BA) metric.
bundle1: White matter tract from one subject (e.g. AF_L).
bundle2: White matter tract from another subject (e.g. AF_L).
clust_thr: List of clustering thresholds used in QuickBundlesX.
threshold: Threshold used in computing bundle adjacency. The threshold controls how strict the shape similarity calculation between two bundles is; a smaller threshold means bundles must be strictly similar to get a higher shape similarity score.
Returns the bundle shape similarity score between the two tracts.
References
Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, F.C., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).
Garyfallidis E. et al., QuickBundles a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.
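Examples
A usage sketch, assuming bundle1 and bundle2 are already loaded streamline sets (placeholder names):
>>> import numpy as np
>>> from dipy.workflows.stats import bundle_shape_similarity
>>> rng = np.random.RandomState(42)
>>> score = bundle_shape_similarity(bundle1, bundle2, rng,
...                                 clust_thr=[5, 3, 1.5],
...                                 threshold=6)  # doctest: +SKIP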
dipy.workflows.stats.glob(pathname, *, recursive=False)
Return a list of paths matching a pathname pattern.
The pattern may contain simple shell-style wildcards a la fnmatch. However, unlike fnmatch, filenames starting with a dot are special cases that are not matched by ‘*’ and ‘?’ patterns.
If recursive is true, the pattern ‘**’ will match any files and zero or more directories and subdirectories.
dipy.workflows.stats.gradient_table(bvals, bvecs=None, big_delta=None, small_delta=None, b0_threshold=50, atol=0.01, btens=None)
A general function for creating diffusion MR gradients.
It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.
bvals: Can be any of the following:
(1) an array of shape (N,) or (1, N) or (N, 1) with the b-values;
(2) a path for the file which contains an array like the above (1);
(3) an array of shape (N, 4) or (4, N), in which case this parameter is considered to be a b-table which contains both bvals and bvecs (the next parameter is then skipped);
(4) a path for the file which contains an array like the one at (3).
bvecs: Can be any of the following:
an array of shape (N, 3) or (3, N) with the b-vectors;
a path for the file which contains an array like the previous.
big_delta: Acquisition pulse separation time in seconds (default None).
small_delta: Acquisition pulse duration time in seconds (default None).
b0_threshold: All b-values with values less than or equal to b0_threshold are considered b0s, i.e. without diffusion weighting.
atol: All b-vectors need to be unit vectors up to a tolerance.
btens: Can be any of the following:
a string specifying the shape of the encoding tensor for all volumes in data. Options: 'LTE', 'PTE', 'STE', 'CTE', corresponding to linear, planar, spherical, and "cigar-shaped" tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor's normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value;
an array of strings of shape (N,), (N, 1), or (1, N) specifying the encoding tensor shape for each volume separately, N corresponding to the number of volumes in data; options for elements in the array are 'LTE', 'PTE', 'STE', 'CTE', with the same conventions as above;
an array of shape (N, 3, 3) specifying the b-tensor of each volume exactly, N corresponding to the number of volumes in data. No rotation or scaling is performed.
Returns a GradientTable with all the gradient information.
Notes
Often b0s (b-values which correspond to images without diffusion weighting) have 0 values however in some cases the scanner cannot provide b0s of an exact 0 value and it gives a bit higher values e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
We assume that the minimum number of b-values is 7.
B-vectors should be unit vectors.
Examples
>>> import numpy as np
>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
... [1, 0, 0],
... [0, 1, 0],
... [0, 0, 1],
... [sq2, sq2, 0],
... [sq2, 0, sq2],
... [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs)
>>> gt.bvecs.shape == bvecs.shape
True
>>> gt = gradient_table(bvals, bvecs.T)
>>> gt.bvecs.shape == bvecs.T.shape
False
dipy.workflows.stats.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True)
Load data and other information from a nifti file.
fname: Full path to a nifti file.
return_img: Whether to return the nibabel nifti img object (default False).
return_voxsize: Whether to return the nifti header zooms (default False).
return_coords: Whether to return the nifti header aff2axcodes (default False).
as_ndarray: Convert the nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, turn this option to False (default True).
See also
load_nifti_data
dipy.workflows.stats.load_tractogram(filename, reference, to_space=<Space.RASMM: 'rasmm'>, to_origin=<Origin.NIFTI: 'center'>, bbox_valid_check=True, trk_header_check=True)
Load the stateful tractogram from any format (trk, tck, vtk, fib, dpy).
filename: Filename with a valid extension.
reference: trk.header (dict), or 'same' if the input is a trk file; a reference that provides the spatial attributes, typically a nifti-related object from the native diffusion used for streamlines generation.
to_space: Space to which the streamlines will be transformed after loading.
to_origin: Origin to which the streamlines will be transformed after loading; NIFTI standard, the default (center of the voxel), or TRACKVIS standard (corner of the voxel).
bbox_valid_check: Verification for negative voxel coordinates or values above the volume dimensions (default True, to enforce a valid file).
trk_header_check: Verification that the reference has the same header and spatial attributes as the input tractogram when a trk file is loaded.
Returns the loaded tractogram (the file must have been saved properly).
dipy.workflows.stats.optional_package(name, trip_msg=None)
Return package-like thing and module setup for package name.
name: Package name.
trip_msg: Message to give when someone tries to use the returned package, but we could not import it and have returned a TripWire object instead. A default message is used if None.
Returns:
pkg_like: If we can import the package, return it; otherwise return an object raising an error when accessed.
have_pkg: True if the import of the package was successful, False otherwise.
module_setup: Callable, usually set as setup_module in the calling namespace, to allow skipping tests.
Examples
Typical use would be something like this at the top of a module using an optional package:
>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')
Of course in this case the package doesn’t exist, and so, in the module:
>>> have_pkg
False
and
>>> pkg.some_function()
Traceback (most recent call last):
...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError
If the module does exist - we get the module
>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True
Or a submodule if that’s what we asked for
>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True
dipy.workflows.stats.peak_values(bundle, peaks, dt, pname, bname, subject, group_id, ind, dir)
Finds the generalized fractional anisotropy (gfa) and quantitative anisotropy (qa) values from a peaks object (e.g. CSA) for every point on a streamline used while tracking and saves them in an HDF5 file.
bundle: Name of the bundle being analyzed.
peaks: Contains peak directions and values.
dt: DataFrame to be populated.
pname: Name of the dti metric.
bname: Name of the bundle being analyzed.
subject: Subject number as a string (e.g. 10001).
group_id: Which group the subject belongs to; 1 for patient and 0 for control.
ind: Tells which disk number a point belongs to.
dir: Path of the output directory.
dipy.workflows.stats.read_bvals_bvecs(fbvals, fbvecs)
Read b-values and b-vectors from disk.
fbvals: Full path to the file with b-values, or None to not read bvals.
fbvecs: Full path to the file with b-vectors, or None to not read bvecs.
Notes
Files can be either ‘.bvals’/’.bvecs’ or ‘.txt’ or ‘.npy’ (containing arrays stored with the appropriate values).
dipy.workflows.stats.save_nifti(fname, data, affine, hdr=None)
Save a data array into a nifti file.
fname: The full path to the file to be saved.
data: The array with the data to save.
affine: The affine transform associated with the file.
hdr: May contain additional information to store in the file header.
dipy.workflows.stats.save_tractogram(sft, filename, bbox_valid_check=True)
Save the stateful tractogram in any format (trk, tck, vtk, fib, dpy).
sft: The stateful tractogram to save.
filename: Filename with a valid extension.
bbox_valid_check: Verification for negative voxel coordinates or values above the volume dimensions (default True, to enforce a valid file).
Returns True if the saving operation was successful.
dipy.workflows.stats.segment_from_cfa(tensor_fit, roi, threshold, return_cfa=False)
Segment the cfa inside roi using the values from threshold as bounds.
tensor_fit: TensorFit object.
roi: A binary mask, which contains the bounding box for the segmentation.
threshold: An iterable that defines the min and max values to use for the thresholding. The values are specified as (R_min, R_max, G_min, G_max, B_min, B_max).
return_cfa: If True, the cfa is also returned.
Returns the binary mask of the segmentation and, if return_cfa is True, an array with shape (..., 3), where ... is the shape of tensor_fit: the color fractional anisotropy, ordered as an nd array with the last dimension of size 3 for the R, G and B channels.
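Examples
A sketch of corpus callosum segmentation from a tensor fit, assuming tenfit is a fitted TensorModel and mask bounds the expected CC region (both placeholders); the threshold follows the SNRinCCFlow default above:
>>> from dipy.segment.mask import segment_from_cfa
>>> threshold = (0.6, 1, 0, 0.1, 0, 0.1)
>>> cc_mask = segment_from_cfa(tenfit, mask, threshold)  # doctest: +SKIP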
dipy.workflows.stats.transform_streamlines(streamlines, mat, in_place=False)
Apply an affine transformation to streamlines.
streamlines: Streamlines object.
mat: Transformation matrix.
in_place: If True, the data is changed in place. Be careful: this changes the input streamlines.
Returns a sequence of transformed 2D ndarrays with shape[-1] == 3.
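Examples
A sketch that shifts streamlines by a translation; the 4x4 affine here is arbitrary and streamlines is a placeholder:
>>> import numpy as np
>>> from dipy.tracking.streamline import transform_streamlines
>>> affine = np.eye(4)
>>> affine[:3, 3] = [10, 0, 0]  # translate 10 units along x
>>> moved = transform_streamlines(streamlines, affine)  # doctest: +SKIP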
ClosestPeakDirectionGetter
dipy.workflows.tracking.ClosestPeakDirectionGetter
Bases: dipy.direction.closest_peak_direction_getter.PmfGenDirectionGetter
A direction getter that returns the closest odf peak to the previous tracking direction.
Methods
from_pmf(...): Constructor for making a DirectionGetter from an array of Pmfs.
from_shcoeff(...): Probabilistic direction getter from a distribution of directions on the sphere.
initial_direction(...): Returns the best directions at the seed location to start tracking.
get_direction
CmcStoppingCriterion
dipy.workflows.tracking.CmcStoppingCriterion
Bases: dipy.tracking.stopping_criterion.AnatomicalStoppingCriterion
Continuous map criterion (CMC) stopping criterion from [1]. This implements the use of partial volume fraction (PVE) maps to determine when the tracking stops.
Attributes: include_map, exclude_map, step_size, average_voxel_size, correction_factor.
References
Girard, G., Whittingstall, K., Deriche, R., and Descoteaux, M. "Towards quantitative connectivity analysis: reducing tractography biases." NeuroImage, 98, 266-278, 2014.
Methods
from_pve(...): Create an AnatomicalStoppingCriterion from partial volume fraction (PVE) maps.
check_point
get_exclude
get_include
DeterministicMaximumDirectionGetter
dipy.workflows.tracking.DeterministicMaximumDirectionGetter
Bases: dipy.direction.probabilistic_direction_getter.ProbabilisticDirectionGetter
Return the direction on the sphere with the highest probability mass function (pmf).
Methods
from_pmf(...): Constructor for making a DirectionGetter from an array of Pmfs.
from_shcoeff(...): Probabilistic direction getter from a distribution of directions on the sphere.
initial_direction(...): Returns the best directions at the seed location to start tracking.
get_direction
LocalFiberTrackingPAMFlow
dipy.workflows.tracking.LocalFiberTrackingPAMFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator(): Create an iterator for IO.
get_short_name(): Return a short name for the workflow, used to subdivide.
get_sub_runs(): Return no sub runs since this is a simple workflow.
manage_output_overwrite(): Check if a file will be overwritten upon processing the inputs.
run(...): Workflow for local fiber tracking.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow, used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(pam_files, stopping_files, seeding_files, use_binary_mask=False, stopping_thr=0.2, seed_density=1, step_size=0.5, tracking_method='eudx', pmf_threshold=0.1, max_angle=30.0, out_dir='', out_tractogram='tractogram.trk', save_seeds=False)
Workflow for local fiber tracking.
This workflow uses a saved peaks and metrics (PAM) file as input.
pam_files: Path to the peaks and metrics files. This path may contain wildcards to use multiple masks at once.
stopping_files: Path to images (e.g. FA) used as the stopping criterion for tracking.
seeding_files: A binary image showing where we need to seed for tracking.
use_binary_mask: If True, uses a binary stopping criterion. If the provided stopping_files are not binary, stopping_thr will be used to binarize the images.
stopping_thr: Threshold applied to the stopping volume's data to identify where tracking has to stop (default 0.2).
seed_density: Number of seeds per dimension inside the voxel. For example, a seed_density of 2 means 8 regularly distributed points in the voxel, and a seed_density of 1 means 1 point at the center of the voxel.
step_size: Step size used for tracking (default 0.5 mm).
tracking_method: One of the following:
"eudx" (uses the peaks saved in the pam_files);
"deterministic" or "det" for deterministic tracking (uses the sh saved in the pam_files, default);
"probabilistic" or "prob" for probabilistic tracking (uses the sh saved in the pam_files);
"closestpeaks" or "cp" for ClosestPeaks tracking (uses the sh saved in the pam_files).
pmf_threshold: Threshold for ODF functions (default 0.1).
max_angle: Maximum angle between streamline segments (range [0, 90], default 30).
out_dir: Output directory (default: input file directory).
out_tractogram: Name of the tractogram file to be saved (default 'tractogram.trk').
save_seeds: If True, save the seeds associated with their streamline in the 'data_per_streamline' tractogram dictionary, using 'seeds' as the key.
References
Garyfallidis, E., University of Cambridge, PhD thesis, 2012.
Amirbekian, B., University of California San Francisco, PhD thesis, 2017.
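Examples
A sketch of a programmatic run; the file names are placeholders:
>>> from dipy.workflows.tracking import LocalFiberTrackingPAMFlow
>>> LocalFiberTrackingPAMFlow().run('peaks.pam5', 'fa.nii.gz',
...                                 'seed_mask.nii.gz',
...                                 tracking_method='det',
...                                 out_dir='out')  # doctest: +SKIP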
LocalTracking
dipy.workflows.tracking.LocalTracking(direction_getter, stopping_criterion, seeds, affine, step_size, max_cross=None, maxlen=500, fixedstep=True, return_all=True, random_seed=None, save_seeds=False)
Bases: object
__init__(direction_getter, stopping_criterion, seeds, affine, step_size, max_cross=None, maxlen=500, fixedstep=True, return_all=True, random_seed=None, save_seeds=False)
Creates streamlines by using local fiber-tracking.
Parameters
direction_getter: Used to get directions for fiber tracking.
stopping_criterion: Identifies endpoints and invalid points to inform tracking.
seeds: Points to seed the tracking. Seed points should be given in the point space of the track (see affine).
affine: Coordinate space for the streamline points with respect to voxel indices of input data. This affine can contain scaling, rotational, and translational components but should not contain any shearing. An identity matrix can be used to generate streamlines in "voxel coordinates" as long as isotropic voxels were used to acquire the data.
step_size: Step size used for tracking.
max_cross: The maximum number of directions to track from each seed in crossing voxels. By default all initial directions are tracked.
maxlen: Maximum number of steps to track from each seed. Used to prevent infinite loops.
fixedstep: If true, a fixed step size is used; otherwise a variable step size is used.
return_all: If true, return all generated streamlines; otherwise return only streamlines reaching endpoints or exiting the image.
random_seed: The seed for the random number generator (numpy.random.seed and random.seed).
save_seeds: If True, return seeds alongside streamlines.
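A minimal sketch of a tracking loop, assuming an FA map fa, a seed mask seed_mask, an affine and a direction getter dg have been prepared elsewhere (all hypothetical names):
>>> from dipy.tracking.local_tracking import LocalTracking
>>> from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion
>>> from dipy.tracking.streamline import Streamlines
>>> from dipy.tracking import utils
>>> stopping_criterion = ThresholdStoppingCriterion(fa, 0.25)
>>> seeds = utils.seeds_from_mask(seed_mask, affine, density=1)
>>> streamline_generator = LocalTracking(dg, stopping_criterion, seeds,
...                                      affine, step_size=0.5)
>>> streamlines = Streamlines(streamline_generator)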
PFTrackingPAMFlow
dipy.workflows.tracking.PFTrackingPAMFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Workflow for Particle Filtering Tracking.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(pam_files, wm_files, gm_files, csf_files, seeding_files, step_size=0.2, seed_density=1, pmf_threshold=0.1, max_angle=20.0, pft_back=2, pft_front=1, pft_count=15, out_dir='', out_tractogram='tractogram.trk', save_seeds=False)
Workflow for Particle Filtering Tracking.
This workflow uses a saved peaks and metrics (PAM) file as input.
Parameters
pam_files: Path to the peaks and metrics files. This path may contain wildcards to use multiple masks at once.
wm_files: Path to the white matter partial volume estimate for tracking (CMC).
gm_files: Path to the grey matter partial volume estimate for tracking (CMC).
csf_files: Path to the cerebrospinal fluid partial volume estimate for tracking (CMC).
seeding_files: A binary image showing where to seed for tracking.
step_size: Step size used for tracking (default 0.2 mm).
seed_density: Number of seeds per dimension inside each voxel. For example, a seed_density of 2 means 8 regularly distributed points in the voxel, and a seed_density of 1 means 1 point at the center of the voxel.
pmf_threshold: Threshold for ODF functions (default 0.1).
max_angle: Maximum angle between streamline segments (range [0, 90], default 20).
pft_back: Distance in mm to back-track before starting the particle filtering tractography (default 2 mm). The total particle filtering tractography distance is equal to back_tracking_dist + front_tracking_dist.
pft_front: Distance in mm to run the particle filtering tractography after the back-track distance (default 1 mm). The total particle filtering tractography distance is equal to back_tracking_dist + front_tracking_dist.
pft_count: Number of particles to use in the particle filter (default 15).
out_dir: Output directory (default: input file directory).
out_tractogram: Name of the tractogram file to be saved (default 'tractogram.trk').
save_seeds: If true, save the seeds associated to their streamline in the 'data_per_streamline' Tractogram dictionary using 'seeds' as the key.
References
Girard, G., Whittingstall, K., Deriche, R., & Descoteaux, M. Towards quantitative connectivity analysis: reducing tractography biases. NeuroImage, 98, 266-278, 2014.
ParticleFilteringTracking
dipy.workflows.tracking.ParticleFilteringTracking(direction_getter, stopping_criterion, seeds, affine, step_size, max_cross=None, maxlen=500, pft_back_tracking_dist=2, pft_front_tracking_dist=1, pft_max_trial=20, particle_count=15, return_all=True, random_seed=None, save_seeds=False)
Bases: dipy.tracking.local_tracking.LocalTracking
__init__(direction_getter, stopping_criterion, seeds, affine, step_size, max_cross=None, maxlen=500, pft_back_tracking_dist=2, pft_front_tracking_dist=1, pft_max_trial=20, particle_count=15, return_all=True, random_seed=None, save_seeds=False)
A streamline generator using the particle filtering tractography method [1].
Parameters
direction_getter: Used to get directions for fiber tracking.
stopping_criterion: Identifies endpoints and invalid points to inform tracking.
seeds: Points to seed the tracking. Seed points should be given in the point space of the track (see affine).
affine: Coordinate space for the streamline points with respect to voxel indices of input data. This affine can contain scaling, rotational, and translational components but should not contain any shearing. An identity matrix can be used to generate streamlines in "voxel coordinates" as long as isotropic voxels were used to acquire the data.
step_size: Step size used for tracking.
max_cross: The maximum number of directions to track from each seed in crossing voxels. By default all initial directions are tracked.
maxlen: Maximum number of steps to track from each seed. Used to prevent infinite loops.
pft_back_tracking_dist: Distance in mm to back-track before starting the particle filtering tractography. The total particle filtering tractography distance is equal to back_tracking_dist + front_tracking_dist. By default this is set to 2 mm.
pft_front_tracking_dist: Distance in mm to run the particle filtering tractography after the back-track distance. The total particle filtering tractography distance is equal to back_tracking_dist + front_tracking_dist. By default this is set to 1 mm.
pft_max_trial: Maximum number of trials for the particle filtering tractography (prevents infinite loops).
particle_count: Number of particles to use in the particle filter.
return_all: If true, return all generated streamlines; otherwise return only streamlines reaching endpoints or exiting the image.
random_seed: The seed for the random number generator (numpy.random.seed and random.seed).
save_seeds: If True, return seeds alongside streamlines.
References
Girard, G., Whittingstall, K., Deriche, R., & Descoteaux, M. Towards quantitative connectivity analysis: reducing tractography biases. NeuroImage, 98, 266-278, 2014.
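A minimal sketch, assuming partial volume maps wm, gm, csf with average voxel size avs, a direction getter dg, plus seeds and affine prepared elsewhere (all hypothetical names):
>>> from dipy.tracking.local_tracking import ParticleFilteringTracking
>>> from dipy.tracking.stopping_criterion import CmcStoppingCriterion
>>> from dipy.tracking.streamline import Streamlines
>>> cmc = CmcStoppingCriterion.from_pve(wm, gm, csf, step_size=0.2,
...                                     average_voxel_size=avs)
>>> pft_generator = ParticleFilteringTracking(
...     dg, cmc, seeds, affine, step_size=0.2, maxlen=1000,
...     pft_back_tracking_dist=2, pft_front_tracking_dist=1,
...     particle_count=15, return_all=False)
>>> streamlines = Streamlines(pft_generator)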
ProbabilisticDirectionGetter
dipy.workflows.tracking.ProbabilisticDirectionGetter
Bases: dipy.direction.closest_peak_direction_getter.PmfGenDirectionGetter
Randomly samples directions on a sphere based on a probability mass function (pmf).
The main constructors for this class are currently from_pmf and from_shcoeff.
The pmf gives the probability that each direction on the sphere should be chosen as the next direction. To get the true pmf from the "raw pmf", directions more than max_angle degrees from the incoming direction are set to 0 and the result is normalized.
Methods
from_pmf: Constructor for making a DirectionGetter from an array of pmfs.
from_shcoeff: Probabilistic direction getter from a distribution of directions on the sphere.
initial_direction: Returns best directions at seed location to start tracking.
get_direction
__init__()
Direction getter from a pmf generator.
Parameters
pmf_gen: Used to get the probability mass function for selecting tracking directions.
max_angle: The maximum allowed angle between the incoming direction and a new direction.
sphere: The set of directions to be used for tracking.
pmf_threshold: Used to remove directions from the probability mass function for selecting the tracking direction.
relative_peak_threshold: Used for extracting initial tracking directions. Passed to peak_directions.
min_separation_angle: Used for extracting initial tracking directions. Passed to peak_directions.
See also
peak_directions
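For example, a probabilistic direction getter is typically built from spherical-harmonic coefficients, here the hypothetical shcoeff (e.g. from a CSD fit):
>>> from dipy.data import default_sphere
>>> from dipy.direction import ProbabilisticDirectionGetter
>>> dg = ProbabilisticDirectionGetter.from_shcoeff(shcoeff, max_angle=30.,
...                                                sphere=default_sphere)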
Space
dipy.workflows.tracking.Space
Bases: enum.Enum
Enum to simplify future changes to the convention.
StatefulTractogram
dipy.workflows.tracking.StatefulTractogram(streamlines, reference, space, origin=<Origin.NIFTI: 'center'>, data_per_point=None, data_per_streamline=None)
Bases: object
Class for a stateful representation of collections of streamlines. Objects are designed to be identical no matter the file format (trk, tck, vtk, fib, dpy), and facilitate transformation between spaces and data manipulation for each streamline / point.
Attributes
affine: Getter for the reference affine.
data_per_point: Getter for data_per_point.
data_per_streamline: Getter for data_per_streamline.
dimensions: Getter for the reference dimensions.
origin: Getter for the origin standard.
space: Getter for the current space.
space_attributes: Getter for the spatial attributes.
streamlines: Partially safe getter for streamlines.
voxel_order: Getter for the reference voxel order.
voxel_sizes: Getter for the reference voxel sizes.
Methods
are_compatible: Compatibility verification of two StatefulTractogram objects to ensure space, origin, data_per_point and data_per_streamline consistency.
compute_bounding_box: Compute the bounding box of the streamlines in their current state.
from_sft: Create an instance of StatefulTractogram from another instance of StatefulTractogram.
get_data_per_point_keys: Return a list of the data_per_point attribute names.
get_data_per_streamline_keys: Return a list of the data_per_streamline attribute names.
get_streamlines_copy: Safe getter for streamlines (for slicing).
is_bbox_in_vox_valid: Verify that the bounding box is valid in voxel space.
remove_invalid_streamlines: Remove streamlines with invalid coordinates from the object.
to_center: Safe function to shift streamlines so the center of the voxel is the origin.
to_corner: Safe function to shift streamlines so the corner of the voxel is the origin.
to_origin: Safe function to change streamlines to a particular origin standard; False means NIFTI (center) and True means TrackVis (corner).
to_rasmm: Safe function to transform streamlines (to RASMM) and update state.
to_space: Safe function to transform streamlines to a particular space using an enum and update state.
to_vox: Safe function to transform streamlines (to VOX) and update state.
to_voxmm: Safe function to transform streamlines (to VOXMM) and update state.
__init__(streamlines, reference, space, origin=<Origin.NIFTI: 'center'>, data_per_point=None, data_per_streamline=None)
Create a strict, state-aware, robust tractogram.
Parameters
streamlines: Streamlines of the tractogram.
reference: Nifti1Header, trk.header (dict) or another StatefulTractogram. Reference that provides the spatial attributes; typically a nifti-related object from the native diffusion data used for streamline generation.
space: Current space in which the streamlines are (vox, voxmm or rasmm). After tracking the space is VOX; after loading with nibabel the space is RASMM.
origin: Current origin in which the streamlines are (center or corner). After loading with nibabel the origin is CENTER.
data_per_point: Dictionary in which each key has X items, each item having Y_i items, X being the number of streamlines and Y_i the number of points on streamline #i.
data_per_streamline: Dictionary in which each key has X items, X being the number of streamlines.
Notes
It is very important to respect the convention: verify that the streamlines match the reference and are effectively in the right space.
Any change to the number of streamlines, data_per_point or data_per_streamline requires particular verification.
In the case of a manipulation not allowed by this object, use Nibabel directly and be careful.
are_compatible(sft_1, sft_2)
Compatibility verification of two StatefulTractogram objects to ensure space, origin, data_per_point and data_per_streamline consistency.
compute_bounding_box()
Compute the bounding box of the streamlines in their current state.
Returns the 8 corners of the XYZ-aligned box, or all zeros if there are no streamlines.
from_sft(streamlines, sft, data_per_point=None, data_per_streamline=None)
Create an instance of StatefulTractogram from another instance of StatefulTractogram.
Parameters
streamlines: Streamlines of the tractogram.
sft: The other StatefulTractogram to copy the space_attributes AND state from.
data_per_point: Dictionary in which each key has X items, each item having Y_i items, X being the number of streamlines and Y_i the number of points on streamline #i.
data_per_streamline: Dictionary in which each key has X items, X being the number of streamlines.
is_bbox_in_vox_valid()
Verify that the bounding box is valid in voxel space. Negative coordinates or coordinates above the volume dimensions are considered invalid in voxel space.
Returns whether the streamlines are within the volume of the associated reference.
remove_invalid_streamlines(epsilon=0.001)
Remove streamlines with invalid coordinates from the object. This will also remove the corresponding data_per_point and data_per_streamline entries. Invalid coordinates are any X, Y, Z values above the reference dimensions or below zero.
Parameters
epsilon: Epsilon value for the bounding box verification. Default is 0.001.
Returns a tuple of two lists: indices_to_remove, indices_to_keep.
to_origin(target_origin)
Safe function to change streamlines to a particular origin standard; False means NIFTI (center) and True means TrackVis (corner).
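A minimal sketch of the state transitions, assuming streamlines and a nibabel reference image img exist (hypothetical names):
>>> from dipy.io.stateful_tractogram import Space, StatefulTractogram
>>> sft = StatefulTractogram(streamlines, img, Space.RASMM)
>>> sft.to_vox()     # streamlines now in voxel space
>>> sft.to_corner()  # origin shifted to the voxel corner
>>> if not sft.is_bbox_in_vox_valid():
...     _ = sft.remove_invalid_streamlines()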
ThresholdStoppingCriterion
dipy.workflows.tracking.ThresholdStoppingCriterion
Bases: dipy.tracking.stopping_criterion.StoppingCriterion
# Declarations from stopping_criterion.pxd below
cdef:
    double threshold, interp_out_double[1]
    double[:] interp_out_view = interp_out_view
    double[:, :, :] metric_map
Methods
check_point
Workflow
dipy.workflows.tracking.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: object
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator()
Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other context) and the run method's docstring.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
load_nifti
dipy.workflows.tracking.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True)
Load data and other information from a nifti file.
Parameters
fname: Full path to a nifti file.
return_img: Whether to return the nibabel nifti img object. Default: False.
return_voxsize: Whether to return the nifti header zooms. Default: False.
return_coords: Whether to return the nifti header aff2axcodes. Default: False.
as_ndarray: Convert the nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, set this option to False. Default: True.
See also
load_nifti_data
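A minimal usage sketch; 'dwi.nii.gz' is a hypothetical path, and the same function is importable from dipy.io.image:
>>> from dipy.io.image import load_nifti
>>> data, affine = load_nifti('dwi.nii.gz')
>>> data, affine, img, vox_size = load_nifti('dwi.nii.gz', return_img=True,
...                                          return_voxsize=True)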
save_tractogram
dipy.workflows.tracking.save_tractogram(sft, filename, bbox_valid_check=True)
Save the stateful tractogram in any format (trk, tck, vtk, fib, dpy).
Parameters
sft: The stateful tractogram to save.
filename: Filename with a valid extension.
bbox_valid_check: Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.
Returns True if the saving operation was successful.
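A minimal sketch, assuming sft is an existing StatefulTractogram; the function also lives in dipy.io.streamline:
>>> from dipy.io.streamline import save_tractogram
>>> save_tractogram(sft, 'tractogram.trk', bbox_valid_check=True)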
HorizonFlow
dipy.workflows.viz.HorizonFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: dipy.workflows.workflow.Workflow
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Interactive medical visualization - Invert the Horizon!
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
run(input_files, cluster=False, cluster_thr=15.0, random_colors=False, length_gt=0, length_lt=1000, clusters_gt=0, clusters_lt=100000000, native_coords=False, stealth=False, emergency_header='icbm_2009a', bg_color=(0, 0, 0), disable_order_transparency=False, buan=False, buan_thr=0.5, buan_highlight=(1, 0, 0), out_dir='', out_stealth_png='tmp.png')
Interactive medical visualization - Invert the Horizon!
Interact with any number of .trk, .tck or .dpy tractograms and .nii or .nii.gz anatomy files. Cluster streamlines on loading.
Parameters
input_files: Paths to the input tractogram and anatomy files.
cluster: Enable QuickBundlesX clustering.
cluster_thr: Distance threshold used for clustering, in mm. Default value 15.0; for small animal brains you may need to use something smaller, such as 2.0. For this parameter to be active, cluster should be enabled.
random_colors: If multiple tractograms have been included, each tractogram will be shown with a different color.
length_gt: Clusters with average length greater than length_gt (in mm) will be shown.
length_lt: Clusters with average length less than length_lt (in mm) will be shown.
clusters_gt: Clusters with size greater than clusters_gt will be shown.
clusters_lt: Clusters with size less than clusters_lt will be shown.
native_coords: Show results in native coordinates.
stealth: Do not use interactive mode; just save the figure.
emergency_header: If no anatomy reference is provided, an emergency header is used. Current options are 'icbm_2009a' and 'icbm_2009c'.
bg_color: Define the background color of the scene. Colors can be defined with 1 or 3 values and should be between [0-1]. Default is black (e.g. --bg_color 0 0 0 or --bg_color 0).
disable_order_transparency: Default False. Use depth peeling to sort transparent objects. If True, also enables anti-aliasing.
buan: Enables BUAN framework visualization. Default is False.
buan_thr: Default 0.5. Uses the threshold value to highlight segments on the bundle which have p-values less than this threshold.
buan_highlight: Define the bundle highlight area color. Colors can be defined with 1 or 3 values and should be between [0-1]. Default is red (e.g. --buan_highlight 1 0 0).
out_dir: Output directory. Default: current directory.
out_stealth_png: Filename of the saved picture.
References
Garyfallidis E., M-A. Cote, B.Q. Chandio, S. Fadnavis, J. Guaje, R. Aggarwal, E. St-Onge, K.S. Juneja, S. Koudoro, D. Reagan, DIPY Horizon: fast, modular, unified and adaptive visualization, Proceedings of: International Society of Magnetic Resonance in Medicine (ISMRM), Montreal, Canada, 2019.
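A minimal sketch invoking the flow from Python; the file names are hypothetical:
>>> from dipy.workflows.viz import HorizonFlow
>>> HorizonFlow().run(['bundle.trk', 't1.nii.gz'], cluster=True,
...                   cluster_thr=15.0)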
Workflow
dipy.workflows.viz.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: object
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator()
Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other context) and the run method's docstring.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
assignment_map
dipy.workflows.viz.assignment_map(target_bundle, model_bundle, no_disks)
Calculates assignment maps of the target bundle with reference to the model bundle centroids.
Parameters
target_bundle: Target bundle extracted from subject data in common space.
model_bundle: Atlas bundle used as reference.
no_disks: Number of disks used for dividing the bundle into disks (default 100).
References
Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, F-C., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).
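A minimal sketch, assuming target_bundle and model_bundle are streamline sets already moved to common space (hypothetical names); the function is also importable from dipy.stats.analysis, where it is defined:
>>> import numpy as np
>>> from dipy.stats.analysis import assignment_map
>>> indx = np.array(assignment_map(target_bundle, model_bundle, no_disks=100))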
horizon
dipy.workflows.viz.horizon(tractograms=None, images=None, pams=None, cluster=False, cluster_thr=15.0, random_colors=False, bg_color=(0, 0, 0), order_transparent=True, length_gt=0, length_lt=1000, clusters_gt=0, clusters_lt=10000, world_coords=True, interactive=True, buan=False, buan_colors=None, out_png='tmp.png', recorded_events=None, return_showm=False)
Interactive medical visualization - Invert the Horizon!
Parameters
tractograms: StatefulTractograms are used to make sure that the coordinate systems are correct.
images: Each tuple contains data and affine.
pams: Contains peak directions and spherical harmonic coefficients.
cluster: Enable QuickBundlesX clustering.
cluster_thr: Distance threshold used for clustering, in mm. Default value 15.0; for small animal data you may need to use something smaller, such as 2.0. For this parameter to be active, cluster should be enabled.
random_colors: If multiple tractograms have been included, each tractogram will be shown with a different color.
bg_color: Define the background color of the scene. Default is black (0, 0, 0).
order_transparent: Default True. Use depth peeling to sort transparent objects. If True, also enables anti-aliasing.
length_gt: Clusters with average length greater than length_gt (in mm) will be shown.
length_lt: Clusters with average length less than length_lt (in mm) will be shown.
clusters_gt: Clusters with size greater than clusters_gt will be shown.
clusters_lt: Clusters with size less than clusters_lt will be shown.
world_coords: Show data in world coordinates (not native voxel coordinates). Default True.
interactive: Allow user interaction. If False, Horizon goes into stealth mode and just saves pictures.
buan: Enables BUAN framework visualization. Default is False.
buan_colors: List of colors for bundles.
out_png: Filename of the saved picture.
recorded_events: File path to replay recorded events.
return_showm: Return the ShowManager object. Used only at the Python level; can be used for extending Horizon's capabilities externally and for testing purposes.
References
Garyfallidis E., M-A. Cote, B.Q. Chandio, S. Fadnavis, J. Guaje, R. Aggarwal, E. St-Onge, K.S. Juneja, S. Koudoro, D. Reagan, DIPY Horizon: fast, modular, unified and adaptive visualization, Proceedings of: International Society of Magnetic Resonance in Medicine (ISMRM), Montreal, Canada, 2019.
load_nifti
dipy.workflows.viz.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True)
Load data and other information from a nifti file.
Parameters
fname: Full path to a nifti file.
return_img: Whether to return the nibabel nifti img object. Default: False.
return_voxsize: Whether to return the nifti header zooms. Default: False.
return_coords: Whether to return the nifti header aff2axcodes. Default: False.
as_ndarray: Convert the nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, set this option to False. Default: True.
See also
load_nifti_data
load_tractogram
dipy.workflows.viz.load_tractogram(filename, reference, to_space=<Space.RASMM: 'rasmm'>, to_origin=<Origin.NIFTI: 'center'>, bbox_valid_check=True, trk_header_check=True)
Load the stateful tractogram from any format (trk, tck, vtk, fib, dpy).
Parameters
filename: Filename with a valid extension.
reference: Nifti1Header, trk.header (dict), or 'same' if the input is a trk file. Reference that provides the spatial attributes; typically a nifti-related object from the native diffusion data used for streamline generation.
to_space: Space to which the streamlines will be transformed after loading.
to_origin: Origin to which the streamlines will be shifted after loading: the NIFTI standard, the default (center of the voxel), or the TRACKVIS standard (corner of the voxel).
bbox_valid_check: Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.
trk_header_check: Verification that the reference has the same header spatial attributes as the input tractogram when a trk file is loaded.
Returns the loaded tractogram (the file must have been saved properly).
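A minimal round-trip sketch; for trk files the file itself can serve as its own reference via 'same' ('bundle.trk' is a hypothetical path):
>>> from dipy.io.stateful_tractogram import Space
>>> from dipy.io.streamline import load_tractogram, save_tractogram
>>> sft = load_tractogram('bundle.trk', 'same', to_space=Space.RASMM)
>>> save_tractogram(sft, 'bundle_copy.trk')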
numpy_to_vtk_colors
dipy.workflows.viz.numpy_to_vtk_colors(colors)
Convert a Numpy color array to a VTK color array.
Notes
If colors are not already in UNSIGNED_CHAR you may need to multiply by 255.
Examples
>>> import numpy as np
>>> from fury.utils import numpy_to_vtk_colors
>>> rgb_array = np.random.rand(100, 3)
>>> vtk_colors = numpy_to_vtk_colors(255 * rgb_array)
optional_package
dipy.workflows.viz.optional_package(name, trip_msg=None)
Return package-like thing and module setup for package name.
Parameters
name: Package name.
trip_msg: Message to give when someone tries to use the returned package, but we could not import it and have returned a TripWire object instead. Default message if None.
Returns
pkg_like: If we can import the package, return it; otherwise return a TripWire object, which raises an error when accessed.
have_pkg: True if the import of the package was successful, False otherwise.
module_setup: Callable, usually set as setup_module in the calling namespace, to allow skipping tests.
Examples
Typical use would be something like this at the top of a module using an optional package:
>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')
Of course in this case the package doesn’t exist, and so, in the module:
>>> have_pkg
False
and
>>> pkg.some_function()
Traceback (most recent call last):
...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError
If the module does exist - we get the module
>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True
Or a submodule if that’s what we asked for
>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True
Workflow
dipy.workflows.workflow.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)
Bases: object
Methods
get_io_iterator: Create an iterator for IO.
get_short_name: Return a short name for the workflow used to subdivide.
get_sub_runs: Return no sub runs since this is a simple workflow.
manage_output_overwrite: Check if a file will be overwritten upon processing the inputs.
run: Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)
Initialize the basic workflow object.
This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.
get_io_iterator()
Create an iterator for IO.
Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other context) and the run method's docstring.
get_short_name()
Return a short name for the workflow used to subdivide.
The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having subworkflow parameters with the same name.
For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.
Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.
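A minimal sketch of extending Workflow as described above; the subclass name, parameters and processing step are hypothetical. Note that get_io_iterator reads the run method's docstring, so run must carry a numpydoc-style docstring:
>>> from dipy.workflows.workflow import Workflow
>>> class AppendFlow(Workflow):
...     @classmethod
...     def get_short_name(cls):
...         return 'append'
...     def run(self, input_files, out_dir='', out_file='appended.txt'):
...         """Dummy workflow used as an illustration.
...
...         Parameters
...         ----------
...         input_files : string
...             Path to the input files (may contain wildcards).
...         out_dir : string, optional
...             Output directory (default input file directory).
...         out_file : string, optional
...             Name of the output file.
...         """
...         io_it = self.get_io_iterator()
...         for in_file, out_path in io_it:
...             pass  # read in_file and write the result to out_path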
io_iterator_
dipy.workflows.workflow.io_iterator_(frame, fnc, output_strategy='absolute', mix_names=False)
Create an IOIterator using introspection.
Parameters
frame: Contains the info about the current values of local variables.
fnc: The function to inspect.
output_strategy: Controls the behavior of the IOIterator for output paths.
mix_names: Whether or not to append a mix of input names at the beginning.