align
align._public
align.bundlemin
align.crosscorr
align.expectmax
align.imaffine
align.imwarp
align.metrics
align.parzenhist
align.reslice
align.scalespace
align.streamlinear
align.sumsqdiff
align.transforms
align.vector_fields
Bunch
floating
AffineMap
AffineRegistration
AffineTransform3D
CCMetric
DiffeomorphicMap
EMMetric
MutualInformationMetric
RigidIsoScalingTransform3D
RigidScalingTransform3D
RigidTransform3D
SSDMetric
StreamlineLinearRegistration
SymmetricDiffeomorphicRegistration
TranslationTransform3D
partial
AffineInvalidValuesError
AffineInversionError
AffineMap
AffineRegistration
IsotropicScaleSpace
MutualInformationMetric
Optimizer
ParzenJointHistogram
ScaleSpace
Bunch
DiffeomorphicMap
DiffeomorphicRegistration
ScaleSpace
SymmetricDiffeomorphicRegistration
floating
CCMetric
EMMetric
SSDMetric
SimilarityMetric
floating
ParzenJointHistogram
IsotropicScaleSpace
ScaleSpace
floating
BundleMinDistanceAsymmetricMetric
BundleMinDistanceMatrixMetric
BundleMinDistanceMetric
BundleSumDistanceMatrixMetric
Optimizer
StreamlineDistanceMetric
StreamlineLinearRegistration
StreamlineRegistrationMap
Streamlines
AffineTransform2D
AffineTransform3D
RigidIsoScalingTransform2D
RigidIsoScalingTransform3D
RigidScalingTransform2D
RigidScalingTransform3D
RigidTransform2D
RigidTransform3D
RotationTransform2D
RotationTransform3D
ScalingTransform2D
ScalingTransform3D
Transform
TranslationTransform2D
TranslationTransform3D
align


alias of numpy.float32


Find the affine transformation between two 3D images. 

Read a syn registration mapping from a nifti file. 

Register a DWI series to the mean of the B0 images in that series. 

Register DWI data to a template through the B0 volumes. 

Register a series to a reference image. 

Resample an image (moving) from one space to another (static). 

Register two collections of streamlines ('bundles') to each other. 

Register a 2D/3D source image (moving) to a 2D/3D target image (static). 

Write out a syn registration mapping to a nifti file. 
align._public
Registration API: simplified API for registration of MRI data and of streamlines.

partial(func, *args, **keywords)  new function with partial application of the given arguments and keywords. 


Find the affine transformation between two 3D images. 

Load data and other information from a nifti file. 

Load the stateful tractogram of the .trk format 

Helper function that handles inputs that can be paths, nifti img or arrays 

Read a syn registration mapping from a nifti file. 

Register a DWI series to the mean of the B0 images in that series. 

Register DWI data to a template through the B0 volumes. 

Register a series to a reference image. 

Resample an image (moving) from one space to another (static). 

Save a data array into a nifti file. 
Change the number of points of streamlines 


Register two collections of streamlines ('bundles') to each other. 

Register a 2D/3D source image (moving) to a 2D/3D target image (static). 

Transformation to align the center of mass of the input images. 

Apply a linear transformation, given by affine, to streamlines. 

Issue a warning, or maybe ignore it or raise an exception. 

Write out a syn registration mapping to a nifti file. 
align.bundlemin
Return number of cpus as determined by omp_get_num_procs. 

Determine the effective number of threads to be used for OpenMP calls 

Minimum direct flipped distance matrix between two streamline sets 
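The distance computed here can be sketched in a few lines of numpy: for two streamlines with the same number of points, take the mean point-wise distance in both the direct and the flipped orientation and keep the minimum. This is an illustrative re-implementation, not the compiled routine above; `mdf_distance` is a hypothetical name.

```python
import numpy as np

def mdf_distance(s1, s2):
    """Minimum average direct-flipped distance between two streamlines
    with the same number of points (illustrative sketch)."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    direct = np.mean(np.linalg.norm(s1 - s2, axis=1))
    flipped = np.mean(np.linalg.norm(s1 - s2[::-1], axis=1))
    return min(direct, flipped)

# Two 3-point streamlines tracing the same curve in opposite directions:
a = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
b = a[::-1]  # same curve, reversed orientation
print(mdf_distance(a, b))  # 0.0: the flipped term recognizes the match
```

The flip handles the fact that a streamline has no intrinsic orientation: the same anatomical pathway may be stored head-to-tail or tail-to-head.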
align.crosscorr
Utility functions used by the Cross Correlation (CC) metric
Gradient of the CC Metric w.r.t. 

Gradient of the CC Metric w.r.t. 

Gradient of the CC Metric w.r.t. 

Gradient of the CC Metric w.r.t. 

Precomputations to quickly compute the gradient of the CC Metric 

Precomputations to quickly compute the gradient of the CC Metric 

Precomputations to quickly compute the gradient of the CC Metric 

Precomputations to quickly compute the gradient of the CC Metric 
align.expectmax
Demons step for EM metric in 2D 

Demons step for EM metric in 3D 

Computes the mean and std. 

Computes the mean and std. 

Quantizes a 2D image to num_levels quantization levels 

Quantizes a 3D volume to num_levels quantization levels 
align.imaffine
Affine image registration module consisting of the following classes:

AffineMap: encapsulates the necessary information to perform affine transforms between two domains, defined by a static and a moving image. The domain of the transform is the set of points in the static image’s grid, and the codomain is the set of points in the moving image. When we call the transform method, AffineMap maps each point x of the domain (static grid) to the codomain (moving grid) and interpolates the moving image at that point to obtain the intensity value to be placed at x in the resulting grid. The transform_inverse method performs the opposite operation, mapping points in the codomain to points in the domain.

ParzenJointHistogram: computes the marginal and joint distributions of intensities of a pair of images, using Parzen windows [Parzen62] with a cubic spline kernel, as proposed by Mattes et al. [Mattes03]. It also computes the gradient of the joint histogram w.r.t. the parameters of a given transform.

MutualInformationMetric: computes the value and gradient of the mutual information metric the way Optimizer needs them. That is, given a set of transform parameters, it will use ParzenJointHistogram to compute the value and gradient of the joint intensity histogram evaluated at the given parameters, and then evaluate the value and gradient of the histogram’s mutual information.

AffineRegistration: it runs the multi-resolution registration, putting all the pieces together. It needs to create the scale space of the images and run the multi-resolution registration by using the Metric and the Optimizer at each level of the Gaussian pyramid. At each level, it will set up the metric to compute value and gradient of the metric with the input images with different levels of smoothing.

[Parzen62] E. Parzen. On the estimation of a probability density function and the mode. Annals of Mathematical Statistics, 33(3), 1065-1076, 1962.

[Mattes03] Mattes, D., Haynor, D. R., Vesselle, H., Lewellen, T. K., & Eubank, W. PET-CT image registration in the chest using free-form deformations. IEEE Transactions on Medical Imaging, 22(1), 120-128, 2003.
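As a concrete illustration of the quantity being optimized, mutual information can be computed directly from a joint intensity histogram with numpy. This is a toy sketch of the definition only, not the Parzen-window implementation described above; `mutual_information` is a hypothetical helper name.

```python
import numpy as np

def mutual_information(joint_hist):
    """MI of a joint intensity histogram (illustrative sketch)."""
    p = joint_hist / joint_hist.sum()      # joint distribution P(s, m)
    ps = p.sum(axis=1, keepdims=True)      # marginal of the static image
    pm = p.sum(axis=0, keepdims=True)      # marginal of the moving image
    nz = p > 0                             # avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / (ps @ pm)[nz]))

# Perfectly matched intensities (diagonal histogram) carry high MI, while
# statistically independent intensities (outer-product histogram) carry none:
aligned = np.diag([10.0, 20.0, 30.0])
independent = np.outer([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
print(mutual_information(aligned) > mutual_information(independent))  # True
```

The Parzen-window machinery exists to make this quantity smooth and differentiable in the transform parameters, which a raw histogram is not.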

Computes the mutual information and its gradient (if requested) 


Deprecate a renamed or removed function argument. 

Extracts the rotational and spacing components from a matrix 
Bilinear interpolation of a 2D scalar image 

Trilinear interpolation of a 3D scalar image 

Take floor(total_voxels/k) samples from a (2D or 3D) grid 


Transformation to align the center of mass of the input images. 

Transformation to align the geometric center of the input images. 

Transformation to align the origins of the input images. 

Issue a warning, or maybe ignore it or raise an exception. 
align.imwarp
Classes and functions for Symmetric Diffeomorphic Registration



alias of numpy.float32


Extracts the rotational and spacing components from a matrix 

Returns the matrix product A.dot(B) considering None as the identity 
align.metrics
Metrics for Symmetric Diffeomorphic Registration

alias of numpy.float32


Return the gradient of an N-dimensional array. 

Multi-resolution Gauss-Seidel solver using V-type cycles 

Multi-resolution Gauss-Seidel solver using V-type cycles 
align.parzenhist

Computes the mutual information and its gradient (if requested) 

Evaluates the cubic spline at a set of values 

Evaluates the cubic spline derivative at a set of values 

Take floor(total_voxels/k) samples from a (2D or 3D) grid 
align.reslice

Returns a process pool object 

Apply an affine transformation. 

Determine the effective number of processes for parallelization. 

Reslice data with new voxel resolution defined by new_zooms 
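A sketch of the shape bookkeeping behind reslicing, assuming a diagonal, axis-aligned affine for brevity: the new grid shape follows from the ratio of old to new voxel sizes (zooms), and the affine is rescaled accordingly. Illustrative only; the real reslice function also interpolates the data onto the new grid, and `resliced_shape_and_affine` is a hypothetical name.

```python
import numpy as np

def resliced_shape_and_affine(shape, affine, zooms, new_zooms):
    """New grid shape and affine after changing voxel sizes (sketch)."""
    shape = np.asarray(shape, float)
    zooms, new_zooms = np.asarray(zooms, float), np.asarray(new_zooms, float)
    new_shape = tuple(np.rint(shape * zooms / new_zooms).astype(int))
    new_affine = affine.copy()
    new_affine[:3, :3] = affine[:3, :3] * (new_zooms / zooms)
    return new_shape, new_affine

affine = np.diag([2.0, 2.0, 2.0, 1.0])            # 2 mm isotropic voxels
shape, new_affine = resliced_shape_and_affine((64, 64, 32), affine,
                                              (2, 2, 2), (1, 1, 1))
print(shape)  # (128, 128, 64): halving the voxel size doubles the grid
```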
align.scalespace

alias of numpy.float32


Multidimensional Gaussian filter. 
align.streamlinear

Asymmetric Bundle-based Minimum distance. 

Bundle-based Minimum Distance aka BMD 

Bundle-based Minimum Distance aka BMD 

Bundle-based Sum Distance aka BMD 



alias of nibabel.streamlines.array_sequence.ArraySequence


MDF-based pairwise distance optimization function (MIN). 
MDF-based pairwise distance optimization function (MIN). 


MDF-based pairwise distance optimization function (MIN). 

MDF distance optimization function (SUM). 

Move streamlines to the origin 

Return 4x4 transformation matrix from sequence of transformations. 

Compose a 4x4 transformation matrix. 

Compose multiple 4x4 affine transformations in one 4x4 matrix 

Return sequence of transformations from transformation matrix. 

Given a 4x4 homogeneous matrix return the parameter vector. 
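The 4x4 homogeneous-matrix convention these compose/decompose helpers operate in can be illustrated with a hand-rolled rigid matrix. `rigid_matrix44` is a hypothetical sketch, not the DIPY helper: rotation (here about z) in the upper-left 3x3 block, translation in the last column.

```python
import numpy as np

def rigid_matrix44(theta_z, translation):
    """4x4 rigid transform from a z-rotation angle and a translation (sketch)."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]   # rotation block
    T[:3, 3] = translation                           # translation column
    return T

T = rigid_matrix44(np.pi / 2, [10.0, 0.0, 0.0])
p = np.array([1.0, 0.0, 0.0, 1.0])     # homogeneous point
print(T @ p)                           # rotate (1,0,0)->(0,1,0), shift x by 10
```

Composing several such matrices is a plain matrix product, which is why a whole pipeline of linear registrations can be collapsed into a single 4x4 affine.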
Minimum direct flipped distance matrix between two streamline sets 

Euclidean length of streamlines 


Progressive SLR. 

Run QuickBundlesX and then run again on the centroids of the last layer 



Select a random set of streamlines 
Change the number of points of streamlines 


Utility function for registering large tractograms. 

Return the current time in seconds since the Epoch. 

Apply affine transformation to streamlines 

Return the streamlines not as a list but as an array and an offset 

Utility function for registering large tractograms. 
align.sumsqdiff
Utility functions used by the Sum of Squared Differences (SSD) metric
Sum of squared differences between two 2D images 

Sum of squared differences between two 3D volumes 

The residual displacement field to be fit on the next iteration 

The residual displacement field to be fit on the next iteration 

Demons step for 2D SSD-driven registration 

Demons step for 3D SSD-driven registration 

One iteration of a large linear system solver for 2D SSD registration 

One iteration of a large linear system solver for 3D SSD registration 

Solves a 2-variable symmetric positive-definite linear system 

Solves a 3-variable symmetric positive-definite linear system 
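The small symmetric positive-definite systems above can also be solved with a Cholesky factorization; a standalone numpy sketch of the same idea (the real solvers run per voxel in compiled code, so they hand-code the 2x2 and 3x3 cases instead):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # symmetric positive-definite
b = np.array([1.0, 2.0, 3.0])

L = np.linalg.cholesky(A)          # A = L @ L.T
y = np.linalg.solve(L, b)          # forward substitution
x = np.linalg.solve(L.T, y)        # back substitution
print(np.allclose(A @ x, b))       # True
```

Positive-definiteness guarantees the factorization exists and the solve is stable, which is what makes the per-voxel solvers cheap.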
align.transforms

Base class (contract) for all transforms for affine image registration. Each transform must define the following (fast, nogil) methods: 

align.vector_fields
Computes the composition of two 2D displacement fields 

Computes the composition of two 3D displacement fields 

Create a binary 2D image where pixel values are 1 iff their distance to the center of the image is less than or equal to radius. 

Creates an invertible 2D displacement field 

Creates an invertible 3D displacement field 

Creates a random 2D displacement 'exactly' mapping points of two grids 

Creates a random 3D displacement 'exactly' mapping points of two grids 

Create a binary 3D image where voxel values are 1 iff their distance to the center of the image is less than or equal to radius. 

Downsamples the 2D input vector field by a factor of 2. 

Downsamples the input 3D vector field by a factor of 2 

Downsamples the input 2D image by a factor of 2 

Downsamples the input volume by a factor of 2 
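The factor-2 downsampling of a displacement field can be sketched by averaging each 2x2 block of vectors. Illustrative numpy only, assuming even dimensions, whereas the real routines also handle odd shapes; `downsample_field_2d` is a hypothetical name.

```python
import numpy as np

def downsample_field_2d(field):
    """Average each 2x2 block of a (rows, cols, 2) vector field (sketch)."""
    r, c = field.shape[:2]
    blocks = field[:r - r % 2, :c - c % 2].reshape(r // 2, 2, c // 2, 2, 2)
    return blocks.mean(axis=(1, 3))

field = np.zeros((4, 4, 2))
field[..., 0] = np.arange(4)[:, None]   # x-displacement grows along rows
small = downsample_field_2d(field)
print(small.shape, small[0, 0])  # (2, 2, 2) [0.5 0. ]
```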

Gradient of an image in physical space 

Computes the inverse of a 2D displacement field 

Computes the inverse of a 3D displacement field 

Linearly transforms all vectors of a 2D displacement field 

Linearly transforms all vectors of a 3D displacement field 

Resamples a 2D vector field to a custom target shape 

Resamples a 3D vector field to a custom target shape 

Simplifies a nonlinear warping function combined with an affine transform 

Simplifies a nonlinear warping function combined with an affine transform 

Gradient of an image in physical space 

Transforms a 2D image by an affine transform with bilinear interp. 

Transforms a 2D image by an affine transform with NN interpolation 

Transforms a 3D volume by an affine transform with trilinear interp. 

Transforms a 3D volume by an affine transform with NN interpolation 

Warps a 2D image using bilinear interpolation 

Warps a 2D image using nearest neighbor interpolation 

Warps a 3D volume using trilinear interpolation 

Warps a 3D volume using nearest-neighbor interpolation 
Bunch
Bases: object
floating
Find the affine transformation between two 3D images. Alternatively, find the combination of several linear transformations.
Containing the data for the moving object, or full path to a nifti file with the moving data.
Containing the data for the static object, or full path to a nifti file with the static data.
An affine transformation associated with the moving object. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
An affine transformation associated with the static object. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
Sequence of transforms to use in the gradual fitting. Default: gradual fit of the full affine (executed from left to right): ["center_of_mass", "translation", "rigid", "affine"]. Alternatively, any other combination of the following registration methods might be used: center_of_mass, translation, rigid, rigid_isoscaling, rigid_scaling and affine.
Initial guess for the transformation between the spaces. Default: identity.
Currently only supports ‘MI’ for MutualInformationMetric.
AffineRegistration keyword argument: the number of iterations at each scale of the scale space. level_iters[0] corresponds to the coarsest scale, level_iters[n-1] the finest, where n is the length of the sequence. By default, a 3-level scale space with iterations sequence equal to [10000, 1000, 100] will be used.
AffineRegistration keyword argument: custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0].
AffineRegistration keyword argument: custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1].
Set it to True to return the value of the optimized coefficients and the optimization quality metric.
moving image mask that defines which pixels in the moving image are used to calculate the mutual information.
static image mask that defines which pixels in the static image are used to calculate the mutual information.
MutualInformationMetric keyword argument: the number of bins to be used for computing the intensity histograms. The default is 32.
MutualInformationMetric keyword argument: There are two types of sampling: dense and sparse. Dense sampling uses all voxels for estimating the (joint and marginal) intensity histograms, while sparse sampling uses a subset of them. If sampling_proportion is None, then dense sampling is used. If sampling_proportion is a floating point value in (0,1] then sparse sampling is used, where sampling_proportion specifies the proportion of voxels to be used. The default is None (dense sampling).
Notes
Performs a gradual registration between the two inputs, using a pipeline that gradually approximates the final registration. If the final default step (affine) is omitted, the resulting affine may not have all 12 degrees of freedom adjusted.
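The dense/sparse sampling trade-off described above for sampling_proportion can be sketched with numpy: sparse sampling keeps roughly that fraction of the voxels when estimating the histograms. Illustrative only; DIPY draws its samples in compiled code, and the variable names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((32, 32, 32))

sampling_proportion = 0.1
n_total = volume.size
k = round(1 / sampling_proportion)            # take floor(n_total / k) samples
idx = rng.choice(n_total, size=n_total // k, replace=False)
samples = volume.reshape(-1)[idx]
print(samples.size, n_total)  # 3276 32768
```

Sparse sampling trades a noisier histogram estimate for a large reduction in per-iteration cost, which is why it is the common choice for big volumes.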
Read a syn registration mapping from a nifti file.
A file or image containing the mapping displacement field in each voxel. Shape (x, y, z, 3, 2).
Notes
See write_mapping()
for the data format expected.
Register a DWI series to the mean of the B0 images in that series.
all first registered to the first B0 volume
Diffusion data. Either as a 4D array or as a nifti image object, or as a string containing the full path to a nifti file.
If provided as a tuple of strings, these are assumed to be full paths to the bvals and bvecs files (in that order).
Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will override the affine that is stored in the data input. Default: use the affine stored in data.
Which b0 volume to use as reference. Default: 0
The transformations to perform in sequence (from left to right). Default: [center_of_mass, translation, rigid, affine]
static image mask that defines which pixels in the static image are used to calculate the mutual information.
Register DWI data to a template through the B0 volumes.
Containing the DWI data, or full path to a nifti file with DWI.
The gradients associated with the DWI data, or a sequence with (fbval, fbvec), full paths to bvals and bvecs files.
An affine transformation associated with the DWI. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
Containing the data for the template, or full path to a nifti file with the template data.
An affine transformation associated with the template. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
One of “syn” or “aff”, which designates which registration method is used: either “syn”, which uses the syn_registration() function, or “aff”, which uses the affine_registration() function. Default: “syn”.
Notes
This function assumes that the DWI data is already internally registered. See register_dwi_series().
Register a series to a reference image.
The data is 4D with the last dimension separating different 3D volumes
If this is an int, this is the index of the reference image within the series. Otherwise it is an array of data to register with (associated with a ref_affine required) or a nifti img or full path to a file containing one.
Sequence of transforms to do for each volume in the series. Default: (executed from left to right): [center_of_mass, translation, rigid, affine]
The affine. If provided, this input will override the affine provided together with the nifti img or file.
static image mask that defines which pixels in the static image are used to calculate the mutual information.
Resample an image (moving) from one space to another (static).
Containing the data for the moving object, or full path to a nifti file with the moving data.
An affine transformation associated with the moving object. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
Containing the data for the static object, or full path to a nifti file with the static data.
An affine transformation associated with the static object. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
If an additional affine is needed between the two spaces. Default: identity (no additional registration). 
Register two collections of streamlines (‘bundles’) to each other.
The two bundles to be registered. Given either as lists of arrays with 3D coordinates, or strings containing full paths to these files.
How many points to resample to. Default: 100.
Whether to return the moving bundle in the original space, but resampled in the static space to n_points.
Streamlines from the moving group, moved to be closely matched to the static group.
The affine transformation that takes us from ‘moving’ to ‘static’
Register a 2D/3D source image (moving) to a 2D/3D target image (static).
Either as a 2D/3D array or as a nifti image object, or as a string containing the full path to a nifti file.
Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will override the affine that is stored in the data input. Default: use the affine stored in data.
The metric to be optimized. One of 'CC', 'EM' or 'SSD'. Default: 'CC' => CCMetric. 
The dimensions of the image domain. Default: 3
the number of iterations at each level of the Gaussian Pyramid (the length of the list defines the number of pyramid levels to be used). Default: [10, 10, 5].
Parameters for initialization of the metric object. If not provided, uses the default settings of each metric.
The data in moving, warped towards the static data.
The vector field describing the forward warping from the source to the target.
The vector field describing the backward warping from the target to the source.
Write out a syn registration mapping to a nifti file.
The mapping to save, as derived from syn_registration().
Full path to the nifti file storing the mapping
Notes
The data in the file is organized with shape (X, Y, Z, 3, 2), such that the forward mapping in each voxel is in data[i, j, k, :, 0] and the backward mapping in each voxel is in data[i, j, k, :, 1].
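The on-disk layout described above can be sketched with toy fields: a forward and a backward displacement field of shape (X, Y, Z, 3) stacked into one (X, Y, Z, 3, 2) array, forward at [..., 0] and backward at [..., 1]. Illustrative numpy, assuming toy data rather than a real registration result.

```python
import numpy as np

X, Y, Z = 4, 5, 6
forward = np.random.default_rng(1).random((X, Y, Z, 3))
backward = -forward                     # toy stand-in for the inverse field

packed = np.stack([forward, backward], axis=-1)
print(packed.shape)                     # (4, 5, 6, 3, 2)

# Unpacking recovers each field:
assert np.array_equal(packed[..., 0], forward)
assert np.array_equal(packed[..., 1], backward)
```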
AffineMap
Bases: object
Methods
Return the value of the transformation, not a reference. 


Set the affine transform (operating in physical space). 

Transform the input image from codomain to domain space. 

Transform the input image from domain to codomain space. 
AffineMap.
Implements an affine transformation whose domain is given by domain_grid and domain_grid2world, and whose codomain is given by codomain_grid and codomain_grid2world.
The actual transform is represented by the affine matrix, which operates in world coordinates. Therefore, to transform a moving image towards a static image, we first map each voxel (i,j,k) of the static image to world coordinates (x,y,z) by applying domain_grid2world. Then we apply the affine transform to (x,y,z) obtaining (x’, y’, z’) in moving image’s world coordinates. Finally, (x’, y’, z’) is mapped to voxel coordinates (i’, j’, k’) in the moving image by multiplying (x’, y’, z’) by the inverse of codomain_grid2world. The codomain_grid_shape is used analogously to transform the static image towards the moving image when calling transform_inverse.
If the domain/codomain information is not provided (None) then the sampling information needs to be specified each time the transform or transform_inverse is called to transform images. Note that such sampling information is not necessary to transform points defined in physical space, such as streamlines.
the matrix defining the affine transform, where dim is the dimension of the space this map operates in (2 for 2D images, 3 for 3D images). If None, then self represents the identity transformation.
the shape of the default domain sampling grid. When transform is called to transform an image, the resulting image will have this shape, unless a different sampling information is provided. If None, then the sampling grid shape must be specified each time the transform method is called.
the grid-to-world transform associated with the domain grid. If None (the default), then the grid-to-world transform is assumed to be the identity.
the shape of the default codomain sampling grid. When transform_inverse is called to transform an image, the resulting image will have this shape, unless a different sampling information is provided. If None (the default), then the sampling grid shape must be specified each time the transform_inverse method is called.
the grid-to-world transform associated with the codomain grid. If None (the default), then the grid-to-world transform is assumed to be the identity.
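The voxel-to-world-to-voxel chain described above can be written out explicitly for a single static-grid point. Illustrative numpy only: AffineMap additionally interpolates the moving image at the resulting coordinates, which this sketch omits.

```python
import numpy as np

domain_grid2world = np.diag([2.0, 2.0, 2.0, 1.0])    # 2 mm static voxels
codomain_grid2world = np.eye(4)                      # 1 mm moving voxels
affine = np.eye(4)
affine[:3, 3] = [5.0, 0.0, 0.0]                      # shift 5 mm along x

ijk = np.array([1.0, 1.0, 1.0, 1.0])                 # static voxel (homogeneous)
xyz = domain_grid2world @ ijk                        # voxel -> world
xyz_moved = affine @ xyz                             # apply the affine in world space
ijk_moving = np.linalg.inv(codomain_grid2world) @ xyz_moved   # world -> moving voxel
print(ijk_moving[:3])  # [7. 2. 2.]
```

Keeping the affine in world coordinates is what lets the same matrix serve images with different voxel sizes: only the grid-to-world matrices at either end change.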
Return the value of the transformation, not a reference.
Copy of the transform, not a reference.
Set the affine transform (operating in physical space).
Also sets self.affine_inv  the inverse of affine, or None if there is no inverse.
the matrix representing the affine transform operating in physical space. The domain and codomain information remains unchanged. If None, then self represents the identity transformation.
Transform the input image from codomain to domain space.
By default, the transformed image is sampled at a grid defined by self.domain_shape and self.domain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.
the image to be transformed
the type of interpolation to be used, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor
the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.
the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).
the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.
If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just resampled on the domain grid of this transform.
the transformed image, sampled at the requested grid
Transform the input image from domain to codomain space.
By default, the transformed image is sampled at a grid defined by self.codomain_shape and self.codomain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.
the image to be transformed
the type of interpolation to be used, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor
the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.
the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).
the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.
If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just resampled on the domain grid of this transform.
the transformed image, sampled at the requested grid
AffineRegistration
Bases: object
Methods

Start the optimization process. 
Initialize an instance of the AffineRegistration class.
an instance of a metric. The default is None, implying the Mutual Information metric with default settings.
the number of iterations at each scale of the scale space. level_iters[0] corresponds to the coarsest scale, level_iters[n-1] the finest, where n is the length of the sequence. By default, a 3-level scale space with iterations sequence equal to [10000, 1000, 100] will be used.
custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0].
custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1].
optimization method to be used. If Scipy version < 0.12, then only L-BFGS-B is available. Otherwise, method can be any gradient-based method available in dipy.core.Optimize: CG, BFGS, Newton-CG, dogleg or trust-ncg. The default is ‘L-BFGS-B’.
If None, this parameter is not used and an isotropic scale space with the given factors and sigmas will be built. If not None, an anisotropic scale space will be used by automatically selecting the smoothing sigmas along each axis according to the voxel dimensions of the given image. The ss_sigma_factor is used to scale the automatically computed sigmas. For example, in the isotropic case, the sigma of the kernel will be \(factor * (2^i)\) where \(i = 1, 2, ..., n_scales - 1\) is the scale (the finest resolution image \(i=0\) is never smoothed). The default is None.
extra optimization options. The default is None, implying no extra options are passed to the optimizer.
Set the verbosity level of the algorithm:
0 : do not print anything
1 : print information about the current status of the algorithm
2 : print high level information of the components involved in the registration that can be used to detect a failing component
3 : print as much information as possible to isolate the cause of a bug
Default: 1
Start the optimization process.
the image to be used as reference during optimization.
the image to be used as “moving” during optimization. It is necessary to prealign the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “prealigning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix
the transformation with respect to whose parameters the gradient must be computed
parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.
the voxeltospace transformation associated with the static image. The default is None, implying the transform is the identity.
the voxeltospace transformation associated with the moving image. The default is None, implying the transform is the identity.
‘mass’: align centers of gravity; ‘voxel-origin’: align physical coordinates of voxel (0,0,0); ‘centers’: align physical coordinates of central voxels. Alternatively, an array of shape (dim+1, dim+1) can be given directly. If None, start from identity. The default is None.
if True, it returns the parameters for measuring the similarity between the images (default ‘False’). The metric containing optimal parameters and the distance between the images.
static image mask that defines which pixels in the static image are used to calculate the mutual information.
moving image mask that defines which pixels in the moving image are used to calculate the mutual information.
the resulting affine transformation
the optimal parameters (translation, rotation, shear, etc.)
the value of the function at the optimal parameters.
AffineTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
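What param_to_matrix produces can be illustrated with the simplest transform: a 3D translation has three parameters that land in the last column of a homogeneous matrix. A hypothetical numpy re-implementation for illustration, not DIPY's compiled code.

```python
import numpy as np

def translation3d_param_to_matrix(theta):
    """Homogeneous 4x4 matrix for a 3-parameter 3D translation (sketch)."""
    M = np.eye(4)
    M[:3, 3] = theta
    return M

identity_params = np.zeros(3)     # analogue of get_identity_parameters
M = translation3d_param_to_matrix([1.0, 2.0, 3.0])
print(M[:3, 3])  # [1. 2. 3.]
```

The other transforms follow the same contract with more parameters (3 for rotation, 6 for rigid, 12 for the full 3D affine), which is what lets the optimizer treat them uniformly.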
CCMetric
Bases: dipy.align.metrics.SimilarityMetric
Methods
Computes one step bringing the static image towards the moving. 

Computes one step bringing the moving image towards the static. 

Frees the resources allocated during initialization 

Numerical value assigned by this metric to the current image pair 

Prepares the metric to compute one displacement field iteration. 


Informs the metric how many pyramid levels are above the current one 

Informs the metric how many pyramid levels are below the current one 

Sets the moving image being compared against the static one. 

Sets the static image being compared against the moving one. 

This is called by the optimizer just after setting the moving image 

This is called by the optimizer just after setting the static image. 
Normalized Cross-Correlation Similarity metric.
the dimension of the image domain
be applied to the update field at each iteration
the radius of the squared (cubic) neighborhood at each voxel to be considered to compute the cross correlation
Computes one step bringing the static image towards the moving.
Computes the update displacement field to be used for registration of the static image towards the moving image
Computes one step bringing the moving image towards the static.
Computes the update displacement field to be used for registration of the moving image towards the static image
Numerical value assigned by this metric to the current image pair
Returns the Cross Correlation (data term) energy computed at the largest iteration
Prepares the metric to compute one displacement field iteration.
Precomputes the cross-correlation factors for efficient computation of the gradient of the Cross Correlation w.r.t. the displacement field. It also precomputes the image gradients in the physical space by reorienting the gradients in the voxel space using the corresponding affine transformations.
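The quantity this metric optimizes can be illustrated for one neighborhood: the normalized cross-correlation of static and moving intensities in a window around a voxel. Illustrative numpy only; the metric precomputes these factors for every voxel and differentiates them w.r.t. the displacement field, and `local_cc` is a hypothetical name.

```python
import numpy as np

def local_cc(static_win, moving_win):
    """Squared normalized cross-correlation of two windows (sketch)."""
    s = static_win - static_win.mean()
    m = moving_win - moving_win.mean()
    denom = np.sum(s * s) * np.sum(m * m)
    return (np.sum(s * m) ** 2) / denom if denom > 0 else 0.0

win = np.arange(27.0).reshape(3, 3, 3)   # radius-1 cubic neighborhood
print(local_cc(win, 2 * win + 5))        # affinely related windows correlate perfectly
```

Because the correlation is normalized per window, the metric is insensitive to locally varying intensity scaling and offset, which is why CC is a robust default for mono-modal registration.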
DiffeomorphicMap
Bases: object
Methods

Creates a zero displacement field 
Inversion error of the displacement fields 


Expands the displacement fields from current shape to new_shape 
Deformation field to transform an image in the backward direction 

Deformation field to transform an image in the forward direction 

Constructs a simplified version of this Diffeomorphic Map 


Try to interpret obj as a matrix 

Inverse of this DiffeomorphicMap instance 
Shallow copy of this DiffeomorphicMap instance 


Warps an image in the forward direction 

Warps an image in the backward direction 

Composition of this DiffeomorphicMap with a given endomorphism 
DiffeomorphicMap
Implements a diffeomorphic transformation on the physical space. The deformation fields encoding the direct and inverse transformations share the same domain discretization (both the discretization grid shape and voxel-to-space matrix). The input coordinates (physical coordinates) are first aligned using prealign, and then displaced using the corresponding vector field interpolated at the aligned coordinates.
the transformation’s dimension
the number of slices (if 3D), rows and columns of the deformation field’s discretization
the voxel-to-space transformation between the deformation field’s grid and space
the number of slices (if 3D), rows and columns of the default discretization of this map’s domain
the default voxel-to-space transformation between this map’s discretization and physical space
the number of slices (if 3D), rows and columns of the images that are ‘normally’ warped using this transformation in the forward direction (this will provide default transformation parameters to warp images under this transformation). By default, we assume that the inverse transformation is ‘normally’ used to warp images with the same discretization and voxel-to-space transformation as the deformation field grid.
the voxel-to-space transformation of images that are ‘normally’ warped using this transformation (in the forward direction).
the linear transformation to be applied to align input images to the reference space before warping under the deformation field.
Creates a zero displacement field
Creates a zero displacement field (the identity transformation).
Inversion error of the displacement fields
Estimates the inversion error of the displacement fields by computing statistics of the residual vectors obtained after composing the forward and backward displacement fields.
the displacement field resulting from composing the forward and backward displacement fields of this transformation (the residual should be zero for a perfect diffeomorphism)
statistics from the norms of the vectors of the residual displacement field: maximum, mean and standard deviation
Notes
Since the forward and backward displacement fields have the same discretization, the final composition is given by
comp[i] = forward[ i + Dinv * backward[i]]
where Dinv is the space-to-grid transformation of the displacement fields
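The residual composition above can be sketched in 1-D with NumPy (an illustrative toy, not DIPY's implementation; the grid-to-space transform is taken to be the identity, so Dinv drops out):

```python
import numpy as np

def compose_residual(forward, backward):
    # Residual displacement after composing the backward field with the
    # forward field: res[i] = backward[i] + forward[i + backward[i]].
    # Linear interpolation stands in for the field interpolation step.
    grid = np.arange(forward.shape[0], dtype=float)
    warped_forward = np.interp(grid + backward, grid, forward)
    return backward + warped_forward

# A constant shift and its exact inverse: the residual is (near) zero,
# as expected for a perfect diffeomorphism.
fwd = np.full(50, 2.0)
bwd = np.full(50, -2.0)
residual = compose_residual(fwd, bwd)
```

The statistics reported by the real method (maximum, mean, standard deviation) would then simply be norms taken over such a residual field.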
Expands the displacement fields from current shape to new_shape
Upsamples the discretization of the displacement fields to be of new_shape shape.
the factors scaling current spacings (voxel sizes) to spacings in the expanded discretization.
the shape of the arrays holding the upsampled discretization
Deformation field to transform an image in the backward direction
Returns the deformation field that must be used to warp an image under this transformation in the backward direction (note the ‘is_inverse’ flag).
Deformation field to transform an image in the forward direction
Returns the deformation field that must be used to warp an image under this transformation in the forward direction (note the ‘is_inverse’ flag).
Constructs a simplified version of this Diffeomorphic Map
The simplified version incorporates the prealign transform, as well as the domain and codomain affine transforms, into the displacement field. The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretizations. As a result, self.prealign, self.disp_grid2world, self.domain_grid2world and self.codomain_affine will be None (denoting the identity) in the resulting diffeomorphic map.
Try to interpret obj as a matrix
Some operations are performed faster if we know in advance if a matrix is the identity (so we can skip the actual matrix-vector multiplication). This function returns None if the given object is None or the ‘identity’ string. It returns the same object if it is a numpy array. It raises an exception otherwise.
any object
the same object given as argument if obj is None or a numpy array. None if obj is the ‘identity’ string.
Inverse of this DiffeomorphicMap instance
Returns a diffeomorphic map object representing the inverse of this transformation. The internal arrays are not copied but just referenced.
the inverse of this diffeomorphic map.
Shallow copy of this DiffeomorphicMap instance
Creates a shallow copy of this diffeomorphic map (the arrays are not copied but just referenced)
the shallow copy of this diffeomorphic map
Warps an image in the forward direction
Transforms the input image under this transformation in the forward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform(…) warps the image forwards, else it warps the image backwards).
the image to be warped under this transformation in the forward direction
the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest-neighbor interpolation
the transformation bringing world (space) coordinates to voxel coordinates of the image given as input
the number of slices, rows and columns of the desired warped image
warped image to physical space
the warped image under this transformation in the forward direction
Notes
See _warp_forward and _warp_backward documentation for further information.
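Warping under a displacement field can be sketched in 1-D with NumPy (illustrative only; the real transform additionally handles the prealign matrix, the grid-to-world transforms, and the is_inverse flag described above):

```python
import numpy as np

def warp_nearest(image, displacement):
    # Sample the image at the displaced coordinates using
    # nearest-neighbor interpolation (the 'nearest' mode above).
    grid = np.arange(image.shape[0])
    src = np.rint(grid + displacement).astype(int)
    src = np.clip(src, 0, image.shape[0] - 1)  # clamp at the boundary
    return image[src]

img = np.array([0., 1., 2., 3., 4.])
shift = np.full(5, 1.0)          # every voxel samples one voxel to the right
warped = warp_nearest(img, shift)  # -> [1., 2., 3., 4., 4.]
```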
Warps an image in the backward direction
Transforms the input image under this transformation in the backward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform_inverse(…) warps the image backwards, else it warps the image forwards).
the image to be warped under this transformation in the forward direction
the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest-neighbor interpolation
the transformation bringing world (space) coordinates to voxel coordinates of the image given as input
the number of slices, rows, and columns of the desired warped image
warped image to physical space
warped image under this transformation in the backward direction
Notes
See _warp_forward and _warp_backward documentation for further information.
Composition of this DiffeomorphicMap with a given endomorphism
Creates a new DiffeomorphicMap C with the same properties as self and composes its displacement fields with phi’s corresponding fields. The resulting diffeomorphism is of the form C(x) = phi(self(x)) with inverse C^{-1}(y) = self^{-1}(phi^{-1}(y)). We assume that phi is an endomorphism with the same discretization and domain affine as self to ensure that the composition inherits self’s properties (we also assume that the pre-aligning matrix of phi is None or identity).
the endomorphism to be warped by this diffeomorphic map
endomorphism given as input
Notes
The problem with our current representation of a DiffeomorphicMap is that the set of Diffeomorphism that can be represented this way (a prealigning matrix followed by a nonlinear endomorphism given as a displacement field) is not closed under the composition operation.
Supporting a general DiffeomorphicMap class, closed under composition, may be extremely costly computationally, and the kind of transformations we actually need for Avants’ midpoint algorithm (SyN) are much simpler.
EMMetric
Bases: dipy.align.metrics.SimilarityMetric
Methods
Computes one step bringing the static image towards the moving. 


Demons step for EM metric 
Computes one step bringing the reference image towards the static. 


Computes the Gauss-Newton energy minimization step 
Frees the resources allocated during initialization 

The numerical value assigned by this metric to the current image pair 

Prepares the metric to compute one displacement field iteration. 


Informs the metric how many pyramid levels are above the current one 

Informs the metric how many pyramid levels are below the current one 

Sets the moving image being compared against the static one. 

Sets the static image being compared against the moving one. 
This is called by the optimizer just after setting the moving image. 

This is called by the optimizer just after setting the static image. 
Expectation-Maximization metric
Similarity metric based on the Expectation-Maximization algorithm to handle multi-modal images. The transfer function is modeled as a set of hidden random variables that are estimated at each iteration of the algorithm.
the dimension of the image domain
smoothness parameter, the larger the value the smoother the deformation field
number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level; that parameter must be set for the optimizer, not the metric)
number of quantization levels (equal to the number of hidden variables in the EM algorithm)
if True, the gradient of the expected static image under the moving modality will be added to the gradient of the moving image, similarly, the gradient of the expected moving image under the static modality will be added to the gradient of the static image.
the optimization schedule to be used in the multi-resolution Gauss-Seidel optimization algorithm (not used if Demons Step is selected)
Computes one step bringing the static image towards the moving.
Computes the update displacement field to be used for registration of the static image towards the moving image
Demons step for EM metric
if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
the Demons step
Computes one step bringing the reference image towards the static.
Computes the forward update field to register the moving image towards the static image in a gradientbased optimization algorithm
Computes the Gauss-Newton energy minimization step
Computes the Newton step to minimize this energy, i.e., minimizes the linearized energy function with respect to the regularized displacement field (this step does not require post-smoothing, as opposed to the Demons step, which does not include regularization). To accelerate convergence we use the multigrid Gauss-Seidel algorithm proposed by Bruhn and Weickert [Bruhn05].
if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
the Newton step
References
[Bruhn05] Andres Bruhn and Joachim Weickert, “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.
The numerical value assigned by this metric to the current image pair
Returns the EM (data term) energy computed at the largest iteration
Prepares the metric to compute one displacement field iteration.
Precomputes the transfer functions (hidden random variables) and variances of the estimators. Also precomputes the gradient of both input images. Note that once the images are transformed to the opposite modality, the gradient of the transformed images can be used with the gradient of the corresponding modality in the same fashion as diffdemons does for monomodality images. If the flag self.use_double_gradient is True these gradients are averaged.
This is called by the optimizer just after setting the moving image.
EMMetric takes advantage of the image dynamics by computing the current moving image mask from the original_moving_image mask (warped by nearest neighbor interpolation)
the original moving image from which the current moving image was generated, the current moving image is the one that was provided via ‘set_moving_image(…)’, which may not be the same as the original moving image but a warped version of it.
the transformation that was applied to the original_moving_image to generate the current moving image
This is called by the optimizer just after setting the static image.
EMMetric takes advantage of the image dynamics by computing the current static image mask from the original_static_image mask (warped by nearest neighbor interpolation)
the original static image from which the current static image was generated, the current static image is the one that was provided via ‘set_static_image(…)’, which may not be the same as the original static image but a warped version of it (even the static image changes during Symmetric Normalization, not only the moving one).
the transformation that was applied to the original_static_image to generate the current static image
MutualInformationMetric
Bases: object
Methods

Numeric value of the negative Mutual Information. 

Numeric value of the metric and its gradient at given parameters. 

Numeric value of the metric's gradient at the given parameters. 

Prepare the metric to compute intensity densities and gradients. 
Initialize an instance of the Mutual Information metric.
This class implements the methods required by Optimizer to drive the registration process.
the number of bins to be used for computing the intensity histograms. The default is 32.
There are two types of sampling: dense and sparse. Dense sampling uses all voxels for estimating the (joint and marginal) intensity histograms, while sparse sampling uses a subset of them. If sampling_proportion is None, then dense sampling is used. If sampling_proportion is a floating point value in (0,1] then sparse sampling is used, where sampling_proportion specifies the proportion of voxels to be used. The default is None.
Notes
Since we use linear interpolation, images are not, in general, differentiable at exact voxel coordinates, but they are differentiable between voxel coordinates. When using sparse sampling, selected voxels are slightly moved by adding a small random displacement within one voxel to prevent sampling points from being located exactly at voxel coordinates. When using dense sampling, this random displacement is not applied.
Numeric value of the negative Mutual Information.
We need to change the sign so we can use standard minimization algorithms.
the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform
the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters
Numeric value of the metric and its gradient at given parameters.
the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform
the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters
the gradient of the negative Mutual Information
Numeric value of the metric’s gradient at the given parameters.
the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform
the gradient of the negative Mutual Information
Prepare the metric to compute intensity densities and gradients.
The histograms will be set up to compute probability densities of intensities within the minimum and maximum values of the static and moving images
the transformation with respect to whose parameters the gradient must be computed
static image
moving image. The dimensions of the static (S, R, C) and moving (S’, R’, C’) images do not need to be the same.
the gridtospace transform of the static image. The default is None, implying the transform is the identity.
the gridtospace transform of the moving image. The default is None, implying the spacing along all axes is 1.
the prealigning matrix (an affine transform) that roughly aligns the moving image towards the static image. If None, no prealignment is performed. If a prealignment matrix is available, it is recommended to provide this matrix as starting_affine instead of manually transforming the moving image to reduce interpolation artifacts. The default is None, implying no prealignment is performed.
static image mask that defines which pixels in the static image are used to calculate the mutual information.
moving image mask that defines which pixels in the moving image are used to calculate the mutual information.
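The quantity this metric minimizes can be sketched with a plain-NumPy joint-histogram estimate of the negative mutual information (illustrative only; the actual metric uses a Parzen-windowed joint histogram, which is smoother and differentiable):

```python
import numpy as np

def neg_mutual_information(static, moving, nbins=32):
    # Joint intensity histogram, normalized to a joint probability table.
    hist, _, _ = np.histogram2d(static.ravel(), moving.ravel(), bins=nbins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of static intensities
    py = pxy.sum(axis=0, keepdims=True)   # marginal of moving intensities
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    return -mi  # negated so standard minimizers can be used

rng = np.random.default_rng(0)
a = rng.random(10000)
aligned = neg_mutual_information(a, a)                       # strongly negative
independent = neg_mutual_information(a, rng.random(10000))   # near zero
```

Lower (more negative) values indicate a better intensity correspondence, which is why the optimizer drives this value down as the moving image is transformed toward the static one.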
RigidIsoScalingTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
Rigid iso-scaling transform in 3D (rotation + translation + isotropic scaling).
The parameter vector theta of length 7 is interpreted as follows:
theta[0] : rotation about the x axis
theta[1] : rotation about the y axis
theta[2] : rotation about the z axis
theta[3] : translation along the x axis
theta[4] : translation along the y axis
theta[5] : translation along the z axis
theta[6] : isotropic scaling
RigidScalingTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
Rigid scaling transform in 3D (rotation + translation + scaling).
The parameter vector theta of length 9 is interpreted as follows:
theta[0] : rotation about the x axis
theta[1] : rotation about the y axis
theta[2] : rotation about the z axis
theta[3] : translation along the x axis
theta[4] : translation along the y axis
theta[5] : translation along the z axis
theta[6] : scaling along the x axis
theta[7] : scaling along the y axis
theta[8] : scaling along the z axis
RigidTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
Rigid transform in 3D (rotation + translation).
The parameter vector theta of length 6 is interpreted as follows:
theta[0] : rotation about the x axis
theta[1] : rotation about the y axis
theta[2] : rotation about the z axis
theta[3] : translation along the x axis
theta[4] : translation along the y axis
theta[5] : translation along the z axis
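A matrix representation of such a parameter vector can be sketched with NumPy (illustrative; the rotation composition order and angle convention used by DIPY's param_to_matrix may differ):

```python
import numpy as np

def rigid_matrix_3d(theta):
    # theta = [rot_x, rot_y, rot_z (radians), t_x, t_y, t_z]
    cx, cy, cz = np.cos(theta[:3])
    sx, sy, sz = np.sin(theta[:3])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx   # one common convention, assumed here
    M[:3, 3] = theta[3:]
    return M

identity = rigid_matrix_3d(np.zeros(6))               # identity parameters
shift = rigid_matrix_3d(np.array([0, 0, 0, 1., 2., 3.]))  # translation only
```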
SSDMetric
Bases: dipy.align.metrics.SimilarityMetric
Methods
Computes one step bringing the static image towards the moving. 


Demons step for SSD metric 
Computes one step bringing the reference image towards the static. 


Computes the Gauss-Newton energy minimization step 
Nothing to free for the SSD metric 

The numerical value assigned by this metric to the current image pair 

Prepares the metric to compute one displacement field iteration. 


Informs the metric how many pyramid levels are above the current one 

Informs the metric how many pyramid levels are below the current one 

Sets the moving image being compared against the static one. 

Sets the static image being compared against the moving one. 

This is called by the optimizer just after setting the moving image 

This is called by the optimizer just after setting the static image. 
Sum of Squared Differences (SSD) Metric
Similarity metric for (monomodal) nonlinear image registration defined by the sum of squared differences (SSD)
the dimension of the image domain
smoothness parameter, the larger the value the smoother the deformation field
number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level; that parameter must be set for the optimizer, not the metric)
the displacement field step to be computed when ‘compute_forward’ and ‘compute_backward’ are called. Either ‘demons’ or ‘gauss_newton’
Computes one step bringing the static image towards the moving.
Computes the updated displacement field to be used for registration of the static image towards the moving image
Demons step for SSD metric
Computes the demons step proposed by Vercauteren et al. [Vercauteren09] for the SSD metric.
if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
the Demons step
References
[Vercauteren09] Tom Vercauteren, Xavier Pennec, Aymeric Perchant and Nicholas Ayache, “Diffeomorphic Demons: Efficient Non-parametric Image Registration”, NeuroImage, 2009.
Computes one step bringing the reference image towards the static.
Computes the update displacement field to be used for registration of the moving image towards the static image
Computes the Gauss-Newton energy minimization step
Minimizes the linearized energy function (Newton step) defined by the sum of squared differences of corresponding pixels of the input images with respect to the displacement field.
if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
if forward_step == True, the forward SSD Gauss-Newton step; otherwise, the backward step
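A demons-style SSD update can be sketched in 1-D with NumPy (illustrative only; DIPY's step includes the sigma_sq_x normalization and Gaussian post-smoothing of the field, which are simplified away here):

```python
import numpy as np

def demons_step_1d(static, moving, sigma_sq_x=1.0):
    # Classic demons update: push along the static gradient, scaled by the
    # intensity difference and normalized to bound the step length.
    diff = moving - static
    grad = np.gradient(static)
    denom = grad ** 2 + (diff ** 2) / sigma_sq_x
    return -diff * grad / np.maximum(denom, 1e-9)

static = np.array([0., 0., 1., 1., 1.])
aligned_step = demons_step_1d(static, static)  # zero update when aligned
```

When the images already match, the intensity difference vanishes and so does the update, which is the fixed point the registration converges to.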
StreamlineLinearRegistration
Bases: object
Methods

Find the minimum of the provided metric. 
Linear registration of 2 sets of streamlines [Garyfallidis15].
If None and fast is False then the BMD distance is used. If fast is True then a faster implementation of BMD is used. Otherwise, use the given distance metric.
Initial parametrization for the optimization.
a) If x0 has 6 elements, only rigid registration is performed, with the first 3 elements for translation and 3 for rotation. b) If it has 7 elements, isotropic scaling is also performed (similarity). c) If it has 12 elements, translation, rotation (in degrees), scaling and shearing are performed (affine).
Here is an example of x0 with 12 elements:
x0=np.array([0, 10, 0, 40, 0, 0, 2., 1.5, 1, 0.1, 0.5, 0])
This has translation (0, 10, 0), rotation (40, 0, 0) in degrees, scaling (2., 1.5, 1) and shearing (0.1, 0.5, 0).
x0 = np.array([0, 0, 0, 0, 0, 0])
x0 = np.array([0, 0, 0, 0, 0, 0, 1.])
x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])
‘L_BFGS_B’ or ‘Powell’ optimizers can be used. Default is ‘L_BFGS_B’.
If method == ‘L_BFGS_B’ then we can use bounded optimization. For example, for the six parameters of rigid registration we can set bounds = [(-30, 30), (-30, 30), (-30, 30), (-45, 45), (-45, 45), (-45, 45)]. That means that we have set the bounds for the three translations and three rotation axes (in degrees).
If True, information about the optimization is shown. Default: False.
Extra options to be used with the selected method.
If True save the transformation for each iteration of the optimizer. Default is False. Supported only with Scipy >= 0.11.
Number of threads to be used for OpenMP parallelization. If None (default) the value of the OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus |num_threads + 1| is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.
References
Garyfallidis et al., “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015.
Garyfallidis et al., “Direct nativespace fiber bundle alignment for group comparisons”, ISMRM, 2014.
Garyfallidis et al., “Recognition of white matter bundles using local and global streamline-based registration and clustering”, NeuroImage, 2017.
Find the minimum of the provided metric.
Reference or fixed set of streamlines.
Moving set of streamlines.
Transformation (4, 4) matrix to start the registration. mat is applied to moving. Default value None, which means that the initial transformation will be generated by shifting the centers of the moving and static sets of streamlines to the origin.
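That default initialization (mat=None) can be sketched with NumPy: a pure translation shifting a streamline point cloud's center of mass to the origin (illustrative; the actual implementation operates on both the static and moving sets):

```python
import numpy as np

def centering_transform(points):
    # 4x4 affine translating the centroid of a (N, 3) point set to the origin.
    mat = np.eye(4)
    mat[:3, 3] = -points.reshape(-1, 3).mean(axis=0)
    return mat

moving_pts = np.array([[4., 0., 0.], [6., 0., 0.]])  # toy streamline points
init = centering_transform(moving_pts)

# Apply the affine in homogeneous coordinates: the result is centered.
homog = np.c_[moving_pts, np.ones(len(moving_pts))]
centered = (init @ homog.T).T[:, :3]
```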
SymmetricDiffeomorphicRegistration
Bases: dipy.align.imwarp.DiffeomorphicRegistration
Methods

Return the resulting diffeomorphic map. 

Starts the optimization 

Sets the number of iterations at each pyramid level 

Composition of the current displacement field with the given field 
Symmetric Diffeomorphic Registration (SyN) Algorithm
Performs the multi-resolution optimization algorithm for nonlinear registration using a given similarity metric.
the metric to be optimized
the number of iterations at each level of the Gaussian Pyramid (the length of the list defines the number of pyramid levels to be used)
the optimization will stop when the estimated derivative of the energy profile w.r.t. time falls below this threshold
the number of iterations to be performed by the displacement field inversion algorithm
the length of the maximum displacement vector of the update displacement field at each iteration
parameter of the scale-space smoothing kernel. For example, the std. dev. of the kernel will be factor*(2^i) in the isotropic case, where i = 0, 1, …, n_scales is the scale
the displacement field inversion algorithm will stop iterating when the inversion error falls below this threshold
a function receiving a SymmetricDiffeomorphicRegistration object to be called after each iteration (this optimizer will call this function passing self as parameter)
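The per-level smoothing schedule described above is a simple geometric progression; a quick sketch (the factor value here is arbitrary, chosen only for illustration):

```python
# Kernel std. dev. at scale-space level i is factor * 2**i (isotropic case).
factor = 0.25  # hypothetical ss_sigma_factor value
sigmas = [factor * 2 ** i for i in range(3)]  # one sigma per pyramid level
```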
Return the resulting diffeomorphic map.
Returns the DiffeomorphicMap registering the moving image towards the static image.
Starts the optimization
the image to be used as reference during optimization. The displacement fields will have the same discretization as the static image.
the image to be used as “moving” during optimization. Since the deformation fields’ discretization is the same as the static image, it is necessary to prealign the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “prealigning” the moving image towards the static using an affine transformation given by the ‘prealign’ matrix
the voxel-to-space transformation associated with the static image
the voxel-to-space transformation associated with the moving image
the affine transformation (operating on the physical space) prealigning the moving image towards the static
the diffeomorphic map that brings the moving image towards the static one in the forward direction (i.e. by calling static_to_ref.transform) and the static image towards the moving one in the backward direction (i.e. by calling static_to_ref.transform_inverse).
Composition of the current displacement field with the given field
Interpolates new displacement at the locations defined by current_displacement. Equivalently, computes the composition C of the given displacement fields as C(x) = B(A(x)), where A is current_displacement and B is new_displacement. This function is intended to be used with deformation fields of the same sampling (e.g. to be called by a registration algorithm).
the displacement field defining where to interpolate new_displacement
the displacement field to be warped by current_displacement
the space-to-grid transform associated with the displacements’ grid (we assume that both displacements are discretized over the same grid)
scaling factor applied to d2. The effect may be interpreted as moving d1 displacements along a factor (time_scaling) of d2.
the warped displacement field
TranslationTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
partial
Find the affine transformation between two 3D images. Alternatively, find the combination of several linear transformations.
Containing the data for the moving object, or full path to a nifti file with the moving data.
Containing the data for the static object, or full path to a nifti file with the static data.
An affine transformation associated with the moving object. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
An affine transformation associated with the static object. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
Sequence of transforms to use in the gradual fitting. Default: gradual fit of the full affine (executed from left to right): ["center_of_mass", "translation", "rigid", "affine"]. Alternatively, any other combination of the following registration methods might be used: center_of_mass, translation, rigid, rigid_isoscaling, rigid_scaling and affine.
Initial guess for the transformation between the spaces. Default: identity.
Currently only supports ‘MI’ for MutualInformationMetric.
AffineRegistration keyword argument: the number of iterations at each scale of the scale space. level_iters[0] corresponds to the coarsest scale, level_iters[-1] the finest. By default, a 3-level scale space with iterations sequence equal to [10000, 1000, 100] will be used.
AffineRegistration keyword argument: custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0].
AffineRegistration keyword argument: custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1].
Set it to True to return the value of the optimized coefficients and the optimization quality metric.
moving image mask that defines which pixels in the moving image are used to calculate the mutual information.
static image mask that defines which pixels in the static image are used to calculate the mutual information.
MutualInformationMetric keyword argument: the number of bins to be used for computing the intensity histograms. The default is 32.
MutualInformationMetric keyword argument: There are two types of sampling: dense and sparse. Dense sampling uses all voxels for estimating the (joint and marginal) intensity histograms, while sparse sampling uses a subset of them. If sampling_proportion is None, then dense sampling is used. If sampling_proportion is a floating point value in (0,1] then sparse sampling is used, where sampling_proportion specifies the proportion of voxels to be used. The default is None (dense sampling).
Notes
Performs a gradual registration between the two inputs, using a pipeline that gradually approximates the final registration. If the final default step (affine) is omitted, the resulting affine may not have all 12 degrees of freedom adjusted.
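The first pipeline stage ("center_of_mass") can be sketched with NumPy as a pure translation mapping the moving image's intensity centroid onto the static one's (illustrative; later stages then refine this estimate under the mutual information metric):

```python
import numpy as np

def center_of_mass_affine(static, moving):
    # Intensity-weighted centroid of a 3D volume, in voxel coordinates.
    def centroid(img):
        idx = np.indices(img.shape).reshape(3, -1)
        w = img.ravel()
        return (idx * w).sum(axis=1) / w.sum()
    # A 4x4 affine whose translation aligns the two centroids.
    affine = np.eye(4)
    affine[:3, 3] = centroid(static) - centroid(moving)
    return affine

static = np.zeros((8, 8, 8)); static[4, 4, 4] = 1.0
moving = np.zeros((8, 8, 8)); moving[2, 4, 4] = 1.0
affine = center_of_mass_affine(static, moving)  # translation of [2, 0, 0]
```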
Load data and other information from a nifti file.
Full path to a nifti file.
Whether to return the nibabel nifti img object. Default: False
Whether to return the nifti header zooms. Default: False
Whether to return the nifti header aff2axcodes. Default: False
convert nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, just turn this option to False (default: True)
See also
load_nifti_data
Load the stateful tractogram of the .trk format
Filename with valid extension
trk.header (dict), or ‘same’ if the input is a trk file. Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion data used for streamlines generation.
Space to which the streamlines will be transformed after loading
NIFTI standard, default (center of the voxel); TRACKVIS standard (corner of the voxel)
Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce valid file.
Verification that the reference has the same header and spatial attributes as the input tractogram when a TRK file is loaded
The tractogram to load (must have been saved properly)
Helper function that handles inputs that can be paths, nifti img or arrays
Either as a 3D/4D array or as a nifti image object, or as a string containing the full path to a nifti file.
Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will override the affine that is stored in the data input. Default: use the affine stored in data.
Read a syn registration mapping from a nifti file.
A file or image containing the mapping displacement field in each voxel. Shape (x, y, z, 3, 2)
Notes
See write_mapping()
for the data format expected.
Register a DWI series to the mean of the B0 images in that series (all first registered to the first B0 volume).
Diffusion data. Either as a 4D array or as a nifti image object, or as a string containing the full path to a nifti file.
If provided as a tuple of strings, these are assumed to be full paths to the bvals and bvecs files (in that order).
Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will override the affine that is stored in the data input. Default: use the affine stored in data.
Which b0 volume to use as reference. Default: 0
The transformations to perform in sequence (from left to right):
Default: [center_of_mass, translation, rigid, affine]
static image mask that defines which pixels in the static image are used to calculate the mutual information.
Register DWI data to a template through the B0 volumes.
Containing the DWI data, or full path to a nifti file with DWI.
The gradients associated with the DWI data, or a sequence with (fbval, fbvec), full paths to bvals and bvecs files.
An affine transformation associated with the DWI. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
Containing the data for the template, or full path to a nifti file with the template data.
An affine transformation associated with the template. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
One of “syn” or “aff”, which designates which registration method is used: either “syn”, which uses the syn_registration() function, or “aff”, which uses the affine_registration() function. Default: “syn”.
See also
syn_registration(), affine_registration()
Notes
This function assumes that the DWI data is already internally registered. See register_dwi_series().
Register a series to a reference image.
The data is 4D with the last dimension separating different 3D volumes
If this is an int, this is the index of the reference image within the series. Otherwise it is an array of data to register with (associated with a ref_affine required) or a nifti img or full path to a file containing one.
Sequence of transforms to do for each volume in the series. Default: (executed from left to right): [center_of_mass, translation, rigid, affine]
The affine. If provided, this input will override the affine provided together with the nifti img or file.
static image mask that defines which pixels in the static image are used to calculate the mutual information.
Resample an image (moving) from one space to another (static).
Containing the data for the moving object, or full path to a nifti file with the moving data.
An affine transformation associated with the moving object. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
Containing the data for the static object, or full path to a nifti file with the moving data.
An affine transformation associated with the static object. Required if data is provided as an array. If provided together with nifti/path, will override the affine that is in the nifti.
If an additional affine is needed between the two spaces. Default: identity (no additional registration).
Save a data array into a nifti file.
The full path to the file to be saved.
The array with the data to save.
The affine transform associated with the file.
May contain additional information to store in the file header.
Change the number of points of streamlines (either by downsampling or upsampling).
Change the number of points of streamlines in order to obtain nb_points - 1 segments of equal length. Points of streamlines will be modified along the curve.
If ndarray, must have shape (N, 3) where N is the number of points of the streamline. If list, each item must be an ndarray of shape (Ni, 3) where Ni is the number of points of streamline i. If dipy.tracking.Streamlines, its common_shape must be 3.
integer representing number of points wanted along the curve.
dipy.tracking.Streamlines
Results of the downsampling or upsampling process.
Examples
>>> from dipy.tracking.streamline import set_number_of_points
>>> import numpy as np
One streamline, a semicircle:
>>> theta = np.pi*np.linspace(0, 1, 100)
>>> x = np.cos(theta)
>>> y = np.sin(theta)
>>> z = 0 * x
>>> streamline = np.vstack((x, y, z)).T
>>> modified_streamline = set_number_of_points(streamline, 3)
>>> len(modified_streamline)
3
Multiple streamlines:
>>> streamlines = [streamline, streamline[::2]]
>>> new_streamlines = set_number_of_points(streamlines, 10)
>>> [len(s) for s in streamlines]
[100, 50]
>>> [len(s) for s in new_streamlines]
[10, 10]
Register two collections of streamlines (‘bundles’) to each other.
The two bundles to be registered. Given either as lists of arrays with 3D coordinates, or strings containing full paths to these files.
How many points to resample to. Default: 100.
Whether to return the moving bundle in the original space, but resampled in the static space to n_points.
Streamlines from the moving group, moved to be closely matched to the static group.
The affine transformation that takes us from ‘moving’ to ‘static’
Register a 2D/3D source image (moving) to a 2D/3D target image (static).
Either as a 2D/3D array or as a nifti image object, or as a string containing the full path to a nifti file.
Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will override the affine that is stored in the data input. Default: use the affine stored in data.
The metric to be optimized. One of ‘CC’, ‘EM’ or ‘SSD’. Default: ‘CC’ => CCMetric.
The dimensions of the image domain. Default: 3
the number of iterations at each level of the Gaussian Pyramid (the length of the list defines the number of pyramid levels to be used). Default: [10, 10, 5].
Parameters for initialization of the metric object. If not provided, uses the default settings of each metric.
The data in moving, warped towards the static data.
The vector field describing the forward warping from the source to the target.
The vector field describing the backward warping from the target to the source.
Transformation to align the center of mass of the input images.
static image
the voxel-to-space transformation of the static image
moving image
the voxel-to-space transformation of the moving image
the affine transformation (translation only, in this case) aligning the center of mass of the moving image towards the one of the static image
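The center-of-mass alignment can be sketched in plain NumPy. This is a hypothetical re-implementation for illustration only (dipy's transform_centers_of_mass returns an AffineMap object rather than a bare matrix):

```python
import numpy as np

def center_of_mass_translation(static, static_grid2world, moving, moving_grid2world):
    """Translation-only affine that maps the static image's center of mass
    (in world coordinates) onto the moving image's center of mass."""
    def world_com(img, grid2world):
        # intensity-weighted mean voxel index
        idx = np.indices(img.shape).reshape(img.ndim, -1)
        w = img.reshape(-1).astype(float)
        com_vox = (idx * w).sum(axis=1) / w.sum()
        # map voxel index to world coordinates (homogeneous)
        return grid2world.dot(np.append(com_vox, 1))[:3]

    c_static = world_com(static, static_grid2world)
    c_moving = world_com(moving, moving_grid2world)
    affine = np.eye(4)
    affine[:3, 3] = c_moving - c_static   # translation only
    return affine
```
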
Apply a linear transformation, given by affine, to streamlines.
Either streamlines (list, ArraySequence) or a tuple with streamlines and seeds together
The mapping between voxel indices and the point space for seeds. The voxel_to_rasmm matrix, typically from a NIFTI file.
If set, seeds associated to streamlines will be also moved and returned
A generator for the sequence of transformed streamlines. If save_seeds is True, also return a generator for the transformed seeds.
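The per-point operation performed by transform_streamlines can be sketched as follows (a minimal NumPy illustration of applying a 4x4 affine to each streamline, without the seed handling):

```python
import numpy as np

def apply_affine_to_streamlines(streamlines, affine):
    # Map each (N, 3) array of point coordinates through the 4x4 affine:
    # split it into its linear part and its translation offset.
    lin, offset = affine[:3, :3], affine[:3, 3]
    for sl in streamlines:
        yield np.asarray(sl).dot(lin.T) + offset
```
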
Write out a syn registration mapping to a nifti file.
syn_registration()
Full path to the nifti file storing the mapping
Notes
The data in the file is organized with shape (X, Y, Z, 3, 2), such that the forward mapping in each voxel is in data[i, j, k, :, 0] and the backward mapping in each voxel is in data[i, j, k, :, 1].
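Assuming the (x, y, z, 3, 2) field layout that read_mapping expects, packing and unpacking the two displacement fields is a single stack/slice operation (illustrative helper names, not dipy API):

```python
import numpy as np

def pack_mapping(forward, backward):
    # Stack the forward and backward displacement fields, each of shape
    # (X, Y, Z, 3), into a single (X, Y, Z, 3, 2) array.
    return np.stack([forward, backward], axis=-1)

def unpack_mapping(data):
    # Recover the two fields from the packed layout.
    return data[..., 0], data[..., 1]   # forward, backward
```
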
Determine the effective number of threads to be used for OpenMP calls.
For num_threads = None, if the OMP_NUM_THREADS environment variable is set, return that value; otherwise, return the maximum number of cores retrieved by openmp.omp_get_num_procs().
For num_threads > 0, return this value.
For num_threads < 0, return the maximal number of threads minus |num_threads + 1|. In particular, num_threads = -1 will use as many threads as there are available cores on the machine.
For num_threads = 0, a ValueError is raised.
Desired number of threads to be used.
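The rules above can be sketched as a small Python function (max_cores stands in for openmp.omp_get_num_procs(); this is an illustration of the rule, not dipy's implementation):

```python
import os

def determine_num_threads(num_threads, max_cores):
    """Effective thread count per the rules described above."""
    if num_threads == 0:
        raise ValueError("num_threads cannot be 0")
    if num_threads is None:
        env = os.environ.get("OMP_NUM_THREADS")
        return int(env) if env else max_cores
    if num_threads > 0:
        return num_threads
    # num_threads < 0: count back from the maximum; -1 means all cores
    return max(1, max_cores - (abs(num_threads) - 1))
```
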
Minimum direct flipped distance matrix between two streamline sets
All streamlines need to have the same number of points
of streamlines as arrays, [(N, 3) .. (N, 3)]
of streamlines as arrays, [(N, 3) .. (N, 3)]
distance matrix
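The minimum direct-flipped (MDF) distance can be written out directly in NumPy: for each pair of streamlines with the same number of points, take the smaller of the mean point-wise distance in direct order and in reversed order. A brute-force sketch (not dipy's optimized Cython version):

```python
import numpy as np

def mdf_distance_matrix(tracks_a, tracks_b):
    # Each streamline is an (N, 3) array; all must share the same N.
    D = np.zeros((len(tracks_a), len(tracks_b)))
    for i, s in enumerate(tracks_a):
        for j, t in enumerate(tracks_b):
            direct = np.mean(np.linalg.norm(s - t, axis=1))
            flipped = np.mean(np.linalg.norm(s - t[::-1], axis=1))
            D[i, j] = min(direct, flipped)
    return D
```
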
Gradient of the CC Metric w.r.t. the backward transformation
Computes the gradient of the Cross-Correlation metric for symmetric registration (SyN) [Avants2008] w.r.t. the displacement associated to the static image (‘backward’ step) as in [Avants2011]
the gradient of the moving image
the precomputed cross correlation terms obtained via precompute_cc_factors_2d
the gradient of the cross correlation metric with respect to the displacement associated to the static image
References
Gradient of the CC Metric w.r.t. the backward transformation
Computes the gradient of the Cross Correlation metric for symmetric registration (SyN) [Avants08] w.r.t. the displacement associated to the static volume (‘backward’ step) as in [Avants11]
the gradient of the moving volume
the precomputed cross correlation terms obtained via precompute_cc_factors_3d
the radius of the neighborhood used for the CC metric when computing the factors. The returned vector field will be zero along a boundary of width radius voxels.
the gradient of the cross correlation metric with respect to the displacement associated to the static volume
References
Symmetric Diffeomorphic Image Registration with Cross-Correlation: Evaluating Automated Labeling of Elderly and Neurodegenerative Brain, Med Image Anal. 12(1), 26-41.
Advanced Normalization Tools (ANTS), 1-35.
Gradient of the CC Metric w.r.t. the forward transformation
Computes the gradient of the Cross-Correlation metric for symmetric registration (SyN) [Avants2008] w.r.t. the displacement associated to the moving image (‘forward’ step) as in [Avants2011]
the gradient of the static image
the precomputed cross correlation terms obtained via precompute_cc_factors_2d
the gradient of the cross correlation metric with respect to the displacement associated to the moving image
Notes
Currently, the gradient of the static image is not being used, but some authors suggest that symmetrizing the gradient by including both the moving and static gradients may improve the registration quality. We are leaving this parameter as a placeholder for future investigation.
References
Gradient of the CC Metric w.r.t. the forward transformation
Computes the gradient of the Cross-Correlation metric for symmetric registration (SyN) [Avants2008] w.r.t. the displacement associated to the moving volume (‘forward’ step) as in [Avants2011]
the gradient of the static volume
the precomputed cross correlation terms obtained via precompute_cc_factors_3d
the radius of the neighborhood used for the CC metric when computing the factors. The returned vector field will be zero along a boundary of width radius voxels.
the gradient of the cross correlation metric with respect to the displacement associated to the moving volume
References
Precomputations to quickly compute the gradient of the CC Metric
Precomputes the separate terms of the cross-correlation metric [Avants2008] and image norms at each voxel, considering a neighborhood of the given radius, to efficiently compute the gradient of the metric with respect to the deformation field [Avants2011].
the static volume, which also defines the reference registration domain
the moving volume (notice that both images must already be in a common reference domain, i.e. the same R, C)
the precomputed cross correlation terms:
factors[:,:,0] : static minus its mean value along the neighborhood
factors[:,:,1] : moving minus its mean value along the neighborhood
factors[:,:,2] : sum of the pointwise products of static and moving along the neighborhood
factors[:,:,3] : sum of sq. values of static along the neighborhood
factors[:,:,4] : sum of sq. values of moving along the neighborhood
References
Precomputations to quickly compute the gradient of the CC Metric
Precomputes the separate terms of the cross-correlation metric and image norms at each voxel, considering a neighborhood of the given radius, to efficiently compute the gradient of the metric with respect to the deformation field [Ocegueda2016] [Avants2008] [Avants2011].
the static volume, which also defines the reference registration domain
the moving volume (notice that both images must already be in a common reference domain, i.e. the same S, R, C)
the precomputed cross correlation terms:
factors[:,:,:,0] : static minus its mean value along the neighborhood
factors[:,:,:,1] : moving minus its mean value along the neighborhood
factors[:,:,:,2] : sum of the pointwise products of static and moving along the neighborhood
factors[:,:,:,3] : sum of sq. values of static along the neighborhood
factors[:,:,:,4] : sum of sq. values of moving along the neighborhood
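The five terms can be computed directly with an explicit (unoptimized) loop, in the same spirit as the testing version of this routine. A brute-force NumPy sketch, leaving the boundary of width radius at zero:

```python
import numpy as np

def cc_factors_3d_bruteforce(static, moving, radius):
    """Local CC terms over a cubic neighborhood at each interior voxel."""
    S, R, C = static.shape
    factors = np.zeros(static.shape + (5,))
    for s in range(radius, S - radius):
        for r in range(radius, R - radius):
            for c in range(radius, C - radius):
                win = (slice(s - radius, s + radius + 1),
                       slice(r - radius, r + radius + 1),
                       slice(c - radius, c + radius + 1))
                ws, wm = static[win], moving[win]
                mu_s, mu_m = ws.mean(), wm.mean()
                factors[s, r, c, 0] = static[s, r, c] - mu_s
                factors[s, r, c, 1] = moving[s, r, c] - mu_m
                factors[s, r, c, 2] = np.sum((ws - mu_s) * (wm - mu_m))
                factors[s, r, c, 3] = np.sum((ws - mu_s) ** 2)
                factors[s, r, c, 4] = np.sum((wm - mu_m) ** 2)
    return factors
```

When static and moving are identical, the static/moving terms coincide and the cross term equals each squared-norm term, which makes this easy to sanity-check.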
References
Precomputations to quickly compute the gradient of the CC Metric
This version of precompute_cc_factors_3d is for testing purposes; it directly computes the local cross-correlation factors without any optimization, so it is less error-prone than the accelerated version.
Demons step for EM metric in 2D
Computes the demons step [Vercauteren09] for SSD-driven registration (eq. 4 in [Vercauteren09]) using the EM algorithm [Arce14] to handle multimodality images.
In this case, \(\sigma_i\) in eq. 4 of [Vercauteren09] is estimated using the EM algorithm, while in the original version of diffeomorphic demons it is estimated by the difference between the image values at each pixel.
contains, at each pixel, the difference between the moving image (warped under the current deformation s(. , .)) J and the static image I: delta_field[i,j] = J(s(i,j)) - I(i,j). The order is important: changing to delta_field[i,j] = I(i,j) - J(s(i,j)) yields the backward demons step warping the static image towards the moving, which may not be the intended behavior unless the ‘gradient_moving’ passed corresponds to the gradient of the static image
contains, at each pixel (i, j), the estimated variance (not std) of the hidden variable associated to the intensity at static[i,j] (which must have been previously quantized)
the gradient of the moving image
parameter controlling the amount of regularization. It corresponds to \(\sigma_x^2\) in algorithm 1 of Vercauteren et al.[2]
the resulting demons step will be written to this array
the demons step to be applied for updating the current displacement field
the current em energy (before applying the returned demons_step)
References
Non-rigid Multimodal Image Registration Based on the Expectation-Maximization Algorithm, (168140), 36-47.
(2009). Diffeomorphic demons: efficient non-parametric image registration. NeuroImage, 45(1 Suppl), S61-72. doi:10.1016/j.neuroimage.2008.10.040
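One common form of this update (eq. 4 of Vercauteren et al. 2009, with the EM-estimated variance \(\sigma_i^2\) in place of the squared intensity difference) can be sketched as follows. This is an illustration of the formula, not dipy's exact implementation:

```python
import numpy as np

def demons_step_2d(delta_field, sigma_sq_i, grad_moving, sigma_sq_x):
    """Sketch of the EM demons update:
    step = -delta * grad / (||grad||^2 + sigma_i^2 / sigma_x^2)."""
    sq_norm = np.sum(grad_moving ** 2, axis=-1)        # ||grad||^2 per pixel
    denom = sq_norm + sigma_sq_i / sigma_sq_x
    # avoid division by zero where both gradient and variance vanish
    denom = np.where(denom == 0, 1, denom)
    return -delta_field[..., None] * grad_moving / denom[..., None]
```

Note that where the moving-image gradient is zero the step is zero, since the intensity difference carries no directional information there.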
Demons step for EM metric in 3D
Computes the demons step [Vercauteren09] for SSD-driven registration (eq. 4 in [Vercauteren09]) using the EM algorithm [Arce14] to handle multimodality images.
In this case, \(\sigma_i\) in eq. 4 of [Vercauteren09] is estimated using the EM algorithm, while in the original version of diffeomorphic demons it is estimated by the difference between the image values at each pixel.
contains, at each pixel, the difference between the moving image (warped under the current deformation s) J and the static image I: delta_field[k,i,j] = J(s(k,i,j)) - I(k,i,j). The order is important: changing to delta_field[k,i,j] = I(k,i,j) - J(s(k,i,j)) yields the backward demons step warping the static image towards the moving, which may not be the intended behavior unless the ‘gradient_moving’ passed corresponds to the gradient of the static image
contains, at each pixel (k, i, j), the estimated variance (not std) of the hidden variable associated to the intensity at static[k,i,j] (which must have been previously quantized)
the gradient of the moving image
parameter controlling the amount of regularization. It corresponds to \(\sigma_x^2\) in algorithm 1 of Vercauteren et al.[2].
the resulting demons step will be written to this array
the demons step to be applied for updating the current displacement field
the current em energy (before applying the returned demons_step)
References
Non-rigid Multimodal Image Registration Based on the Expectation-Maximization Algorithm, (168140), 36-47.
(2009). Diffeomorphic demons: efficient non-parametric image registration. NeuroImage, 45(1 Suppl), S61-72. doi:10.1016/j.neuroimage.2008.10.040
Computes the mean and std. for each quantization level.
Computes the mean and standard deviation of the intensities in ‘v’ for each corresponding label in ‘labels’. In other words, for each label L, it computes the mean and standard deviation of the intensities in ‘v’ at pixels whose label in ‘labels’ is L. This is used by the EM metric to compute statistics for each hidden variable represented by the labels.
the mask of pixels that will be taken into account for computing the statistics. All zero pixels in mask will be ignored
the image which the statistics will be computed from
the number of different labels in ‘labels’ (equal to the number of hidden variables in the EM metric)
the label assigned to each pixel
means[i], 0<=i<num_labels will be the mean intensity in v of all voxels labeled i, or 0 if no voxels are labeled i
variances[i], 0<=i<num_labels will be the standard deviation of the intensities in v of all voxels labeled i, or infinite if less than 2 voxels are labeled i.
Computes the mean and std. for each quantization level.
Computes the mean and standard deviation of the intensities in ‘v’ for each corresponding label in ‘labels’. In other words, for each label L, it computes the mean and standard deviation of the intensities in ‘v’ at voxels whose label in ‘labels’ is L. This is used by the EM metric to compute statistics for each hidden variable represented by the labels.
the mask of voxels that will be taken into account for computing the statistics. All zero voxels in mask will be ignored
the volume which the statistics will be computed from
the number of different labels in ‘labels’ (equal to the number of hidden variables in the EM metric)
the label assigned to each pixel
means[i], 0<=i<num_labels will be the mean intensity in v of all voxels labeled i, or 0 if no voxels are labeled i
variances[i], 0<=i<num_labels will be the standard deviation of the intensities in v of all voxels labeled i, or infinite if less than 2 voxels are labeled i.
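The per-label statistics described above can be mirrored in a few lines of NumPy (an illustrative sketch; dipy's version is a compiled routine and returns variances):

```python
import numpy as np

def masked_class_stats(mask, v, num_labels, labels):
    # Per-label mean / standard deviation: voxels with mask == 0 are
    # ignored; labels with no voxels get mean 0; labels with fewer than
    # 2 voxels get infinite std.
    means = np.zeros(num_labels)
    stds = np.full(num_labels, np.inf)
    for i in range(num_labels):
        vals = v[(labels == i) & (mask != 0)]
        if vals.size > 0:
            means[i] = vals.mean()
        if vals.size >= 2:
            stds[i] = vals.std()
    return means, stds
```
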
Quantizes a 2D image to num_levels quantization levels
Quantizes the input image at num_levels intensity levels considering <=0 as a special value. Those input pixels <=0, and only those, will be assigned a quantization level of 0. The positive values are divided into the remaining num_levels - 1 uniform quantization levels.
The following are undefined, and raise a ValueError:
* Quantizing at zero levels, because at least one level must be assigned
* Quantizing at one level, because positive values should be assigned a level different from the special level 0 (at least 2 levels are needed)
the image to be quantized
the number of levels
the quantized image
the quantization values: levels[0] = 0, and levels[i] is the midpoint of the interval of intensities that are assigned to quantization level i, i = 1, …, num_levels - 1.
histogram: the number of pixels that were assigned to each quantization level
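The quantization scheme can be sketched in NumPy. This hypothetical version returns only the quantized image (dipy's routine also returns the level midpoints and the histogram):

```python
import numpy as np

def quantize_positive(image, num_levels):
    """Values <= 0 map to level 0; positive values are split into
    num_levels - 1 uniform bins (levels 1 .. num_levels - 1)."""
    if num_levels < 2:
        raise ValueError("at least 2 levels are needed")
    out = np.zeros(image.shape, dtype=int)
    pos = image > 0
    if np.any(pos):
        lo, hi = image[pos].min(), image[pos].max()
        width = (hi - lo) / (num_levels - 1) or 1.0   # guard constant images
        out[pos] = np.minimum((image[pos] - lo) // width + 1,
                              num_levels - 1).astype(int)
    return out
```
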
Quantizes a 3D volume to num_levels quantization levels
Quantizes the input volume at num_levels intensity levels considering <=0 as a special value. Those input voxels <=0, and only those, will be assigned a quantization level of 0. The positive values are divided into the remaining num_levels - 1 uniform quantization levels.
The following are undefined, and raise a ValueError:
* Quantizing at zero levels, because at least one level must be assigned
* Quantizing at one level, because positive values should be assigned a level different from the special level 0 (at least 2 levels are needed)
the volume to be quantized
the number of levels
the quantized volume
the quantization values: levels[0] = 0, and levels[i] is the midpoint of the interval of intensities that are assigned to quantization level i, i = 1, …, num_levels - 1.
histogram: the number of voxels that were assigned to each quantization level
AffineInvalidValuesError
Bases: Exception
Methods

Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.
AffineInversionError
Bases: Exception
Methods

Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.
AffineMap
Bases: object
Methods
Return the value of the transformation, not a reference. 


Set the affine transform (operating in physical space). 

Transform the input image from codomain to domain space. 

Transform the input image from domain to codomain space. 
AffineMap.
Implements an affine transformation whose domain is given by domain_grid and domain_grid2world, and whose codomain is given by codomain_grid and codomain_grid2world.
The actual transform is represented by the affine matrix, which operates in world coordinates. Therefore, to transform a moving image towards a static image, we first map each voxel (i,j,k) of the static image to world coordinates (x,y,z) by applying domain_grid2world. Then we apply the affine transform to (x,y,z), obtaining (x’, y’, z’) in the moving image’s world coordinates. Finally, (x’, y’, z’) is mapped to voxel coordinates (i’, j’, k’) in the moving image by multiplying (x’, y’, z’) by the inverse of codomain_grid2world. The codomain_grid_shape is used analogously to transform the static image towards the moving image when calling transform_inverse.
If the domain/codomain information is not provided (None) then the sampling information needs to be specified each time the transform or transform_inverse is called to transform images. Note that such sampling information is not necessary to transform points defined in physical space, such as stream lines.
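The voxel-to-voxel coordinate chain described above can be written out explicitly for a single voxel index (a plain-NumPy sketch; the function name is hypothetical):

```python
import numpy as np

def map_static_voxel_to_moving_voxel(ijk, affine, domain_grid2world,
                                     codomain_grid2world):
    """static voxel -> world -> (affine) -> moving world -> moving voxel."""
    xyz = domain_grid2world.dot(np.append(ijk, 1))       # voxel to world
    xyz_moved = affine.dot(xyz)                           # apply the transform
    ijk_moving = np.linalg.inv(codomain_grid2world).dot(xyz_moved)
    return ijk_moving[:3]                                 # drop homogeneous 1
```
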
the matrix defining the affine transform, where dim is the dimension of the space this map operates in (2 for 2D images, 3 for 3D images). If None, then self represents the identity transformation.
the shape of the default domain sampling grid. When transform is called to transform an image, the resulting image will have this shape, unless a different sampling information is provided. If None, then the sampling grid shape must be specified each time the transform method is called.
the grid-to-world transform associated with the domain grid. If None (the default), then the grid-to-world transform is assumed to be the identity.
the shape of the default codomain sampling grid. When transform_inverse is called to transform an image, the resulting image will have this shape, unless a different sampling information is provided. If None (the default), then the sampling grid shape must be specified each time the transform_inverse method is called.
the grid-to-world transform associated with the codomain grid. If None (the default), then the grid-to-world transform is assumed to be the identity.
Return the value of the transformation, not a reference.
Copy of the transform, not a reference.
Set the affine transform (operating in physical space).
Also sets self.affine_inv  the inverse of affine, or None if there is no inverse.
the matrix representing the affine transform operating in physical space. The domain and codomain information remains unchanged. If None, then self represents the identity transformation.
Transform the input image from codomain to domain space.
By default, the transformed image is sampled at a grid defined by self.domain_shape and self.domain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.
the image to be transformed
the type of interpolation to be used, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor
the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.
the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).
the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.
If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just resampled on the domain grid of this transform.
self.codomain_shape
the transformed image, sampled at the requested grid
Transform the input image from domain to codomain space.
By default, the transformed image is sampled at a grid defined by self.codomain_shape and self.codomain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.
the image to be transformed
the type of interpolation to be used, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor
the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.
the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).
the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.
If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just resampled on the domain grid of this transform.
self.codomain_shape
the transformed image, sampled at the requested grid
AffineRegistration
Bases: object
Methods

Start the optimization process. 
Initialize an instance of the AffineRegistration class.
an instance of a metric. The default is None, implying the Mutual Information metric with default settings.
the number of iterations at each scale of the scale space. level_iters[0] corresponds to the coarsest scale, level_iters[n-1] the finest, where n is the length of the sequence. By default, a 3-level scale space with iterations sequence equal to [10000, 1000, 100] will be used.
custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0].
custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1].
optimization method to be used. If Scipy version < 0.12, then only L-BFGS-B is available. Otherwise, method can be any gradient-based method available in dipy.core.Optimize: CG, BFGS, Newton-CG, dogleg or trust-ncg. The default is ‘L-BFGS-B’.
If None, this parameter is not used and an isotropic scale space with the given factors and sigmas will be built. If not None, an anisotropic scale space will be used by automatically selecting the smoothing sigmas along each axis according to the voxel dimensions of the given image. The ss_sigma_factor is used to scale the automatically computed sigmas. For example, in the isotropic case, the sigma of the kernel will be \(factor * (2 ^ i)\) where \(i = 1, 2, ..., n_scales - 1\) is the scale (the finest resolution image \(i=0\) is never smoothed). The default is None.
extra optimization options. The default is None, implying no extra options are passed to the optimizer.
Set the verbosity level of the algorithm:
0 : do not print anything
1 : print information about the current status of the algorithm
2 : print high level information of the components involved in the registration that can be used to detect a failing component
3 : print as much information as possible to isolate the cause of a bug
Default: 1
Start the optimization process.
the image to be used as reference during optimization.
the image to be used as “moving” during optimization. It is necessary to prealign the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “prealigning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix
the transformation with respect to whose parameters the gradient must be computed
parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.
the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.
the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.
‘mass’: align centers of gravity
‘voxel-origin’: align physical coordinates of voxel (0, 0, 0)
‘centers’: align physical coordinates of central voxels
array, shape (dim+1, dim+1).
Start from identity.
The default is None.
if True, it returns the parameters for measuring the similarity between the images (default ‘False’). The metric containing optimal parameters and the distance between the images.
static image mask that defines which pixels in the static image are used to calculate the mutual information.
moving image mask that defines which pixels in the moving image are used to calculate the mutual information.
the resulting affine transformation
the optimal parameters (translation, rotation, shear, etc.)
the value of the function at the optimal parameters.
IsotropicScaleSpace
Bases: dipy.align.scalespace.ScaleSpace
Methods

Voxeltospace transformation at a given level. 

Spacetovoxel transformation at a given level. 

Shape the subsampled image must have at a particular level. 

Ratio of voxel size from pyramid level from_level to to_level. 

Smoothed image at a given level. 

Adjustment factor for input-spacing to reflect voxel sizes at level. 

Smoothing parameters used at a given level. 

Spacings the subsampled image must have at a particular level. 

Prints properties of a pyramid level. 
IsotropicScaleSpace.
Computes the Scale Space representation of an image using isotropic smoothing kernels for all scales. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with different smoothing parameters.
This specialization of ScaleSpace allows the user to provide custom scale and smoothing factors for all scales.
the input image, where s is the number of slices, r is the number of rows and c is the number of columns.
custom scale factors to build the scale space (one factor for each scale).
custom smoothing parameter to build the scale space (one parameter for each scale).
the grid-to-space transform of the image grid. The default is the identity matrix.
the spacing (voxel size) between voxels in physical space. The default is 1.0 along all axes.
if True, all smoothed images will be zero at all voxels that are zero in the input image. The default is False.
MutualInformationMetric
Bases: object
Methods

Numeric value of the negative Mutual Information. 

Numeric value of the metric and its gradient at given parameters. 

Numeric value of the metric's gradient at the given parameters. 

Prepare the metric to compute intensity densities and gradients. 
Initialize an instance of the Mutual Information metric.
This class implements the methods required by Optimizer to drive the registration process.
the number of bins to be used for computing the intensity histograms. The default is 32.
There are two types of sampling: dense and sparse. Dense sampling uses all voxels for estimating the (joint and marginal) intensity histograms, while sparse sampling uses a subset of them. If sampling_proportion is None, then dense sampling is used. If sampling_proportion is a floating point value in (0,1] then sparse sampling is used, where sampling_proportion specifies the proportion of voxels to be used. The default is None.
Notes
Since we use linear interpolation, images are not, in general, differentiable at exact voxel coordinates, but they are differentiable between voxel coordinates. When using sparse sampling, selected voxels are slightly moved by adding a small random displacement within one voxel to prevent sampling points from being located exactly at voxel coordinates. When using dense sampling, this random displacement is not applied.
Numeric value of the negative Mutual Information.
We need to change the sign so we can use standard minimization algorithms.
the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called); n is the number of parameters of the transform
the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters
Numeric value of the metric and its gradient at given parameters.
the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called); n is the number of parameters of the transform
the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters
the gradient of the negative Mutual Information
Numeric value of the metric’s gradient at the given parameters.
the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called); n is the number of parameters of the transform
the gradient of the negative Mutual Information
Prepare the metric to compute intensity densities and gradients.
The histograms will be set up to compute probability densities of intensities within the minimum and maximum values of static and moving
the transformation with respect to whose parameters the gradient must be computed
static image
moving image. The dimensions of the static (S, R, C) and moving (S’, R’, C’) images do not need to be the same.
the grid-to-space transform of the static image. The default is None, implying the transform is the identity.
the grid-to-space transform of the moving image. The default is None, implying the spacing along all axes is 1.
the prealigning matrix (an affine transform) that roughly aligns the moving image towards the static image. If None, no prealignment is performed. If a prealignment matrix is available, it is recommended to provide this matrix as starting_affine instead of manually transforming the moving image to reduce interpolation artifacts. The default is None, implying no prealignment is performed.
static image mask that defines which pixels in the static image are used to calculate the mutual information.
moving image mask that defines which pixels in the moving image are used to calculate the mutual information.
Optimizer
Bases: object
Methods
print_summary 
A class for handling minimization of scalar function of one or more variables.
Objective function.
Initial guess.
Extra arguments passed to the objective function and its derivatives (Jacobian, Hessian).
Type of solver. Should be one of
‘Nelder-Mead’
‘Powell’
‘CG’
‘BFGS’
‘Newton-CG’
‘Anneal’
‘L-BFGS-B’
‘TNC’
‘COBYLA’
‘SLSQP’
‘dogleg’
‘trust-ncg’
Jacobian of objective function. Only for CG, BFGS, Newton-CG, dogleg, trust-ncg. If jac is a Boolean and is True, fun is assumed to return the value of the Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of the objective. In this case, it must accept the same arguments as fun.
Hessian of objective function or Hessian of objective function times an arbitrary vector p. Only for Newton-CG, dogleg, trust-ncg. Only one of hessp or hess needs to be given. If hess is provided, then hessp will be ignored. If neither hess nor hessp is provided, then the Hessian product will be approximated using finite differences on jac. hessp must compute the Hessian times an arbitrary vector.
Bounds for variables (only for L-BFGS-B, TNC and SLSQP). (min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction.
Constraints definition (only for COBYLA and SLSQP). Each constraint is defined in a dictionary with fields:
type : str
Constraint type: ‘eq’ for equality, ‘ineq’ for inequality.
fun : callable
The function defining the constraint.
jac : callable, optional
The Jacobian of fun (only for SLSQP).
args : sequence, optional
Extra arguments to be passed to the function and Jacobian.
Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative. Note that COBYLA only supports inequality constraints.
Tolerance for termination. For detailed control, use solver-specific options.
Called after each iteration, as callback(xk), where xk is the current parameter vector. Only available using Scipy >= 0.12.
A dictionary of solver options. All methods accept the following generic options:
maxiter : int
Maximum number of iterations to perform.
disp : bool
Set to True to print convergence messages.
For method-specific options, see show_options(‘minimize’, method).
save history of x for each iteration. Only available using Scipy >= 0.12.
See also
scipy.optimize.minimize
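As a brief illustration, Optimizer delegates the actual minimization to scipy.optimize.minimize, so the parameters described above (fun, x0, jac, method, options) can be exercised directly with the underlying scipy call. The quadratic objective below is made up for the example:

```python
import numpy as np
from scipy.optimize import minimize

# A simple convex quadratic objective with its minimum at (1, 2).
def fun(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# Analytic gradient (the `jac` parameter described above).
def jac(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

res = minimize(fun, x0=np.zeros(2), jac=jac, method="L-BFGS-B",
               options={"maxiter": 100, "disp": False})
```

res.x then holds the parameter vector at the minimum, which is what Optimizer exposes after a registration run.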
ParzenJointHistogram
Bases: object
Methods

Bin index associated with the given normalized intensity 
Maps intensity x to the range covered by the moving histogram 

Maps intensity x to the range covered by the static histogram 


Compute histogram settings to store the PDF of input images 

Computes the Gradient of the joint PDF w.r.t. 

Computes the Gradient of the joint PDF w.r.t. 

Computes the Probability Density Functions of two images 

Computes the Probability Density Functions from a set of samples 
Computes joint histogram and derivatives with Parzen windows
Base class to compute joint and marginal probability density functions and their derivatives with respect to a transform’s parameters. The smooth histograms are computed by using Parzen windows [Parzen62] with a cubic spline kernel, as proposed by Mattes et al. [Mattes03]. This implementation is not tied to any optimization (registration) method; the idea is that information-theoretic matching functionals (such as Mutual Information) can inherit from this class to perform the low-level computations of the joint intensity distributions and their gradient w.r.t. the transform parameters. The derived class can then compute the similarity/dissimilarity measure and gradient, and finally communicate the results to the appropriate optimizer.
the number of bins of the joint and marginal probability density functions (the actual number of bins of the joint PDF is nbins**2)
Notes
We need this class in cython to allow _joint_pdf_gradient_dense_2d and _joint_pdf_gradient_dense_3d to use a nogil Jacobian function (obtained from an instance of the Transform class), which allows us to evaluate Jacobians at all the sampling points (maybe the full grid) inside a nogil loop.
The reason we need a class is to encapsulate all the parameters related to the joint and marginal distributions.
References
[Parzen62] Parzen, E. On the estimation of a probability density function and the mode. Annals of Mathematical Statistics, 33(3), 1065-1076, 1962.
[Mattes03] Mattes, D., Haynor, D. R., Vesselle, H., Lewellen, T. K., & Eubank, W. PET-CT image registration in the chest using free-form deformations. IEEE Transactions on Medical Imaging, 22(1), 120-8, 2003.
Bin index associated with the given normalized intensity
The return value is an integer in [padding, nbins - 1 - padding]
intensity value normalized to the range covered by the histogram
the bin index associated with the given normalized intensity
Maps intensity x to the range covered by the moving histogram
If the input intensity is in [self.mmin, self.mmax] then the normalized intensity will be in [self.padding, self.nbins - self.padding]
the intensity to be normalized
normalized intensity to the range covered by the moving histogram
Maps intensity x to the range covered by the static histogram
If the input intensity is in [self.smin, self.smax] then the normalized intensity will be in [self.padding, self.nbins - self.padding]
the intensity to be normalized
normalized intensity to the range covered by the static histogram
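As a sketch of the mapping just described: an affine rescaling takes an intensity in [smin, smax] to the padded bin range [padding, nbins - padding]. The function name and the exact form of the rescaling are assumptions for illustration, not the library's internal implementation:

```python
def normalize_static(x, smin, smax, nbins, padding):
    # Affinely map x in [smin, smax] to [padding, nbins - padding]:
    # smin -> padding, smax -> nbins - padding.
    delta = (smax - smin) / (nbins - 2 * padding)
    return (x - smin) / delta + padding

# The midpoint of [smin, smax] lands at the center of the padded range.
mid = normalize_static(5.0, 0.0, 10.0, 32, 2)
```

Flooring the normalized value then yields the bin index, which explains why the index returned above lies in [padding, nbins - 1 - padding].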
Compute histogram settings to store the PDF of input images
static image
moving image
mask of static object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, the behaviour is equivalent to smask=ones_like(static)
mask of moving object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, the behaviour is equivalent to mmask=ones_like(moving)
Computes the Gradient of the joint PDF w.r.t. transform parameters
Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter.
The gradient is stored in self.joint_grad.
parameters of the transformation to compute the gradient from
the transformation with respect to whose parameters the gradient must be computed
static image
moving image
we assume that both images have already been sampled at a common grid. This transform must map voxel coordinates of this common grid to physical coordinates of its corresponding voxel in the moving image. For example, if the moving image was sampled on the static image’s grid (this is the typical setting) using an aligning matrix A, then
grid2world = A.dot(static_affine)
where static_affine is the transformation mapping static image’s grid coordinates to physical space.
the gradient of the moving image
mask of static object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). The default is None, indicating all voxels are considered.
mask of moving object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). The default is None, indicating all voxels are considered.
Computes the Gradient of the joint PDF w.r.t. transform parameters
Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter.
The list of intensities sval and mval are assumed to be sampled from the static and moving images, respectively, at the same physical points. Of course, the images may not be perfectly aligned at the moment the sampling was performed. The resulting gradient corresponds to the paired intensities according to the alignment at the moment the images were sampled.
The gradient is stored in self.joint_grad.
parameters to compute the gradient at
the transformation with respect to whose parameters the gradient must be computed
sampled intensities from the static image at sampled_points
sampled intensities from the moving image at sampled_points
coordinates (in physical space) of the points the images were sampled at
the gradient of the moving image at the sample points
Computes the Probability Density Functions of two images
The joint PDF is stored in self.joint. The marginal distributions corresponding to the static and moving images are computed and stored in self.smarginal and self.mmarginal, respectively.
static image
moving image
mask of static object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, ones_like(static) is used as mask.
mask of moving object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, ones_like(moving) is used as mask.
Computes the Probability Density Functions from a set of samples
The list of intensities sval and mval are assumed to be sampled from the static and moving images, respectively, at the same physical points. Of course, the images may not be perfectly aligned at the moment the sampling was performed. The resulting distributions correspond to the paired intensities according to the alignment at the moment the images were sampled.
The joint PDF is stored in self.joint. The marginal distributions corresponding to the static and moving images are computed and stored in self.smarginal and self.mmarginal, respectively.
sampled intensities from the static image at sampled_points
sampled intensities from the moving image at sampled_points
ScaleSpace
Bases: object
Methods

Voxel-to-space transformation at a given level. 

Space-to-voxel transformation at a given level. 

Shape the subsampled image must have at a particular level. 

Ratio of voxel size from pyramid level from_level to to_level. 

Smoothed image at a given level. 

Adjustment factor for input spacing to reflect voxel sizes at level. 

Smoothing parameters used at a given level. 

Spacings the subsampled image must have at a particular level. 

Prints properties of a pyramid level. 
ScaleSpace.
Computes the Scale Space representation of an image. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with increasing smoothing parameter. If the image’s voxels are isotropic, the smoothing will be the same along all directions: at level L = 0, 1, …, the sigma is given by \(s \cdot (2^L - 1)\). If the voxel dimensions are not isotropic, then the smoothing is weaker along low resolution directions.
the input image; s is the number of slices, r is the number of rows and c is the number of columns.
the desired number of levels (resolutions) of the scale space
the grid-to-space transform of the image grid. The default is the identity matrix
the spacing (voxel size) between voxels in physical space. The default is 1.0 along all axes
the smoothing factor to be used in the construction of the scale space. The default is 0.2
if True, all smoothed images will be zero at all voxels that are zero in the input image. The default is False.
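A minimal numpy/scipy sketch of such a pyramid, using the sigma schedule \(s \cdot (2^L - 1)\) stated above; the function name is hypothetical and this is not the library's own implementation (which also tracks per-level affines and spacings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_scale_space(image, num_levels, sigma_factor=0.2):
    # Level L smooths the input with sigma = sigma_factor * (2**L - 1),
    # so level 0 is the unsmoothed input image.
    levels = []
    for L in range(num_levels):
        sigma = sigma_factor * (2 ** L - 1)
        if sigma == 0:
            levels.append(image.astype(float).copy())
        else:
            levels.append(gaussian_filter(image.astype(float), sigma))
    return levels

rng = np.random.default_rng(0)
img = rng.random((16, 16))
pyramid = build_scale_space(img, 3)  # sigmas 0, 0.2, 0.6
```

Each subsequent level is smoother (lower variance), which is what makes coarse-to-fine optimization over the pyramid well behaved.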
Voxel-to-space transformation at a given level.
Returns the voxel-to-space transformation associated with the subsampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get affine transform from
Space-to-voxel transformation at a given level.
Returns the space-to-voxel transformation associated with the subsampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get the inverse transform from
Shape the subsampled image must have at a particular level.
Returns the shape the subsampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get the subsampled shape from
Ratio of voxel size from pyramid level from_level to to_level.
Given two scale space resolutions a = from_level, b = to_level, returns the ratio of voxels size at level b to voxel size at level a (the factor that must be used to multiply voxels at level a to ‘expand’ them to level b).
the resolution to expand voxels from
the resolution to expand voxels to
the expand factors (a scalar for each voxel dimension)
Smoothed image at a given level.
Returns the smoothed image at the requested level in the Scale Space.
the scale space level to get the smooth image from
Adjustment factor for input spacing to reflect voxel sizes at level.
Returns the scaling factor that needs to be applied to the input spacing (the voxel sizes of the image at level 0 of the scale space) to transform them to voxel sizes at the requested level.
the scale space level to get the scalings from
Smoothing parameters used at a given level.
Returns the smoothing parameters (a scalar for each axis) used at the requested level of the scale space
the scale space level to get the smoothing parameters from
Spacings the subsampled image must have at a particular level.
Returns the spacings (voxel sizes) the subsampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get the subsampled shape from
Computes the mutual information and its gradient (if requested)
the joint intensity distribution
the gradient of the joint distribution w.r.t. the transformation parameters
the marginal intensity distribution of the static image
the marginal intensity distribution of the moving image
the buffer in which to write the gradient of the mutual information. If None, the gradient is not computed
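The mutual information itself is a direct function of the joint and marginal distributions described above. A minimal sketch (names like smarginal/mmarginal mirror the attributes mentioned earlier; the function below is illustrative, not the library's compiled routine):

```python
import numpy as np

def neg_mutual_information(joint):
    # joint: 2D joint intensity histogram (counts or probabilities).
    p = joint / joint.sum()
    ps = p.sum(axis=1, keepdims=True)  # static marginal ("smarginal")
    pm = p.sum(axis=0, keepdims=True)  # moving marginal ("mmarginal")
    nz = p > 0                          # skip empty bins in the log
    mi = np.sum(p[nz] * np.log(p[nz] / (ps @ pm)[nz]))
    return -mi  # negated so standard minimizers can be used

# Perfectly dependent intensities: MI equals the marginal entropy, log(4).
joint = np.diag([0.25, 0.25, 0.25, 0.25])
```

For an independent joint distribution (the outer product of the marginals) the mutual information is zero, which is why maximizing MI drives the images toward alignment.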
Deprecate a renamed or removed function argument.
The decorator assumes that the argument with the old_name was removed from the function signature and the new_name replaced it at the same position in the signature. If the old_name argument is given when calling the decorated function, the decorator will catch it, issue a deprecation warning, and pass it on as the new_name argument.
The old name of the argument.
The new name of the argument, or None (optional). Set this to None to remove the argument old_name instead of renaming it.
The release at which the old argument became deprecated.
Last released version at which this function will still raise a deprecation warning. Versions higher than this will raise an error.
Callable accepting a string as argument, returning 1 if the string represents a higher version than encoded in the version_comparator, 0 if the version is equal, and -1 if the version is lower. For example, the version_comparator may compare the input version string to the current package version string.
If the argument is not a named argument (for example it was meant to be consumed by **kwargs) set this to True. Otherwise the decorator will throw an Exception if the new_name cannot be found in the signature of the decorated function. Default is False.
Warning to be issued.
Error to be issued
An alternative function or class name that the user may use in place of the deprecated object if new_name is None. The deprecation warning will tell the user about this alternative if provided.
If the new argument name cannot be found in the function signature and arg_in_kwargs was False, or if it is used to deprecate the name of the *args, **kwargs like arguments. At runtime such an Error is raised if both the new_name and old_name were specified when calling the function and “relax=False”.
Notes
This function is based on the Astropy (major modification). https://github.com/astropy/astropy. See COPYING file distributed along with the astropy package for the copyright and license terms.
Examples
The deprecation warnings are not shown in the following examples. To deprecate a positional or keyword argument::
>>> from dipy.utils.deprecator import deprecated_params
>>> @deprecated_params('sig', 'sigma', '0.3')
... def test(sigma):
...     return sigma
>>> test(2)
2
>>> test(sigma=2)
2
>>> test(sig=2)  # doctest: +SKIP
2
It is also possible to replace multiple arguments. The old_name, new_name and since have to be tuple or list and contain the same number of entries::
>>> @deprecated_params(['a', 'b'], ['alpha', 'beta'],
...                    ['0.2', 0.4])
... def test(alpha, beta):
...     return alpha, beta
>>> test(a=2, b=3)  # doctest: +SKIP
(2, 3)
Extracts the rotational and spacing components from a matrix
Extracts the rotational and spacing (voxel dimensions) components from a matrix. An image gradient represents the local variation of the image’s gray values per voxel. Since we are iterating on the physical space, we need to compute the gradients as variation per millimeter, so we need to divide each gradient’s component by the voxel size along the corresponding axis; that is what the spacings are used for. Since the image’s gradients are oriented along the grid axes, we also need to reorient the gradients to be given in physical space coordinates.
the matrix transforming grid coordinates to physical space.
the rotational component of the input matrix
the scaling component (voxel size) of the matrix
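One common way to perform this split, sketched under the assumption of a non-degenerate linear part: the spacing along each axis is the norm of the corresponding column, and dividing the columns by their norms leaves the rotational (direction) component. This is an illustration, not necessarily the library's exact decomposition:

```python
import numpy as np

def direction_and_spacings(affine, dim):
    # Spacing along axis k = norm of the k-th column of the linear part;
    # normalizing the columns leaves the rotational component.
    A = affine[:dim, :dim]
    spacings = np.linalg.norm(A, axis=0)
    rotation = A / spacings
    return rotation, spacings

# Axis-aligned grid with 2 x 3 x 1.5 mm voxels and no rotation.
affine = np.diag([2.0, 3.0, 1.5, 1.0])
R, sp = direction_and_spacings(affine, 3)
```

Dividing an image gradient componentwise by sp and rotating by R then yields the per-millimeter gradient in physical coordinates, as described above.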
Bilinear interpolation of a 2D scalar image
Interpolates the 2D image at the given locations. This function is a wrapper for _interpolate_scalar_2d for testing purposes; it is equivalent to scipy.ndimage.map_coordinates with bilinear interpolation
the 2D image to be interpolated
(locations[i,0], locations[i,1]), 0<=i<n must contain the row and column coordinates to interpolate the image at
out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image
if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0
Trilinear interpolation of a 3D scalar image
Interpolates the 3D image at the given locations. This function is a wrapper for _interpolate_scalar_3d for testing purposes; it is equivalent to scipy.ndimage.map_coordinates with trilinear interpolation
the 3D image to be interpolated
(locations[i,0], locations[i,1], locations[i,2]), 0<=i<n must contain the coordinates to interpolate the image at
out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image
if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0
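Since the docs state these wrappers are equivalent to scipy.ndimage.map_coordinates with linear interpolation, the same behavior can be reproduced directly (note map_coordinates takes one coordinate array per axis, so the (n, 2) locations array is transposed):

```python
import numpy as np
from scipy.ndimage import map_coordinates

image = np.arange(16, dtype=float).reshape(4, 4)
# Each row of `locations` is a (row, col) point to sample at.
locations = np.array([[1.5, 2.0],    # halfway between rows 1 and 2, col 2
                      [0.0, 0.5]])   # row 0, halfway between cols 0 and 1
out = map_coordinates(image, locations.T, order=1, cval=0.0)
```

Points outside the image are filled with cval, matching the "0 if locations[i,:] is outside the image" convention above.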
Take floor(total_voxels/k) samples from a (2D or 3D) grid
The sampling is made by taking all pixels whose index (in lexicographical order) is a multiple of k. Each selected point is slightly perturbed by adding a realization of a normally distributed random variable and then mapped to physical space by the given gridtospace transform.
The lexicographical order of the pixels in a grid of shape (a, b, c) is defined by assigning to each voxel position (i, j, k) the integer index
F((i, j, k)) = i * (b * c) + j * (c) + k
and sorting increasingly by this index.
the sampling rate, as described before
the shape of the grid to be sampled
the gridtospace transform
the standard deviation of the Normal random distortion to be applied to the sampled points
the matrix whose rows are the sampled points
Examples
>>> from dipy.align.parzenhist import sample_domain_regular
>>> import dipy.align.vector_fields as vf
>>> import numpy as np
>>> shape = np.array((10, 10), dtype=np.int32)
>>> sigma = 0
>>> dim = len(shape)
>>> grid2world = np.eye(dim+1)
>>> n = shape[0]*shape[1]
>>> k = 2
>>> samples = sample_domain_regular(k, shape, grid2world, sigma)
>>> (samples.shape[0], samples.shape[1]) == (n//k, dim)
True
>>> isamples = np.array(samples, dtype=np.int32)
>>> indices = (isamples[:, 0] * shape[1] + isamples[:, 1])
>>> len(set(indices)) == len(indices)
True
>>> (indices%k).sum()
0
Transformation to align the center of mass of the input images.
static image
the voxel-to-space transformation of the static image
moving image
the voxel-to-space transformation of the moving image
the affine transformation (translation only, in this case) aligning the center of mass of the moving image towards the one of the static image
Transformation to align the geometric center of the input images.
With “geometric center” of a volume we mean the physical coordinates of its central voxel
static image
the voxel-to-space transformation of the static image
moving image
the voxel-to-space transformation of the moving image
the affine transformation (translation only, in this case) aligning the geometric center of the moving image towards the one of the static image
Transformation to align the origins of the input images.
With “origin” of a volume we mean the physical coordinates of voxel (0,0,0)
static image
the voxel-to-space transformation of the static image
moving image
the voxel-to-space transformation of the moving image
the affine transformation (translation only, in this case) aligning the origin of the moving image towards the one of the static image
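A sketch of the origin-alignment case: the physical position of voxel (0, 0, 0) is the translation column of each grid-to-world matrix, so the aligning affine is a pure translation between the two origins. The function name is hypothetical and the sign convention (translating static coordinates toward moving coordinates, as in the center-of-mass case) is an assumption here:

```python
import numpy as np

def origin_alignment_affine(static_grid2world, moving_grid2world):
    # Physical position of voxel (0, 0, 0) is the last column of each
    # 4x4 voxel-to-space matrix.
    c_static = static_grid2world[:3, 3]
    c_moving = moving_grid2world[:3, 3]
    affine = np.eye(4)
    affine[:3, 3] = c_moving - c_static  # translation-only alignment
    return affine

static_affine = np.eye(4); static_affine[:3, 3] = [1.0, 2.0, 3.0]
moving_affine = np.eye(4); moving_affine[:3, 3] = [4.0, 4.0, 4.0]
T = origin_alignment_affine(static_affine, moving_affine)
```

The center-of-mass and geometric-center variants differ only in which physical point is matched (intensity centroid or central voxel instead of the origin).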
Bunch
Bases: object
DiffeomorphicMap
Bases: object
Methods

Creates a zero displacement field 
Inversion error of the displacement fields 


Expands the displacement fields from current shape to new_shape 
Deformation field to transform an image in the backward direction 

Deformation field to transform an image in the forward direction 

Constructs a simplified version of this Diffeomorphic Map 


Try to interpret obj as a matrix 

Inverse of this DiffeomorphicMap instance 
Shallow copy of this DiffeomorphicMap instance 


Warps an image in the forward direction 

Warps an image in the backward direction 

Composition of this DiffeomorphicMap with a given endomorphism 
DiffeomorphicMap
Implements a diffeomorphic transformation on the physical space. The deformation fields encoding the direct and inverse transformations share the same domain discretization (both the discretization grid shape and voxeltospace matrix). The input coordinates (physical coordinates) are first aligned using prealign, and then displaced using the corresponding vector field interpolated at the aligned coordinates.
the transformation’s dimension
the number of slices (if 3D), rows and columns of the deformation field’s discretization
grid and space
the number of slices (if 3D), rows and columns of the default discretization of this map’s domain
the default voxel-to-space transformation between this map’s discretization and physical space
the number of slices (if 3D), rows and columns of the images that are ‘normally’ warped using this transformation in the forward direction (this will provide default transformation parameters to warp images under this transformation). By default, we assume that the inverse transformation is ‘normally’ used to warp images with the same discretization and voxel-to-space transformation as the deformation field grid.
the voxel-to-space transformation of images that are ‘normally’ warped using this transformation (in the forward direction).
the linear transformation to be applied to align input images to the reference space before warping under the deformation field.
Creates a zero displacement field
Creates a zero displacement field (the identity transformation).
Inversion error of the displacement fields
Estimates the inversion error of the displacement fields by computing statistics of the residual vectors obtained after composing the forward and backward displacement fields.
the displacement field resulting from composing the forward and backward displacement fields of this transformation (the residual should be zero for a perfect diffeomorphism)
statistics from the norms of the vectors of the residual displacement field: maximum, mean and standard deviation
Notes
Since the forward and backward displacement fields have the same discretization, the final composition is given by
comp[i] = forward[i + Dinv * backward[i]]
where Dinv is the space-to-grid transformation of the displacement fields
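A minimal 1D sketch of these residual statistics, assuming an identity space-to-grid transform (Dinv = I) and nearest-sample lookup; the full residual displacement at sample i is backward[i] plus the forward field evaluated at the displaced position:

```python
import numpy as np

def residual_stats(forward, backward):
    # residual[i] = backward[i] + forward[i + backward[i]];
    # it should vanish for a perfect forward/backward inverse pair.
    n = len(forward)
    idx = np.clip(np.rint(np.arange(n) + backward).astype(int), 0, n - 1)
    residual = backward + forward[idx]
    norms = np.abs(residual)
    return norms.max(), norms.mean(), norms.std()

fwd = np.ones(5)    # shift every sample one unit right
bwd = -np.ones(5)   # its exact inverse: shift one unit left
mx, mean, std = residual_stats(fwd, bwd)
```

In 2D/3D the same statistics are taken over the Euclidean norms of the composed residual vector field.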
Expands the displacement fields from current shape to new_shape
Upsamples the discretization of the displacement fields to be of new_shape shape.
the factors scaling current spacings (voxel sizes) to spacings in the expanded discretization.
the shape of the arrays holding the upsampled discretization
Deformation field to transform an image in the backward direction
Returns the deformation field that must be used to warp an image under this transformation in the backward direction (note the ‘is_inverse’ flag).
Deformation field to transform an image in the forward direction
Returns the deformation field that must be used to warp an image under this transformation in the forward direction (note the ‘is_inverse’ flag).
Constructs a simplified version of this Diffeomorphic Map
The simplified version incorporates the prealign transform, as well as the domain and codomain affine transforms into the displacement field. The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretization. As a result, self.prealign, self.disp_grid2world, self.domain_grid2world and self.codomain_affine will be None (denoting Identity) in the resulting diffeomorphic map.
Try to interpret obj as a matrix
Some operations are performed faster if we know in advance if a matrix is the identity (so we can skip the actual matrix-vector multiplication). This function returns None if the given object is None or the ‘identity’ string. It returns the same object if it is a numpy array. It raises an exception otherwise.
any object
the same object given as argument if obj is None or a numpy array. None if obj is the ‘identity’ string.
Inverse of this DiffeomorphicMap instance
Returns a diffeomorphic map object representing the inverse of this transformation. The internal arrays are not copied but just referenced.
the inverse of this diffeomorphic map.
Shallow copy of this DiffeomorphicMap instance
Creates a shallow copy of this diffeomorphic map (the arrays are not copied but just referenced)
the shallow copy of this diffeomorphic map
Warps an image in the forward direction
Transforms the input image under this transformation in the forward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform(…) warps the image forwards, else it warps the image backwards).
the image to be warped under this transformation in the forward direction
the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor
the transformation bringing world (space) coordinates to voxel coordinates of the image given as input
the number of slices, rows and columns of the desired warped image
warped image to physical space
the warped image under this transformation in the forward direction
Notes
See _warp_forward and _warp_backward documentation for further information.
Warps an image in the backward direction
Transforms the input image under this transformation in the backward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform_inverse(…) warps the image backwards, else it warps the image forwards)
the image to be warped under this transformation in the forward direction
the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor
the transformation bringing world (space) coordinates to voxel coordinates of the image given as input
the number of slices, rows, and columns of the desired warped image
warped image to physical space
warped image under this transformation in the backward direction
Notes
See _warp_forward and _warp_backward documentation for further information.
Composition of this DiffeomorphicMap with a given endomorphism
Creates a new DiffeomorphicMap C with the same properties as self and composes its displacement fields with phi’s corresponding fields. The resulting diffeomorphism is of the form C(x) = phi(self(x)) with inverse C^{-1}(y) = self^{-1}(phi^{-1}(y)). We assume that phi is an endomorphism with the same discretization and domain affine as self to ensure that the composition inherits self’s properties (we also assume that the prealigning matrix of phi is None or identity).
the endomorphism to be warped by this diffeomorphic map
endomorphism given as input
Notes
The problem with our current representation of a DiffeomorphicMap is that the set of Diffeomorphism that can be represented this way (a prealigning matrix followed by a nonlinear endomorphism given as a displacement field) is not closed under the composition operation.
Supporting a general DiffeomorphicMap class, closed under composition, may be extremely costly computationally, and the kind of transformations we actually need for Avants’ midpoint algorithm (SyN) are much simpler.
DiffeomorphicRegistration
Bases: object
Methods

Returns the resulting diffeomorphic map after optimization 

Starts the metric optimization 

Sets the number of iterations at each pyramid level 
Diffeomorphic Registration
This abstract class defines the interface to be implemented by any optimization algorithm for diffeomorphic registration.
the object measuring the similarity of the two images. The registration algorithm will minimize (or maximize) the provided similarity.
Starts the metric optimization
This is the main function each specialized class derived from this must implement. Upon completion, the deformation field must be available from the forward transformation model.
Sets the number of iterations at each pyramid level
Establishes the maximum number of iterations to be performed at each level of the Gaussian pyramid, similar to ANTS.
the number of iterations at each level of the Gaussian pyramid. level_iters[0] corresponds to the finest level, level_iters[n-1] the coarsest, where n is the length of the list
ScaleSpace
Bases: object
Methods

Voxel-to-space transformation at a given level. 

Space-to-voxel transformation at a given level. 

Shape the subsampled image must have at a particular level. 

Ratio of voxel size from pyramid level from_level to to_level. 

Smoothed image at a given level. 

Adjustment factor for input spacing to reflect voxel sizes at level. 

Smoothing parameters used at a given level. 

Spacings the subsampled image must have at a particular level. 

Prints properties of a pyramid level. 
ScaleSpace.
Computes the Scale Space representation of an image. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with increasing smoothing parameter. If the image’s voxels are isotropic, the smoothing will be the same along all directions: at level L = 0, 1, …, the sigma is given by \(s * (2^L - 1)\). If the voxel dimensions are not isotropic, then the smoothing is weaker along low resolution directions.
the input image, of shape (r, c) or (s, r, c), where s is the number of slices, r is the number of rows and c is the number of columns of the input image.
the desired number of levels (resolutions) of the scale space
the grid-to-space transform of the image grid. The default is the identity matrix
the spacing (voxel size) between voxels in physical space. The default is 1.0 along all axes
the smoothing factor to be used in the construction of the scale space. The default is 0.2
if True, all smoothed images will be zero at all voxels that are zero in the input image. The default is False.
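The per-level sigmas implied by the description above can be sketched in a few lines of numpy. The isotropic case follows the stated formula exactly; the anisotropic down-weighting shown here (min spacing over spacing) is an illustrative assumption, not dipy's exact formula:

```python
import numpy as np

def level_sigmas(level, sigma_factor=0.2, spacing=(1.0, 1.0, 1.0)):
    """Per-axis smoothing sigma at a scale-space level (illustrative).
    Isotropic voxels give sigma_factor * (2**level - 1) on every axis;
    for anisotropic voxels, smoothing is scaled down along axes whose
    spacing is already coarse. The (min / spacing) weighting is an
    assumption for illustration only."""
    spacing = np.asarray(spacing, dtype=float)
    base = sigma_factor * (2.0 ** level - 1.0)
    return base * (spacing.min() / spacing)

level_sigmas(0)                      # no smoothing at the finest level
level_sigmas(2)                      # 0.2 * (2**2 - 1) = 0.6 along every axis
level_sigmas(2, spacing=(1, 1, 3))   # weaker smoothing along the coarse 3 mm axis
```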
Voxel-to-space transformation at a given level.
Returns the voxel-to-space transformation associated with the subsampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get affine transform from
Space-to-voxel transformation at a given level.
Returns the space-to-voxel transformation associated with the subsampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get the inverse transform from
Shape the subsampled image must have at a particular level.
Returns the shape the subsampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get the subsampled shape from
Ratio of voxel size from pyramid level from_level to to_level.
Given two scale space resolutions a = from_level, b = to_level, returns the ratio of voxel size at level b to voxel size at level a (the factor that must be used to multiply voxels at level a to ‘expand’ them to level b).
the resolution to expand voxels from
the resolution to expand voxels to
the expand factors (a scalar for each voxel dimension)
Smoothed image at a given level.
Returns the smoothed image at the requested level in the Scale Space.
the scale space level to get the smooth image from
Adjustment factor for input spacing to reflect voxel sizes at level.
Returns the scaling factor that needs to be applied to the input spacing (the voxel sizes of the image at level 0 of the scale space) to transform them to voxel sizes at the requested level.
the scale space level to get the scalings from
Smoothing parameters used at a given level.
Returns the smoothing parameters (a scalar for each axis) used at the requested level of the scale space
the scale space level to get the smoothing parameters from
Spacings the subsampled image must have at a particular level.
Returns the spacings (voxel sizes) the subsampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get the subsampled shape from
SymmetricDiffeomorphicRegistration
Bases: dipy.align.imwarp.DiffeomorphicRegistration
Methods

Return the resulting diffeomorphic map. 

Starts the optimization 

Sets the number of iterations at each pyramid level 

Composition of the current displacement field with the given field 
Symmetric Diffeomorphic Registration (SyN) Algorithm
Performs the multiresolution optimization algorithm for nonlinear registration using a given similarity metric.
the metric to be optimized
the number of iterations at each level of the Gaussian Pyramid (the length of the list defines the number of pyramid levels to be used)
the optimization will stop when the estimated derivative of the energy profile w.r.t. time falls below this threshold
the number of iterations to be performed by the displacement field inversion algorithm
the length of the maximum displacement vector of the update displacement field at each iteration
parameter of the scale-space smoothing kernel. For example, the std. dev. of the kernel will be factor*(2^i) in the isotropic case, where i = 0, 1, …, n_scales is the scale
the displacement field inversion algorithm will stop iterating when the inversion error falls below this threshold
a function receiving a SymmetricDiffeomorphicRegistration object to be called after each iteration (this optimizer will call this function passing self as parameter)
Return the resulting diffeomorphic map.
Returns the DiffeomorphicMap registering the moving image towards the static image.
Starts the optimization
the image to be used as reference during optimization. The displacement fields will have the same discretization as the static image.
the image to be used as “moving” during optimization. Since the deformation fields’ discretization is the same as the static image, it is necessary to prealign the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “prealigning” the moving image towards the static using an affine transformation given by the ‘prealign’ matrix
the voxel-to-space transformation associated to the static image
the voxel-to-space transformation associated to the moving image
the affine transformation (operating on the physical space) prealigning the moving image towards the static
the diffeomorphic map that brings the moving image towards the static one in the forward direction (i.e. by calling static_to_ref.transform) and the static image towards the moving one in the backward direction (i.e. by calling static_to_ref.transform_inverse).
Composition of the current displacement field with the given field
Interpolates new displacement at the locations defined by current_displacement. Equivalently, computes the composition C of the given displacement fields as C(x) = B(A(x)), where A is current_displacement and B is new_displacement. This function is intended to be used with deformation fields of the same sampling (e.g. to be called by a registration algorithm).
the displacement field defining where to interpolate new_displacement
the displacement field to be warped by current_displacement
the space-to-grid transform associated with the displacements’ grid (we assume that both displacements are discretized over the same grid)
scaling factor applied to d2. The effect may be interpreted as moving d1 displacements along a factor (time_scaling) of d2.
the warped displacement field
floating
Extracts the rotational and spacing components from a matrix
Extracts the rotational and spacing (voxel dimensions) components from a matrix. An image gradient represents the local variation of the image’s gray values per voxel. Since we are iterating on the physical space, we need to compute the gradients as variation per millimeter, so we need to divide each gradient’s component by the voxel size along the corresponding axis; that is what the spacings are used for. Since the image’s gradients are oriented along the grid axes, we also need to reorient the gradients to be given in physical space coordinates.
the matrix transforming grid coordinates to physical space.
the rotational component of the input matrix
the scaling component (voxel size) of the matrix
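For a matrix with orthogonal columns, the split described here can be sketched with numpy: each column’s norm is the voxel size along that grid axis, and normalizing the columns leaves the rotational part (a sketch with made-up values, not dipy's implementation):

```python
import numpy as np

# grid-to-space matrix: 3 mm and 2 mm in-plane spacings with a 90-degree
# in-plane rotation, 1 mm through-plane (illustrative values)
A = np.array([[0., -2., 0.],
              [3.,  0., 0.],
              [0.,  0., 1.]])

spacings = np.linalg.norm(A, axis=0)  # voxel size along each grid axis
R = A / spacings                      # unit-norm columns: the rotational part
# image gradients computed on the grid are divided by `spacings` and
# reoriented by R to obtain variation per millimeter in physical space
```

The factorization satisfies A = R @ diag(spacings), which is exactly the rotation-times-spacing split the text describes.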
Returns the matrix product A.dot(B) considering None as the identity
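A sketch matching this description, so that optional pre-align matrices (which may be None) compose cleanly:

```python
import numpy as np

def mult_aff(A, B):
    """Return A.dot(B), treating a None operand as the identity (sketch)."""
    if A is None:
        return B
    if B is None:
        return A
    return A.dot(B)

T = np.diag([2.0, 2.0, 1.0])
mult_aff(T, None)     # returns T unchanged
mult_aff(None, None)  # None: identity composed with identity
```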
CCMetric
Bases: dipy.align.metrics.SimilarityMetric
Methods
Computes one step bringing the static image towards the moving. 

Computes one step bringing the moving image towards the static. 

Frees the resources allocated during initialization 

Numerical value assigned by this metric to the current image pair 

Prepares the metric to compute one displacement field iteration. 


Informs the metric how many pyramid levels are above the current one 

Informs the metric how many pyramid levels are below the current one 

Sets the moving image being compared against the static one. 

Sets the static image being compared against the moving one. 

This is called by the optimizer just after setting the moving image 

This is called by the optimizer just after setting the static image. 
Normalized Cross-Correlation Similarity metric.
the dimension of the image domain
the standard deviation of the Gaussian smoothing kernel to be applied to the update field at each iteration
the radius of the squared (cubic) neighborhood at each voxel to be considered to compute the cross correlation
Computes one step bringing the static image towards the moving.
Computes the update displacement field to be used for registration of the static image towards the moving image
Computes one step bringing the moving image towards the static.
Computes the update displacement field to be used for registration of the moving image towards the static image
Numerical value assigned by this metric to the current image pair
Returns the Cross Correlation (data term) energy computed at the largest iteration
Prepares the metric to compute one displacement field iteration.
Precomputes the cross-correlation factors for efficient computation of the gradient of the Cross Correlation w.r.t. the displacement field. It also precomputes the image gradients in the physical space by reorienting the gradients in the voxel space using the corresponding affine transformations.
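As a point of reference, the global form of normalized cross-correlation takes only a few lines of numpy; CCMetric evaluates the same quantity locally over a (2*radius+1)^d neighborhood around each voxel, so this sketch shows the global variant only, not dipy's local implementation:

```python
import numpy as np

def ncc(a, b):
    """Global normalized cross-correlation of two images (sketch)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

x = np.random.default_rng(0).random((8, 8))
ncc(x, 3.0 * x + 1.0)   # affine intensity changes do not affect NCC: 1.0
ncc(x, -x)              # perfectly anti-correlated images give -1.0
```

Invariance to affine intensity changes is what makes this metric robust to smooth intensity bias fields when computed locally.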
EMMetric
Bases: dipy.align.metrics.SimilarityMetric
Methods
Computes one step bringing the static image towards the moving. 


Demons step for EM metric 
Computes one step bringing the reference image towards the static. 


Computes the Gauss-Newton energy minimization step 
Frees the resources allocated during initialization 

The numerical value assigned by this metric to the current image pair 

Prepares the metric to compute one displacement field iteration. 


Informs the metric how many pyramid levels are above the current one 

Informs the metric how many pyramid levels are below the current one 

Sets the moving image being compared against the static one. 

Sets the static image being compared against the moving one. 
This is called by the optimizer just after setting the moving image. 

This is called by the optimizer just after setting the static image. 
Expectation-Maximization Metric
Similarity metric based on the Expectation-Maximization algorithm to handle multimodal images. The transfer function is modeled as a set of hidden random variables that are estimated at each iteration of the algorithm.
the dimension of the image domain
smoothness parameter, the larger the value the smoother the deformation field
number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level, that parameter must be set for the optimizer, not the metric)
number of quantization levels (equal to the number of hidden random variables in the EM algorithm)
if True, the gradient of the expected static image under the moving modality will be added to the gradient of the moving image, similarly, the gradient of the expected moving image under the static modality will be added to the gradient of the static image.
the optimization schedule to be used in the multi-resolution Gauss-Seidel optimization algorithm (not used if Demons Step is selected)
Computes one step bringing the static image towards the moving.
Computes the update displacement field to be used for registration of the static image towards the moving image
Demons step for EM metric
if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
the Demons step
Computes one step bringing the reference image towards the static.
Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm
Computes the Gauss-Newton energy minimization step
Computes the Newton step to minimize this energy, i.e., minimizes the linearized energy function with respect to the regularized displacement field (this step does not require post-smoothing, as opposed to the demons step, which does not include regularization). To accelerate convergence we use the multigrid Gauss-Seidel algorithm proposed by Bruhn and Weickert [Bruhn05]
if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
the Newton step
References
[Bruhn05] Andres Bruhn and Joachim Weickert, “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.
The numerical value assigned by this metric to the current image pair
Returns the EM (data term) energy computed at the largest iteration
Prepares the metric to compute one displacement field iteration.
Precomputes the transfer functions (hidden random variables) and variances of the estimators. Also precomputes the gradient of both input images. Note that once the images are transformed to the opposite modality, the gradient of the transformed images can be used with the gradient of the corresponding modality in the same fashion as diff-demons does for mono-modality images. If the flag self.use_double_gradient is True these gradients are averaged.
This is called by the optimizer just after setting the moving image.
EMMetric takes advantage of the image dynamics by computing the current moving image mask from the original_moving_image mask (warped by nearest neighbor interpolation)
the original moving image from which the current moving image was generated, the current moving image is the one that was provided via ‘set_moving_image(…)’, which may not be the same as the original moving image but a warped version of it.
the transformation that was applied to the original_moving_image to generate the current moving image
This is called by the optimizer just after setting the static image.
EMMetric takes advantage of the image dynamics by computing the current static image mask from the original static image mask (warped by nearest neighbor interpolation)
the original static image from which the current static image was generated, the current static image is the one that was provided via ‘set_static_image(…)’, which may not be the same as the original static image but a warped version of it (even the static image changes during Symmetric Normalization, not only the moving one).
the transformation that was applied to the original_static_image to generate the current static image
SSDMetric
Bases: dipy.align.metrics.SimilarityMetric
Methods
Computes one step bringing the static image towards the moving. 


Demons step for SSD metric 
Computes one step bringing the reference image towards the static. 


Computes the Gauss-Newton energy minimization step 
Nothing to free for the SSD metric 

The numerical value assigned by this metric to the current image pair 

Prepares the metric to compute one displacement field iteration. 


Informs the metric how many pyramid levels are above the current one 

Informs the metric how many pyramid levels are below the current one 

Sets the moving image being compared against the static one. 

Sets the static image being compared against the moving one. 

This is called by the optimizer just after setting the moving image 

This is called by the optimizer just after setting the static image. 
Sum of Squared Differences (SSD) Metric
Similarity metric for (monomodal) nonlinear image registration defined by the sum of squared differences (SSD)
the dimension of the image domain
smoothness parameter, the larger the value the smoother the deformation field
number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level, that parameter must be set for the optimizer, not the metric)
the displacement field step to be computed when ‘compute_forward’ and ‘compute_backward’ are called. Either ‘demons’ or ‘gauss_newton’
Computes one step bringing the static image towards the moving.
Computes the updated displacement field to be used for registration of the static image towards the moving image
Demons step for SSD metric
Computes the demons step proposed by Vercauteren et al. [Vercauteren09] for the SSD metric.
if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
the Demons step
References
[Vercauteren09] Tom Vercauteren, Xavier Pennec, Aurelien Perchant and Nicholas Ayache, “Diffeomorphic Demons: Efficient Non-parametric Image Registration”, NeuroImage, 2009.
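The textbook form of that step can be sketched in one dimension with numpy. This is the Thirion/Vercauteren-style update, not dipy's exact code, and `sigma_sq` is a hypothetical normalization constant:

```python
import numpy as np

def demons_step_1d(static, moving, sigma_sq=1.0):
    """Demons update for the SSD metric, 1-D sketch:
    u = (moving - static) * grad(static) / (grad(static)**2 + diff**2 / sigma_sq)
    """
    diff = moving - static
    g = np.gradient(static)
    denom = g * g + diff * diff / sigma_sq
    safe = denom > 1e-9                 # avoid division by zero in flat regions
    u = np.zeros_like(denom)
    u[safe] = diff[safe] * g[safe] / denom[safe]
    return u

f = np.linspace(0.0, 1.0, 16)
demons_step_1d(f, f)                    # identical images: zero update everywhere
```

In the full algorithm this update field is smoothed with a Gaussian kernel after each iteration, which is the regularization the Gauss-Newton variant builds into the linear system instead.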
Computes one step bringing the reference image towards the static.
Computes the update displacement field to be used for registration of the moving image towards the static image
Computes the Gauss-Newton energy minimization step
Minimizes the linearized energy function (Newton step) defined by the sum of squared differences of corresponding pixels of the input images with respect to the displacement field.
if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)
if forward_step==True, the forward SSD Gauss-Newton step, else, the backward step
SimilarityMetric
Bases: object
Methods
Computes one step bringing the static image towards the moving. 

Computes one step bringing the reference image towards the static. 

Releases the resources no longer needed by the metric 

Numerical value assigned by this metric to the current image pair 

Prepares the metric to compute one displacement field iteration. 


Informs the metric how many pyramid levels are above the current one 

Informs the metric how many pyramid levels are below the current one 

Sets the moving image being compared against the static one. 

Sets the static image being compared against the moving one. 
This is called by the optimizer just after setting the moving image 

This is called by the optimizer just after setting the static image. 
Similarity Metric abstract class
A similarity metric is in charge of keeping track of the numerical value of the similarity (or distance) between the two given images. It also computes the update field for the forward and inverse displacement fields to be used in a gradient-based optimization algorithm. Note that this metric does not depend on any transformation (affine or nonlinear), so it assumes the static and moving images are already warped
the dimension of the image domain
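The contract the optimizer expects from a metric can be sketched as a skeletal class. Method names follow the table above; the bodies are placeholders, so this illustrates the interface, not a working metric:

```python
import numpy as np

class ConstantMetric:
    """Skeleton of the SimilarityMetric contract described above (sketch)."""
    def __init__(self, dim):
        self.dim = dim
        self.static_image = None
        self.moving_image = None
    def set_static_image(self, static):
        self.static_image = static
    def set_moving_image(self, moving):
        self.moving_image = moving
    def initialize_iteration(self):
        pass                         # precompute gradients, factors, ...
    def compute_forward(self):       # update field: moving -> static
        return np.zeros(self.static_image.shape + (self.dim,))
    def compute_backward(self):      # update field: static -> moving
        return np.zeros(self.moving_image.shape + (self.dim,))
    def get_energy(self):
        return 0.0                   # a real metric returns the data term
    def free_iteration(self):
        pass                         # release per-iteration buffers

m = ConstantMetric(2)
m.set_static_image(np.zeros((4, 4)))
m.set_moving_image(np.zeros((4, 4)))
m.initialize_iteration()
m.compute_forward().shape            # one displacement vector per voxel: (4, 4, 2)
```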
Computes one step bringing the static image towards the moving.
Computes the backward update field to register the static image towards the moving image in a gradient-based optimization algorithm
Computes one step bringing the reference image towards the static.
Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm
Releases the resources no longer needed by the metric
This method is called by the RegistrationOptimizer after the required iterations have been computed (forward and / or backward) so that the SimilarityMetric can safely delete any data it computed as part of the initialization
Numerical value assigned by this metric to the current image pair
Must return the numeric value of the similarity between the given static and moving images
Prepares the metric to compute one displacement field iteration.
This method will be called before any compute_forward or compute_backward call; it allows the metric to precompute any useful information for speeding up the update computations. This initialization was needed in ANTS because the updates are called once per voxel. In Python this is impractical, though.
Informs the metric how many pyramid levels are above the current one
Informs this metric the number of pyramid levels above the current one. The metric may change its behavior (e.g. number of inner iterations) accordingly
the number of levels above the current Gaussian Pyramid level
Informs the metric how many pyramid levels are below the current one
Informs this metric the number of pyramid levels below the current one. The metric may change its behavior (e.g. number of inner iterations) accordingly
the number of levels below the current Gaussian Pyramid level
Sets the moving image being compared against the static one.
Sets the moving image. The default behavior (of this abstract class) is simply to assign the reference to an attribute, but generalizations of the metric may need to perform other operations
the moving image
Sets the static image being compared against the moving one.
Sets the static image. The default behavior (of this abstract class) is simply to assign the reference to an attribute, but generalizations of the metric may need to perform other operations
the static image
This is called by the optimizer just after setting the moving image
This method allows the metric to compute any useful information from knowing how the current moving image was generated (as the transformation of an original moving image). This method is called by the optimizer just after it sets the moving image. Transformation will be an instance of DiffeomorphicMap or None if the original_moving_image equals self.moving_image.
original image from which the current moving image was generated
the transformation that was applied to the original image to generate the current moving image
This is called by the optimizer just after setting the static image.
This method allows the metric to compute any useful information from knowing how the current static image was generated (as the transformation of an original static image). This method is called by the optimizer just after it sets the static image. Transformation will be an instance of DiffeomorphicMap or None if the original_static_image equals self.static_image.
original image from which the current static image was generated
the transformation that was applied to original image to generate the current static image
floating
Return the gradient of an Ndimensional array.
The gradient is computed using second order accurate central differences in the interior points and either first or second order accurate one-sided (forward or backward) differences at the boundaries. The returned gradient hence has the same shape as the input array.
An Ndimensional array containing samples of a scalar function.
Spacing between f values. Default unitary spacing for all dimensions. Spacing can be specified using:
single scalar to specify a sample distance for all dimensions.
N scalars to specify a constant sample distance for each dimension. i.e. dx, dy, dz, …
N arrays to specify the coordinates of the values along each dimension of F. The length of the array must match the size of the corresponding dimension
Any combination of N scalars/arrays with the meaning of 2. and 3.
If axis is given, the number of varargs must equal the number of axes. Default: 1.
Gradient is calculated using N-th order accurate differences at the boundaries. Default: 1.
New in version 1.9.1.
Gradient is calculated only along the given axis or axes The default (axis = None) is to calculate the gradient for all the axes of the input array. axis may be negative, in which case it counts from the last to the first axis.
New in version 1.11.0.
A list of ndarrays (or a single ndarray if there is only one dimension) corresponding to the derivatives of f with respect to each dimension. Each derivative has the same shape as f.
Notes
Assuming that \(f\in C^{3}\) (i.e., \(f\) has at least 3 continuous derivatives) and let \(h_{*}\) be a non-homogeneous stepsize, we minimize the “consistency error” \(\eta_{i}\) between the true gradient and its estimate from a linear combination of the neighboring grid-points:
\[\eta_{i} = f_{i}^{(1)} - \left[ \alpha f(x_{i}) + \beta f(x_{i} + h_{d}) + \gamma f(x_{i} - h_{s}) \right]\]
By substituting \(f(x_{i} + h_{d})\) and \(f(x_{i} - h_{s})\) with their Taylor series expansion, this translates into solving the following linear system:
\[\left\{ \begin{array}{r} \alpha + \beta + \gamma = 0 \\ \beta h_{d} - \gamma h_{s} = 1 \\ \beta h_{d}^{2} + \gamma h_{s}^{2} = 0 \end{array} \right.\]
The resulting approximation of \(f_{i}^{(1)}\) is the following:
\[\hat f_{i}^{(1)} = \frac{h_{s}^{2} f(x_{i} + h_{d}) + (h_{d}^{2} - h_{s}^{2}) f(x_{i}) - h_{d}^{2} f(x_{i} - h_{s})}{h_{s} h_{d} (h_{d} + h_{s})} + \mathcal{O}\left(\frac{h_{d} h_{s}^{2} + h_{s} h_{d}^{2}}{h_{d} + h_{s}}\right)\]
It is worth noting that if \(h_{s} = h_{d}\) (i.e., data are evenly spaced) we find the standard second order approximation:
\[\hat f_{i}^{(1)} = \frac{f(x_{i+1}) - f(x_{i-1})}{2h} + \mathcal{O}(h^{2})\]
With a similar procedure the forward/backward approximations used for boundaries can be derived.
References
Quarteroni A., Sacco R., Saleri F. (2007) Numerical Mathematics (Texts in Applied Mathematics). New York: Springer.
Durran D. R. (1999) Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. New York: Springer.
Fornberg B. (1988) Generation of Finite Difference Formulas on Arbitrarily Spaced Grids, Mathematics of Computation 51, no. 184: 699-706.
Examples
>>> f = np.array([1, 2, 4, 7, 11, 16], dtype=float)
>>> np.gradient(f)
array([1. , 1.5, 2.5, 3.5, 4.5, 5. ])
>>> np.gradient(f, 2)
array([0.5 , 0.75, 1.25, 1.75, 2.25, 2.5 ])
Spacing can be also specified with an array that represents the coordinates of the values F along the dimensions. For instance a uniform spacing:
>>> x = np.arange(f.size)
>>> np.gradient(f, x)
array([1. , 1.5, 2.5, 3.5, 4.5, 5. ])
Or a non uniform one:
>>> x = np.array([0., 1., 1.5, 3.5, 4., 6.], dtype=float)
>>> np.gradient(f, x)
array([1. , 3. , 3.5, 6.7, 6.9, 2.5])
For two dimensional arrays, the return will be two arrays ordered by axis. In this example the first array stands for the gradient in rows and the second one in columns direction:
>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float))
[array([[ 2.,  2., -1.],
       [ 2.,  2., -1.]]), array([[1. , 2.5, 4. ],
       [1. , 1. , 1. ]])]
In this example the spacing is also specified: uniform for axis=0 and non uniform for axis=1
>>> dx = 2.
>>> y = [1., 1.5, 3.5]
>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float), dx, y)
[array([[ 1. ,  1. , -0.5],
       [ 1. ,  1. , -0.5]]), array([[2. , 2. , 2. ],
       [2. , 1.7, 0.5]])]
It is possible to specify how boundaries are treated using edge_order
>>> x = np.array([0, 1, 2, 3, 4])
>>> f = x**2
>>> np.gradient(f, edge_order=1)
array([1., 2., 4., 6., 7.])
>>> np.gradient(f, edge_order=2)
array([0., 2., 4., 6., 8.])
The axis keyword can be used to specify a subset of axes of which the gradient is calculated
>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float), axis=0)
array([[ 2.,  2., -1.],
       [ 2.,  2., -1.]])
Multi-resolution Gauss-Seidel solver using V-type cycles
Multi-resolution Gauss-Seidel solver: solves the Gauss-Newton linear system by first filtering (GS-iterate) the current level, then solves for the residual at a coarser resolution and finally refines the solution at the current resolution. This scheme corresponds to the V-cycle proposed by Bruhn and Weickert [Bruhn05].
number of levels of the multi-resolution algorithm (it will be called recursively until level n == 0)
the number of iterations at each multi-resolution level
the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)
the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.
the gradient of the moving image
right-hand side of the linear system to be solved in Weickert’s multi-resolution algorithm
smoothness parameter, the larger its value the smoother the displacement field
the displacement field to start the optimization from
iteration
References
[Bruhn05] Andres Bruhn and Joachim Weickert, “Towards ultimate motion estimation: combining the highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.
Multi-resolution Gauss-Seidel solver using V-type cycles
Multi-resolution Gauss-Seidel solver: solves the linear system by first filtering (GS-iterate) the current level, then solves for the residual at a coarser resolution and finally refines the solution at the current resolution. This scheme corresponds to the V-cycle proposed by Bruhn and Weickert [Bruhn05]: Andres Bruhn and Joachim Weickert, “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.
number of levels of the multi-resolution algorithm (it will be called recursively until level n == 0)
the number of iterations at each multi-resolution level
the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)
the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.
the gradient of the moving image
right-hand side of the linear system to be solved in Weickert’s multi-resolution algorithm
smoothness parameter, the larger its value the smoother the displacement field
the displacement field to start the optimization from
iteration
ParzenJointHistogram
Bases: object
Methods

Bin index associated with the given normalized intensity 
Maps intensity x to the range covered by the moving histogram 

Maps intensity x to the range covered by the static histogram 


Compute histogram settings to store the PDF of input images 

Computes the Gradient of the joint PDF w.r.t. 

Computes the Gradient of the joint PDF w.r.t. 

Computes the Probability Density Functions of two images 

Computes the Probability Density Functions from a set of samples 
Computes joint histogram and derivatives with Parzen windows
Base class to compute joint and marginal probability density functions and their derivatives with respect to a transform’s parameters. The smooth histograms are computed by using Parzen windows [Parzen62] with a cubic spline kernel, as proposed by Mattes et al. [Mattes03]. This implementation is not tied to any optimization (registration) method; the idea is that information-theoretic matching functionals (such as Mutual Information) can inherit from this class to perform the low-level computations of the joint intensity distributions and its gradient w.r.t. the transform parameters. The derived class can then compute the similarity/dissimilarity measure and gradient, and finally communicate the results to the appropriate optimizer.
the number of bins of the joint and marginal probability density functions (the actual number of bins of the joint PDF is nbins**2)
Notes
We need this class in cython to allow _joint_pdf_gradient_dense_2d and _joint_pdf_gradient_dense_3d to use a nogil Jacobian function (obtained from an instance of the Transform class), which allows us to evaluate Jacobians at all the sampling points (maybe the full grid) inside a nogil loop.
The reason we need a class is to encapsulate all the parameters related to the joint and marginal distributions.
References
[Parzen62] Parzen, E. On the estimation of a probability density function and the mode. Annals of Mathematical Statistics, 33(3), 1065-1076, 1962.
[Mattes03] Mattes, D., Haynor, D. R., Vesselle, H., Lewellen, T. K., & Eubank, W. PET-CT image registration in the chest using free-form deformations. IEEE Transactions on Medical Imaging, 22(1), 120-128, 2003.
Bin index associated with the given normalized intensity
The return value is an integer in [padding, nbins - 1 - padding]
intensity value normalized to the range covered by the histogram
the bin index associated with the given normalized intensity
Maps intensity x to the range covered by the moving histogram
If the input intensity is in [self.mmin, self.mmax] then the normalized intensity will be in [self.padding, self.nbins - self.padding]
the intensity to be normalized
normalized intensity to the range covered by the moving histogram
Maps intensity x to the range covered by the static histogram
If the input intensity is in [self.smin, self.smax] then the normalized intensity will be in [self.padding, self.nbins - self.padding]
the intensity to be normalized
normalized intensity to the range covered by the static histogram
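The two normalization maps and the bin index rule above can be sketched in plain Python (`normalize` and `bin_index` are hypothetical helpers, not DIPY's actual Cython methods, and the padding default of 2 is an assumption):

```python
def normalize(x, xmin, xmax, nbins, padding=2):
    # Map intensity x in [xmin, xmax] linearly onto
    # [padding, nbins - padding], as described above.
    return padding + (x - xmin) / (xmax - xmin) * (nbins - 2 * padding)

def bin_index(xnorm, nbins, padding=2):
    # Integer bin for a normalized intensity, clamped to
    # [padding, nbins - 1 - padding].
    i = int(xnorm)
    return min(max(i, padding), nbins - 1 - padding)
```

For example, with nbins=32, the minimum intensity maps to bin 2 and the maximum to bin 29.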
Compute histogram settings to store the PDF of input images
static image
moving image
mask of static object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, the behaviour is equivalent to smask=ones_like(static)
mask of moving object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, the behaviour is equivalent to mmask=ones_like(moving)
Computes the Gradient of the joint PDF w.r.t. transform parameters
Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter.
The gradient is stored in self.joint_grad.
parameters of the transformation to compute the gradient from
the transformation with respect to whose parameters the gradient must be computed
static image
moving image
we assume that both images have already been sampled at a common grid. This transform must map voxel coordinates of this common grid to physical coordinates of its corresponding voxel in the moving image. For example, if the moving image was sampled on the static image’s grid (this is the typical setting) using an aligning matrix A, then
grid2world = A.dot(static_affine)
where static_affine is the transformation mapping static image’s grid coordinates to physical space.
the gradient of the moving image
mask of static object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). The default is None, indicating all voxels are considered.
mask of moving object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). The default is None, indicating all voxels are considered.
Computes the Gradient of the joint PDF w.r.t. transform parameters
Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter.
The list of intensities sval and mval are assumed to be sampled from the static and moving images, respectively, at the same physical points. Of course, the images may not be perfectly aligned at the moment the sampling was performed. The resulting gradient corresponds to the paired intensities according to the alignment at the moment the images were sampled.
The gradient is stored in self.joint_grad.
parameters to compute the gradient at
the transformation with respect to whose parameters the gradient must be computed
sampled intensities from the static image at sampled_points
sampled intensities from the moving image at sampled_points
coordinates (in physical space) of the points the images were sampled at
the gradient of the moving image at the sample points
Computes the Probability Density Functions of two images
The joint PDF is stored in self.joint. The marginal distributions corresponding to the static and moving images are computed and stored in self.smarginal and self.mmarginal, respectively.
static image
moving image
mask of static object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, ones_like(static) is used as mask.
mask of moving object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, ones_like(moving) is used as mask.
Computes the Probability Density Functions from a set of samples
The list of intensities sval and mval are assumed to be sampled from the static and moving images, respectively, at the same physical points. Of course, the images may not be perfectly aligned at the moment the sampling was performed. The resulting distributions corresponds to the paired intensities according to the alignment at the moment the images were sampled.
The joint PDF is stored in self.joint. The marginal distributions corresponding to the static and moving images are computed and stored in self.smarginal and self.mmarginal, respectively.
sampled intensities from the static image at sampled_points
sampled intensities from the moving image at sampled_points
Computes the mutual information and its gradient (if requested)
the joint intensity distribution
the gradient of the joint distribution w.r.t. the transformation parameters
the marginal intensity distribution of the static image
the marginal intensity distribution of the moving image
the buffer in which to write the gradient of the mutual information. If None, the gradient is not computed
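The quantity computed here can be illustrated with a small NumPy sketch that evaluates mutual information from a joint distribution (the gradient is omitted; `mutual_information` is a hypothetical helper, not the class method):

```python
import numpy as np

def mutual_information(joint):
    # Mutual information from a joint intensity distribution:
    # sum_ij p(i,j) * log( p(i,j) / (p_s(i) * p_m(j)) )
    joint = joint / joint.sum()
    smarginal = joint.sum(axis=1, keepdims=True)  # static marginal
    mmarginal = joint.sum(axis=0, keepdims=True)  # moving marginal
    nz = joint > 0  # skip empty bins, where p * log p -> 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (smarginal * mmarginal)[nz])))
```

A perfectly correlated 2-bin joint distribution gives MI = log 2, while an independent (outer-product) joint distribution gives MI = 0.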
Take floor(total_voxels/k) samples from a (2D or 3D) grid
The sampling is made by taking all pixels whose index (in lexicographical order) is a multiple of k. Each selected point is slightly perturbed by adding a realization of a normally distributed random variable and then mapped to physical space by the given grid-to-space transform.
The lexicographical order of the pixels in a grid of shape (a, b, c) is defined by assigning to each voxel position (i, j, k) the integer index
F((i, j, k)) = i * (b * c) + j * (c) + k
and sorting increasingly by this index.
the sampling rate, as described before
the shape of the grid to be sampled
the grid-to-space transform
the standard deviation of the Normal random distortion to be applied to the sampled points
the matrix whose rows are the sampled points
Examples
>>> import numpy as np
>>> from dipy.align.parzenhist import sample_domain_regular
>>> import dipy.align.vector_fields as vf
>>> shape = np.array((10, 10), dtype=np.int32)
>>> sigma = 0
>>> dim = len(shape)
>>> grid2world = np.eye(dim+1)
>>> n = shape[0]*shape[1]
>>> k = 2
>>> samples = sample_domain_regular(k, shape, grid2world, sigma)
>>> (samples.shape[0], samples.shape[1]) == (n//k, dim)
True
>>> isamples = np.array(samples, dtype=np.int32)
>>> indices = (isamples[:, 0] * shape[1] + isamples[:, 1])
>>> len(set(indices)) == len(indices)
True
>>> (indices%k).sum()
0
Apply an affine transformation.
Given an output image pixel index vector o, the pixel value is determined from the input image at position np.dot(matrix, o) + offset.
This does ‘pull’ (or ‘backward’) resampling, transforming the output space to the input to locate data. Affine transformations are often described in the ‘push’ (or ‘forward’) direction, transforming input to output. If you have a matrix for the ‘push’ transformation, use its inverse (numpy.linalg.inv()) in this function.
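A minimal sketch of the ‘push’-to-‘pull’ inversion described above, using scipy.ndimage.affine_transform on a toy image:

```python
import numpy as np
from scipy.ndimage import affine_transform

img = np.zeros((5, 5))
img[1, 2] = 1.0

# 'push' transform: shift the image content by +1 along each axis
push_matrix = np.eye(2)
push_offset = np.array([1.0, 1.0])

# affine_transform expects the 'pull' (output -> input) mapping,
# so invert the matrix and transform the offset accordingly
pull_matrix = np.linalg.inv(push_matrix)
pull_offset = -pull_matrix.dot(push_offset)

shifted = affine_transform(img, pull_matrix, offset=pull_offset, order=0)
# the bright pixel moves from (1, 2) to (2, 3)
```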
The input array.
The inverse coordinate transformation matrix, mapping output coordinates to input coordinates. If ndim is the number of dimensions of input, the given matrix must have one of the following shapes:
(ndim, ndim)
: the linear transformation matrix for each output coordinate.
(ndim,)
: assume that the 2D transformation matrix is diagonal, with the diagonal specified by the given value. A more efficient algorithm is then used that exploits the separability of the problem.
(ndim + 1, ndim + 1)
: assume that the transformation is specified using homogeneous coordinates [1]. In this case, any value passed to offset is ignored.
(ndim, ndim + 1)
: as above, but the bottom row of a homogeneous transformation matrix is always [0, 0, ..., 1], and may be omitted.
The offset into the array where the transform is applied. If a float, offset is the same for each axis. If a sequence, offset should contain one value for each axis.
Shape tuple.
The array in which to place the output, or the dtype of the returned array. By default an array of the same dtype as input will be created.
The order of the spline interpolation, default is 3. The order has to be in the range 0-5.
The mode parameter determines how the input array is extended beyond its boundaries. Default is ‘constant’. Behavior for each valid value is as follows (see additional plots and details on boundary modes):
The input is extended by reflecting about the edge of the last pixel. This mode is also sometimes referred to as half-sample symmetric.
This is a synonym for ‘reflect’.
The input is extended by filling all values beyond the edge with the same constant value, defined by the cval parameter. No interpolation is performed beyond the edges of the input.
The input is extended by filling all values beyond the edge with the same constant value, defined by the cval parameter. Interpolation occurs for samples outside the input’s extent as well.
The input is extended by replicating the last pixel.
The input is extended by reflecting about the center of the last pixel. This mode is also sometimes referred to as whole-sample symmetric.
The input is extended by wrapping around to the opposite edge.
The input is extended by wrapping around to the opposite edge, but in a way such that the last point and initial point exactly overlap. In this case it is not well defined which sample will be chosen at the point of overlap.
Value to fill past edges of input if mode is ‘constant’. Default is 0.0.
Determines if the input array is prefiltered with spline_filter before interpolation. The default is True, which will create a temporary float64 array of filtered values if order > 1. If setting this to False, the output will be slightly blurred if order > 1, unless the input is prefiltered, i.e. it is the result of calling spline_filter on the original input.
The transformed input.
Notes
The given matrix and offset are used to find for each point in the output the corresponding coordinates in the input by an affine transformation. The value of the input at those coordinates is determined by spline interpolation of the requested order. Points outside the boundaries of the input are filled according to the given mode.
Changed in version 0.18.0: Previously, the exact interpretation of the affine transformation depended on whether the matrix was supplied as a 1-D or a 2-D array. If a 1-D array was supplied to the matrix parameter, the output pixel value at index o was determined from the input image at position matrix * (o + offset).
For complex-valued input, this function transforms the real and imaginary components independently.
New in version 1.6.0: Complex-valued support added.
References
Determine the effective number of processes for parallelization.
For num_processes = None, return the maximum number of cores retrieved by cpu_count().
For num_processes > 0, return this value.
For num_processes < 0, return the maximal number of cores minus num_processes + 1. In particular num_processes = -1 will use as many cores as possible.
For num_processes = 0 a ValueError is raised.
Desired number of processes to be used.
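These rules can be sketched as a small helper (an illustrative re-implementation, not DIPY's actual function; the `max(1, ...)` guard is an added assumption):

```python
from multiprocessing import cpu_count

def effective_num_processes(num_processes):
    # Resolve a user-supplied num_processes value per the rules above.
    if num_processes == 0:
        raise ValueError("num_processes cannot be 0")
    if num_processes is None:
        return cpu_count()        # use all available cores
    if num_processes < 0:
        # e.g. -1 -> all cores, -2 -> all cores but one, ...
        return max(1, cpu_count() + num_processes + 1)
    return num_processes          # positive: use as given
```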
Reslice data with new voxel resolution defined by new_zooms.
3d volume or 4d volume with datasets
mapping from voxel coordinates to world coordinates
voxel size for (i,j,k) dimensions
new voxel size for (i,j,k) after resampling
order of interpolation for resampling/reslicing: 0 is nearest-neighbor interpolation, 1 is trilinear, etc. If you don’t want any smoothing, 0 is the option you need.
Points outside the boundaries of the input are filled according to the given mode.
Value used for points outside the boundaries of the input if mode=’constant’.
Split the calculation to a pool of children processes. This only applies to 4D data arrays. Default is 1. If < 0 the maximal number of cores minus num_processes + 1 is used (enter -1 to use as many cores as possible). 0 raises an error.
datasets resampled into isotropic voxel size
new affine for the resampled image
Examples
>>> from dipy.io.image import load_nifti
>>> from dipy.align.reslice import reslice
>>> from dipy.data import get_fnames
>>> f_name = get_fnames('aniso_vox')
>>> data, affine, zooms = load_nifti(f_name, return_voxsize=True)
>>> data.shape == (58, 58, 24)
True
>>> zooms
(4.0, 4.0, 5.0)
>>> new_zooms = (3.,3.,3.)
>>> new_zooms
(3.0, 3.0, 3.0)
>>> data2, affine2 = reslice(data, affine, zooms, new_zooms)
>>> data2.shape == (77, 77, 40)
True
IsotropicScaleSpace
Bases: dipy.align.scalespace.ScaleSpace
Methods

Voxel-to-space transformation at a given level. 

Space-to-voxel transformation at a given level. 

Shape the subsampled image must have at a particular level. 

Ratio of voxel size from pyramid level from_level to to_level. 

Smoothed image at a given level. 

Adjustment factor for input-spacing to reflect voxel sizes at level. 

Smoothing parameters used at a given level. 

Spacings the subsampled image must have at a particular level. 

Prints properties of a pyramid level. 
IsotropicScaleSpace.
Computes the Scale Space representation of an image using isotropic smoothing kernels for all scales. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with different smoothing parameters.
This specialization of ScaleSpace allows the user to provide custom scale and smoothing factors for all scales.
the input image, where s is the number of slices, r is the number of rows and c is the number of columns of the input image.
custom scale factors to build the scale space (one factor for each scale).
custom smoothing parameter to build the scale space (one parameter for each scale).
the grid-to-space transform of the image grid. The default is the identity matrix.
the spacing (voxel size) between voxels in physical space. The default is 1.0 along all axes.
if True, all smoothed images will be zero at all voxels that are zero in the input image. The default is False.
ScaleSpace
Bases: object
Methods

Voxel-to-space transformation at a given level. 

Space-to-voxel transformation at a given level. 

Shape the subsampled image must have at a particular level. 

Ratio of voxel size from pyramid level from_level to to_level. 

Smoothed image at a given level. 

Adjustment factor for input-spacing to reflect voxel sizes at level. 

Smoothing parameters used at a given level. 

Spacings the subsampled image must have at a particular level. 

Prints properties of a pyramid level. 
ScaleSpace.
Computes the Scale Space representation of an image. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with increasing smoothing parameter. If the image’s voxels are isotropic, the smoothing will be the same along all directions: at level L = 0, 1, …, the sigma is given by \(s * (2^L - 1)\). If the voxel dimensions are not isotropic, then the smoothing is weaker along low resolution directions.
the input image, where s is the number of slices, r is the number of rows and c is the number of columns of the input image.
the desired number of levels (resolutions) of the scale space
the grid-to-space transform of the image grid. The default is the identity matrix
the spacing (voxel size) between voxels in physical space. The default is 1.0 along all axes
the smoothing factor to be used in the construction of the scale space. The default is 0.2
if True, all smoothed images will be zero at all voxels that are zero in the input image. The default is False.
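A quick numeric illustration of the smoothing schedule above, assuming the default smoothing factor s = 0.2:

```python
# smoothing parameters for a 3-level scale space with sigma_factor s = 0.2:
# sigma(L) = s * (2**L - 1), so level 0 is the unsmoothed input image
s = 0.2
sigmas = [s * (2 ** L - 1) for L in range(3)]
# sigmas -> [0.0, 0.2, 0.6] (up to floating-point rounding)
```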
Voxel-to-space transformation at a given level.
Returns the voxel-to-space transformation associated with the subsampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get affine transform from
Space-to-voxel transformation at a given level.
Returns the space-to-voxel transformation associated with the subsampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get the inverse transform from
Shape the subsampled image must have at a particular level.
Returns the shape the subsampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get the subsampled shape from
Ratio of voxel size from pyramid level from_level to to_level.
Given two scale space resolutions a = from_level, b = to_level, returns the ratio of voxel size at level b to voxel size at level a (the factor that must be used to multiply voxels at level a to ‘expand’ them to level b).
the resolution to expand voxels from
the resolution to expand voxels to
the expand factors (a scalar for each voxel dimension)
Smoothed image at a given level.
Returns the smoothed image at the requested level in the Scale Space.
the scale space level to get the smooth image from
Adjustment factor for input-spacing to reflect voxel sizes at level.
Returns the scaling factor that needs to be applied to the input spacing (the voxel sizes of the image at level 0 of the scale space) to transform them to voxel sizes at the requested level.
the scale space level to get the scalings from
Smoothing parameters used at a given level.
Returns the smoothing parameters (a scalar for each axis) used at the requested level of the scale space
the scale space level to get the smoothing parameters from
Spacings the subsampled image must have at a particular level.
Returns the spacings (voxel sizes) the subsampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the subsampled images must have).
the scale space level to get the subsampled shape from
floating
Multidimensional Gaussian filter.
The input array.
Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.
The order of the filter along each axis is given as a sequence of integers, or as a single number. An order of 0 corresponds to convolution with a Gaussian kernel. A positive order corresponds to convolution with that derivative of a Gaussian.
The array in which to place the output, or the dtype of the returned array. By default an array of the same dtype as input will be created.
The mode parameter determines how the input array is extended when the filter overlaps a border. By passing a sequence of modes with length equal to the number of dimensions of the input array, different modes can be specified along each axis. Default value is ‘reflect’. The valid values and their behavior is as follows:
The input is extended by reflecting about the edge of the last pixel. This mode is also sometimes referred to as half-sample symmetric.
The input is extended by filling all values beyond the edge with the same constant value, defined by the cval parameter.
The input is extended by replicating the last pixel.
The input is extended by reflecting about the center of the last pixel. This mode is also sometimes referred to as whole-sample symmetric.
The input is extended by wrapping around to the opposite edge.
For consistency with the interpolation functions, the following mode names can also be used:
This is a synonym for ‘constant’.
This is a synonym for ‘reflect’.
This is a synonym for ‘wrap’.
Value to fill past edges of input if mode is ‘constant’. Default is 0.0.
Truncate the filter at this many standard deviations. Default is 4.0.
Returned array of same shape as input.
Notes
The multidimensional filter is implemented as a sequence of 1D convolution filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision.
Examples
>>> import numpy as np
>>> from scipy.ndimage import gaussian_filter
>>> a = np.arange(50, step=2).reshape((5,5))
>>> a
array([[ 0, 2, 4, 6, 8],
[10, 12, 14, 16, 18],
[20, 22, 24, 26, 28],
[30, 32, 34, 36, 38],
[40, 42, 44, 46, 48]])
>>> gaussian_filter(a, sigma=1)
array([[ 4, 6, 8, 9, 11],
[10, 12, 14, 15, 17],
[20, 22, 24, 25, 27],
[29, 31, 33, 34, 36],
[35, 37, 39, 40, 42]])
>>> from scipy import misc
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> plt.gray() # show the filtered result in grayscale
>>> ax1 = fig.add_subplot(121) # left side
>>> ax2 = fig.add_subplot(122) # right side
>>> ascent = misc.ascent()
>>> result = gaussian_filter(ascent, sigma=5)
>>> ax1.imshow(ascent)
>>> ax2.imshow(result)
>>> plt.show()
BundleMinDistanceAsymmetricMetric
Bases: dipy.align.streamlinear.BundleMinDistanceMetric
Asymmetric Bundle-based Minimum distance.
This is a cost function that can be used by the StreamlineLinearRegistration class.
Methods

Distance calculated from this Metric. 

Setup static and moving sets of streamlines. 
An abstract class for the metric used for streamline registration.
If the two sets of streamlines match exactly then method distance
of this object should be minimum.
Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus num_threads + 1 is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.
BundleMinDistanceMatrixMetric
Bases: dipy.align.streamlinear.StreamlineDistanceMetric
Bundle-based Minimum Distance aka BMD
This is the cost function used by the StreamlineLinearRegistration.
Notes
The difference with BundleMinDistanceMetric is that this creates the entire distance matrix and therefore requires more memory.
Methods
setup(static, moving) 

distance(xopt) 
An abstract class for the metric used for streamline registration.
If the two sets of streamlines match exactly then method distance
of this object should be minimum.
Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus num_threads + 1 is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.
Distance calculated from this Metric.
List of affine parameters as an 1D vector
Setup static and moving sets of streamlines.
Fixed or reference set of streamlines.
Moving streamlines.
Notes
Call this after the object is initiated and before distance.
Num_threads is not used in this class. Use BundleMinDistanceMetric for a faster, threaded and less memory-hungry metric.
BundleMinDistanceMetric
Bases: dipy.align.streamlinear.StreamlineDistanceMetric
Bundle-based Minimum Distance aka BMD
This is the cost function used by the StreamlineLinearRegistration.
References
Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014.
Methods
setup(static, moving) 

distance(xopt) 
An abstract class for the metric used for streamline registration.
If the two sets of streamlines match exactly then method distance
of this object should be minimum.
Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus num_threads + 1 is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.
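A simplified NumPy sketch of a bundle-min-distance-style cost (DIPY's actual metric is implemented in Cython and differs in detail, e.g. it may use squared distances; `mdf` and `bundle_min_distance` are illustrative helpers, not DIPY's API):

```python
import numpy as np

def mdf(s1, s2):
    # Minimum average direct-flip distance between two streamlines
    # sampled at the same number of points.
    direct = np.mean(np.linalg.norm(s1 - s2, axis=1))
    flipped = np.mean(np.linalg.norm(s1 - s2[::-1], axis=1))
    return min(direct, flipped)

def bundle_min_distance(bundle_a, bundle_b):
    # Symmetric mean of row/column minima over the pairwise MDF matrix.
    d = np.array([[mdf(a, b) for b in bundle_b] for a in bundle_a])
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

The cost is zero when the two bundles coincide, and grows with the displacement between them, which is what the optimizer minimizes over the transform parameters.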
BundleSumDistanceMatrixMetric
Bases: dipy.align.streamlinear.BundleMinDistanceMatrixMetric
Bundle-based Sum Distance aka BMD
This is a cost function that can be used by the StreamlineLinearRegistration class.
Notes
The difference with BundleMinDistanceMatrixMetric is that it uses the sum of the distance matrix and not the sum of mins.
Methods
setup(static, moving) 

distance(xopt) 
An abstract class for the metric used for streamline registration.
If the two sets of streamlines match exactly then method distance
of this object should be minimum.
Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus num_threads + 1 is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.
Optimizer
Bases: object
Methods
print_summary 
A class for handling minimization of scalar function of one or more variables.
Objective function.
Initial guess.
Extra arguments passed to the objective function and its derivatives (Jacobian, Hessian).
Type of solver. Should be one of
‘Nelder-Mead’
‘Powell’
‘CG’
‘BFGS’
‘Newton-CG’
‘Anneal’
‘L-BFGS-B’
‘TNC’
‘COBYLA’
‘SLSQP’
‘dogleg’
‘trust-ncg’
Jacobian of objective function. Only for CG, BFGS, Newton-CG, dogleg, trust-ncg. If jac is a Boolean and is True, fun is assumed to return the value of Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of the objective. In this case, it must accept the same arguments as fun.
Hessian of objective function or Hessian of objective function times an arbitrary vector p. Only for Newton-CG, dogleg, trust-ncg. Only one of hessp or hess needs to be given. If hess is provided, then hessp will be ignored. If neither hess nor hessp is provided, then the hessian product will be approximated using finite differences on jac. hessp must compute the Hessian times an arbitrary vector.
Bounds for variables (only for L-BFGS-B, TNC and SLSQP). (min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction.
Constraints definition (only for COBYLA and SLSQP). Each constraint is defined in a dictionary with fields:
type : str
Constraint type: ‘eq’ for equality, ‘ineq’ for inequality.
fun : callable
The function defining the constraint.
jac : callable, optional
The Jacobian of fun (only for SLSQP).
args : sequence, optional
Extra arguments to be passed to the function and Jacobian.
Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be nonnegative. Note that COBYLA only supports inequality constraints.
Tolerance for termination. For detailed control, use solverspecific options.
Called after each iteration, as callback(xk), where xk is the current parameter vector. Only available using Scipy >= 0.12.
A dictionary of solver options. All methods accept the following generic options:
maxiter : int
Maximum number of iterations to perform.
disp : bool
Set to True to print convergence messages.
For methodspecific options, see show_options(‘minimize’, method).
save history of x for each iteration. Only available using Scipy >= 0.12.
See also
scipy.optimize.minimize
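A minimal usage sketch of the underlying scipy.optimize.minimize call, here with the L-BFGS-B method and bounds on the variables:

```python
import numpy as np
from scipy.optimize import minimize

# minimize a quadratic; the second variable is capped at 2 by its bound
fun = lambda x: np.sum((x - np.array([1.0, 2.5])) ** 2)
res = minimize(fun, x0=np.zeros(2), method='L-BFGS-B',
               bounds=[(-5, 5), (-5, 2)])
# res.x is approximately [1.0, 2.0]: the second coordinate stops
# at the upper bound instead of reaching the unconstrained optimum 2.5
```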
StreamlineDistanceMetric
Bases: object
Methods

Calculate distance for the current set of parameters. 
setup 
An abstract class for the metric used for streamline registration.
If the two sets of streamlines match exactly then method distance
of this object should be minimum.
Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus num_threads + 1 is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.
StreamlineLinearRegistration
Bases: object
Methods

Find the minimum of the provided metric. 
Linear registration of 2 sets of streamlines [Garyfallidis15].
If None and fast is False then the BMD distance is used. If fast is True then a faster implementation of BMD is used. Otherwise, use the given distance metric.
Initial parametrization for the optimization.
a) If 6 elements, only rigid registration is performed, with the first 3 elements for translation and the next 3 for rotation. b) If 7 elements, isotropic scaling is also performed (similarity). c) If 12 elements, then translation, rotation (in degrees), scaling and shearing are performed (affine).
Here is an example of x0 with 12 elements:
x0=np.array([0, 10, 0, 40, 0, 0, 2., 1.5, 1, 0.1, 0.5, 0])
This has translation (0, 10, 0), rotation (40, 0, 0) in degrees, scaling (2., 1.5, 1) and shearing (0.1, 0.5, 0).
x0 = np.array([0, 0, 0, 0, 0, 0])
x0 = np.array([0, 0, 0, 0, 0, 0, 1.])
x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])
‘L_BFGS_B’ or ‘Powell’ optimizers can be used. Default is ‘L_BFGS_B’.
If method == ‘L_BFGS_B’ then we can use bounded optimization. For example, for the six parameters of rigid registration we can set the bounds = [(-30, 30), (-30, 30), (-30, 30), (-45, 45), (-45, 45), (-45, 45)]. That means that we have set the bounds for the three translations and three rotation axes (in degrees).
If True, information about the optimization is shown. Default: False.
Extra options to be used with the selected method.
If True save the transformation for each iteration of the optimizer. Default is False. Supported only with Scipy >= 0.11.
Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus num_threads + 1 is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.
References
Garyfallidis et al. “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015
Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014.
Garyfallidis et al. Recognition of white matter bundles using local and global streamlinebased registration and clustering, Neuroimage, 2017.
Find the minimum of the provided metric.
Reference or fixed set of streamlines.
Moving set of streamlines.
Transformation (4, 4) matrix to start the registration. mat is applied to moving. Default value is None, which means that the initial transformation will be generated by shifting the centers of the moving and static sets of streamlines to the origin.
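The default initialization described above (shifting the streamline centers to the origin) can be sketched as follows (`center_shift_affine` is a hypothetical helper, not part of DIPY's API):

```python
import numpy as np

def center_shift_affine(streamlines):
    # 4x4 affine translating the bundle's center of mass to the origin
    points = np.concatenate([np.asarray(s) for s in streamlines])
    aff = np.eye(4)
    aff[:3, 3] = -points.mean(axis=0)
    return aff
```

Applying such an affine to both the static and moving bundles removes the gross translation between them before the optimizer refines the remaining parameters.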
StreamlineRegistrationMap
Bases: object
Methods

Transform moving streamlines to the static. 
A map holding the optimum affine matrix and some other parameters of the optimization
4x4 affine matrix which transforms the moving to the static streamlines
1d array with the parameters of the transformation after centering
final value of the metric
All transformation matrices created during the optimization
Number of function evaluations of the optimizer
Number of iterations of the optimizer
Streamlines
MDF-based pairwise distance optimization function (MIN).
We minimize the distance between moving streamlines as they align with the static streamlines.
t is a vector of affine transformation parameters with size at least 6. If size is 6, t is interpreted as translation + rotation. If size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.
Static streamlines
Moving streamlines.
MDF-based pairwise distance optimization function (MIN).
We minimize the distance between moving streamlines as they align with the static streamlines.
1D array. t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.
N*M x 3 array containing all the points of the static streamlines, with the order of streamlines intact, where N is the number of streamlines and M is the number of points per streamline.
K*M x 3 array containing all the points of the moving streamlines, with the order of streamlines intact, where K is the number of streamlines and M is the number of points per streamline.
Number of points per streamline. All streamlines in static and moving should have the same number of points M.
MDF-based pairwise distance optimization function (MIN).
We minimize the distance between moving streamlines as they align with the static streamlines.
1D array. t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.
N*M x 3 array containing all the points of the static streamlines, with the order of streamlines intact, where N is the number of streamlines and M is the number of points per streamline.
K*M x 3 array containing all the points of the moving streamlines, with the order of streamlines intact, where K is the number of streamlines and M is the number of points per streamline.
Number of points per streamline. All streamlines in static and moving should have the same number of points M.
Number of threads to be used for OpenMP parallelization. If None (default) the value of the OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0, the maximal number of threads minus num_threads + 1 is used (enter -1 to use as many threads as possible). 0 raises an error.
Notes
This is a faster implementation of bundle_min_distance, which requires that all the points of each streamline be allocated into an ndarray (of shape N*M by 3, with N the number of streamlines and M the number of points per streamline). This can be done by calling dipy.tracking.streamlines.unlist_streamlines.
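As a rough numpy sketch of what that flattened representation looks like (the helper name here is hypothetical; DIPY provides this via dipy.tracking.streamlines.unlist_streamlines):

```python
import numpy as np

def flatten_streamlines(streamlines):
    """Stack streamline points row-wise into one (N*M, 3) array,
    preserving the order of streamlines and of points within each."""
    return np.concatenate([np.asarray(s) for s in streamlines], axis=0)

# N = 5 streamlines, each resampled to M = 20 points
streamlines = [np.random.rand(20, 3) for _ in range(5)]
points = flatten_streamlines(streamlines)
print(points.shape)  # (100, 3)
```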
MDF distance optimization function (SUM).
We minimize the distance between moving streamlines as they align with the static streamlines.
t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.
Static streamlines
Moving streamlines. These will be transformed to align with the static streamlines
Number of threads. If -1 then all available threads will be used.
Return 4x4 transformation matrix from sequence of transformations.
Code modified from the work of Christoph Gohlke, available at http://www.lfd.uci.edu/~gohlke/code/transformations.py.html.
This is the inverse of the decompose_matrix function.
Scaling factors.
Shear factors for xy, xz, yz axes.
Euler angles about static x, y, z axes.
Translation vector along x, y, z axes.
Perspective partition of matrix.
Examples
>>> import math
>>> import numpy as np
>>> import dipy.core.geometry as gm
>>> scale = np.random.random(3) - 0.5
>>> shear = np.random.random(3) - 0.5
>>> angles = (np.random.random(3) - 0.5) * (2*math.pi)
>>> trans = np.random.random(3) - 0.5
>>> persp = np.random.random(4) - 0.5
>>> M0 = gm.compose_matrix(scale, shear, angles, trans, persp)
Compose a 4x4 transformation matrix.
This is a 1D vector of affine transformation parameters with size at least 3. If the size is 3, t is interpreted as translation. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If the size is 9, t is interpreted as translation + rotation + anisotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.
Homogeneous transformation matrix of size 4x4.
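A hedged numpy sketch of the 6-parameter (rigid) case, assuming rotations are composed about the static x, y, z axes in that order (DIPY's exact convention may differ; the function name is illustrative):

```python
import numpy as np

def rigid_matrix_from_params(t):
    """Build a 4x4 homogeneous matrix from a 6-parameter rigid vector
    t = (rot_x, rot_y, rot_z, t_x, t_y, t_z), angles in radians."""
    rx, ry, rz, tx, ty, tz = t
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # rotation block
    T[:3, 3] = [tx, ty, tz]    # translation column
    return T

M = rigid_matrix_from_params([0, 0, np.pi / 2, 1.0, 2.0, 3.0])
```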
Return sequence of transformations from transformation matrix.
Code modified from the excellent work of Christoph Gohlke, available at http://www.lfd.uci.edu/~gohlke/code/transformations.py.html.
Non-degenerate homogeneous transformation matrix
Three scaling factors.
Shear factors for xy, xz, yz axes.
Euler angles about static x, y, z axes.
Translation vector along x, y, z axes.
Perspective partition of matrix.
If matrix is of wrong type or degenerate.
Examples
>>> import numpy as np
>>> T0=np.diag([2,1,1,1])
>>> scale, shear, angles, trans, persp = decompose_matrix(T0)
Given a 4x4 homogeneous matrix return the parameter vector.
Homogeneous 4x4 transformation matrix
Size of the output vector: 3 for translation, 6 for rigid, 7 for similarity, 9 for scaling and 12 for affine. Default is 12.
One dimensional ndarray of 3, 6, 7, 9 or 12 affine parameters.
Minimum direct flipped distance matrix between two streamline sets
All streamlines need to have the same number of points
of streamlines as arrays, [(N, 3) .. (N, 3)]
of streamlines as arrays, [(N, 3) .. (N, 3)]
distance matrix
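A minimal numpy sketch of this minimum direct-flipped (MDF) distance matrix, assuming all streamlines share the same number of points (the function name is illustrative, not DIPY's API):

```python
import numpy as np

def mdf_distance_matrix(bundle_a, bundle_b):
    """For each pair, take the smaller of the mean point-wise distance
    in direct order and in flipped order (MDF distance)."""
    D = np.zeros((len(bundle_a), len(bundle_b)))
    for i, sa in enumerate(bundle_a):
        for j, sb in enumerate(bundle_b):
            direct = np.mean(np.linalg.norm(sa - sb, axis=1))
            flipped = np.mean(np.linalg.norm(sa - sb[::-1], axis=1))
            D[i, j] = min(direct, flipped)
    return D
```

The flip handles the fact that a streamline and its reversal describe the same curve.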
Euclidean length of streamlines
Length is in mm only if streamlines are expressed in world coordinates.
dipy.tracking.Streamlines
If ndarray, must have shape (N,3) where N is the number of points of the streamline.
If list, each item must be an ndarray of shape (Ni,3) where Ni is the number of points of streamline i.
If dipy.tracking.Streamlines, its common_shape must be 3.
If there is only one streamline, a scalar representing the length of the streamline. If there are several streamlines, ndarray containing the length of every streamline.
Examples
>>> from dipy.tracking.streamline import length
>>> import numpy as np
>>> streamline = np.array([[1, 1, 1], [2, 3, 4], [0, 0, 0]])
>>> expected_length = np.sqrt([1+2**2+3**2, 2**2+3**2+4**2]).sum()
>>> length(streamline) == expected_length
True
>>> streamlines = [streamline, np.vstack([streamline, streamline[::-1]])]
>>> expected_lengths = [expected_length, 2*expected_length]
>>> lengths = [length(streamlines[0]), length(streamlines[1])]
>>> np.allclose(lengths, expected_lengths)
True
>>> length([])
0.0
>>> length(np.array([[1, 2, 3]]))
0.0
Progressive SLR.
This is a utility function that allows, for example, doing affine registration using Streamline-based Linear Registration (SLR) [Garyfallidis15] by starting with translation first, then rigid, then similarity, scaling and finally affine.
Similarly, if you want to perform a rigid registration, you start with translation first. This progressive strategy can help with finding the optimal parameters of the final transformation.
Could be any of ‘translation’, ‘rigid’, ‘similarity’, ‘scaling’, ‘affine’
Boundaries of registration parameters. See variable DEFAULT_BOUNDS for example.
‘L-BFGS-B’ or ‘Powell’ optimizers can be used. Default is ‘L-BFGS-B’.
If True, log messages. Default: False.
Number of threads to be used for OpenMP parallelization. If None (default) the value of the OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0, the maximal number of threads minus num_threads + 1 is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.
References
Garyfallidis et al. “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015.
Run QuickBundlesX and then run again on the centroids of the last layer
Running QuickBundles again at a given layer has the effect of merging some of the clusters that may originally have been divided because of branching. This function helps obtain a result of QuickBundles quality but at QuickBundlesX speed. The merging phase has low cost because it is applied only to the centroids rather than the entire dataset.
List of distance thresholds for QuickBundlesX.
Number of points for discretizing each streamline
Randomly select a specific number of streamlines. If None all the streamlines are used.
If None then RandomState is initialized internally.
If True, log information. Default False.
Contains the clusters of the last layer of QuickBundlesX after merging.
References
Garyfallidis E. et al., “QuickBundles, a method for tractography simplification”, Frontiers in Neuroscience, vol 6, no 175, 2012.
Garyfallidis E. et al. QuickBundlesX: Sequential clustering of millions of streamlines in multiple levels of detail at record execution time. Proceedings of the, International Society of Magnetic Resonance in Medicine (ISMRM). Singapore, 4187, 2016.
Select a random set of streamlines
Object of 2D ndarrays of shape[1]==3
Number of streamlines to select. If there are fewer streamlines than select, then select = len(streamlines). Default None.
Notes
The same streamline will not be selected twice.
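A sketch of this selection using numpy's Generator.choice without replacement, so no streamline is chosen twice (the function name is illustrative, not DIPY's API):

```python
import numpy as np

def select_random_streamlines(streamlines, select, rng=None):
    """Pick `select` distinct streamlines at random, clamping to the
    available count as described above."""
    rng = np.random.default_rng() if rng is None else rng
    select = min(select, len(streamlines))
    idx = rng.choice(len(streamlines), size=select, replace=False)
    return [streamlines[i] for i in idx]
```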
Change the number of points of streamlines (either by downsampling or upsampling) in order to obtain nb_points-1 segments of equal length. Points of streamlines will be modified along the curve.
dipy.tracking.Streamlines
If ndarray, must have shape (N,3) where N is the number of points of the streamline.
If list, each item must be an ndarray of shape (Ni,3) where Ni is the number of points of streamline i.
If dipy.tracking.Streamlines, its common_shape must be 3.
integer representing number of points wanted along the curve.
dipy.tracking.Streamlines
Results of the downsampling or upsampling process.
Examples
>>> from dipy.tracking.streamline import set_number_of_points
>>> import numpy as np
One streamline, a semicircle:
>>> theta = np.pi*np.linspace(0, 1, 100)
>>> x = np.cos(theta)
>>> y = np.sin(theta)
>>> z = 0 * x
>>> streamline = np.vstack((x, y, z)).T
>>> modified_streamline = set_number_of_points(streamline, 3)
>>> len(modified_streamline)
3
Multiple streamlines:
>>> streamlines = [streamline, streamline[::2]]
>>> new_streamlines = set_number_of_points(streamlines, 10)
>>> [len(s) for s in streamlines]
[100, 50]
>>> [len(s) for s in new_streamlines]
[10, 10]
Utility function for registering large tractograms.
For efficiency, we apply the registration on cluster centroids and remove small clusters.
rigid, similarity or affine transformation model (default affine)
Remove clusters that have less than rm_small_clusters (default 50)
If not None, select a random number of streamlines to apply clustering to. Default None.
If True, logs information about optimization. Default: False
Keep streamlines that have length greater than this value (default 50)
Keep streamlines that have length less than this value (default 250)
Thresholds for QuickBundlesX (default [40, 30, 20, 15])
Number of points for discretizing each streamline (default 20)
(default True)
If None creates RandomState in function.
Number of threads to be used for OpenMP parallelization. If None (default) the value of the OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0, the maximal number of threads minus num_threads + 1 is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.
Notes
The order of operations is the following. First, short or long streamlines are removed. Second, the tractogram (or a random selection of the tractogram) is clustered with QuickBundles. Then SLR [Garyfallidis15] is applied.
References
Garyfallidis et al. “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015.
Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014.
Garyfallidis et al., “Recognition of white matter bundles using local and global streamline-based registration and clustering”, NeuroImage, 2017.
Apply affine transformation to streamlines
Streamlines object
transformation matrix
If True then change data in place. Be careful changes input streamlines.
Sequence transformed 2D ndarrays of shape[1]==3
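A pure-numpy analogue of applying a 4x4 affine to every point of every streamline (returning new arrays rather than optionally modifying in place; the function name is illustrative):

```python
import numpy as np

def apply_affine_to_streamlines(streamlines, mat):
    """Map each point p to R @ p + t, with R and t taken from the
    4x4 homogeneous matrix mat."""
    R, t = mat[:3, :3], mat[:3, 3]
    return [np.asarray(s) @ R.T + t for s in streamlines]

mat = np.eye(4)
mat[:3, 3] = [10.0, 0.0, 0.0]  # pure translation along x
moved = apply_affine_to_streamlines([np.zeros((5, 3))], mat)
```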
Utility function for registering large tractograms.
For efficiency, we apply the registration on cluster centroids and remove small clusters.
rigid, similarity or affine transformation model (default affine)
Remove clusters that have less than rm_small_clusters (default 50)
If not None, select a random number of streamlines to apply clustering to. Default None.
If True, logs information about optimization. Default: False
Keep streamlines that have length greater than this value (default 50)
Keep streamlines that have length less than this value (default 250)
Thresholds for QuickBundlesX (default [40, 30, 20, 15])
Number of points for discretizing each streamline (default 20)
(default True)
If None creates RandomState in function.
Number of threads to be used for OpenMP parallelization. If None (default) the value of the OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0, the maximal number of threads minus num_threads + 1 is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.
Notes
The order of operations is the following. First, short or long streamlines are removed. Second, the tractogram (or a random selection of the tractogram) is clustered with QuickBundles. Then SLR [Garyfallidis15] is applied.
References
Garyfallidis et al. “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015.
Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014.
Garyfallidis et al., “Recognition of white matter bundles using local and global streamline-based registration and clustering”, NeuroImage, 2017.
Sum of squared differences between two 2D images
Computes the Sum of Squared Differences between the static and moving image. Those differences are given by delta_field
the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)
the SSD energy at this iteration
Notes
The numeric value of the energy is used only to detect convergence. This function returns only the energy corresponding to the data term (excluding the energy corresponding to the regularization term) because the Greedy-SyN algorithm is an unconstrained gradient descent algorithm in the space of diffeomorphisms: in each iteration it makes a step along the negative smoothed gradient of the data term and then makes sure the resulting diffeomorphisms are invertible using an explicit inversion algorithm. Since it is not clear how to reflect the energy corresponding to this reprojection into the space of diffeomorphisms, a more precise energy computation including the regularization term is useless. Instead, convergence is checked considering the data-term energy only and detecting oscillations in the energy profile.
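The data-term energy itself is just the sum of squared residuals of delta_field; a one-line numpy sketch:

```python
import numpy as np

def ssd_energy(delta_field):
    """SSD data-term energy: sum over all pixels of the squared
    difference between static and (warped) moving image."""
    return float(np.sum(delta_field ** 2))
```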
Sum of squared differences between two 3D volumes
Computes the Sum of Squared Differences between the static and moving volume. Those differences are given by delta_field
the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)
the SSD energy at this iteration
Notes
The numeric value of the energy is used only to detect convergence. This function returns only the energy corresponding to the data term (excluding the energy corresponding to the regularization term) because the Greedy-SyN algorithm is an unconstrained gradient descent algorithm in the space of diffeomorphisms: in each iteration it makes a step along the negative smoothed gradient of the data term and then makes sure the resulting diffeomorphisms are invertible using an explicit inversion algorithm. Since it is not clear how to reflect the energy corresponding to this reprojection into the space of diffeomorphisms, a more precise energy computation including the regularization term is useless. Instead, convergence is checked considering the data-term energy only and detecting oscillations in the energy profile.
The residual displacement field to be fit on the next iteration
Computes the residual displacement field corresponding to the current displacement field in the multi-resolution Gauss-Seidel solver proposed by Bruhn and Weickert [Bruhn05].
the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)
the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.
the gradient of the moving image
right-hand side of the linear system to be solved in Weickert’s multi-resolution algorithm
smoothness parameter in the objective function
the current displacement field to compute the residual from
the displacement field to put the residual to
the residual displacement field. If residual was None on input, then a new field is returned; otherwise the same array is returned
References
Bruhn, A., & Weickert, J., “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.
The residual displacement field to be fit on the next iteration
Computes the residual displacement field corresponding to the current displacement field (given by disp) in the multi-resolution Gauss-Seidel solver proposed by Bruhn and Weickert [Bruhn05].
the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)
the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.
the gradient of the moving image
right-hand side of the linear system to be solved in Weickert’s multi-resolution algorithm
smoothness parameter in the objective function
the current displacement field to compute the residual from
the displacement field to put the residual to
the residual displacement field. If residual was None on input, then a new field is returned; otherwise the same array is returned
References
Bruhn, A., & Weickert, J., “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.
Demons step for 2D SSD-driven registration
Computes the demons step for SSD-driven registration (eq. 4 in [Bruhn05]).
the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)
the gradient of the moving image
parameter controlling the amount of regularization. It corresponds to \(\sigma_x^2\) in algorithm 1 of Vercauteren et al.[Vercauteren09]
if None, a new array will be created to store the demons step. Otherwise the provided array will be used.
the demons step to be applied for updating the current displacement field
the current ssd energy (before applying the returned demons_step)
References
Bruhn, A., & Weickert, J., “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.
Vercauteren, T., Pennec, X., Perchant, A., & Ayache, N. (2009). Diffeomorphic demons: efficient non-parametric image registration. NeuroImage, 45(1 Suppl), S61–72. doi:10.1016/j.neuroimage.2008.10.040
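A hedged numpy sketch of the per-pixel demons update in the form step = delta * grad / (||grad||^2 + delta^2 / sigma_sq_x), with a guard for vanishing denominators. Sign and scaling conventions here are illustrative, not necessarily DIPY's exact ones:

```python
import numpy as np

def demons_step_2d(delta_field, gradient_moving, sigma_sq_x):
    """delta_field: (R, C) image residual; gradient_moving: (R, C, 2)
    gradient of the moving image. Returns an (R, C, 2) step field."""
    sq_norm = np.sum(gradient_moving ** 2, axis=-1)        # ||grad||^2, (R, C)
    denom = sq_norm + delta_field ** 2 / sigma_sq_x
    safe = np.where(denom > 0, denom, 1.0)                 # avoid div-by-zero
    scale = np.where(denom > 0, delta_field / safe, 0.0)
    return scale[..., None] * gradient_moving              # (R, C, 2)
```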
Demons step for 3D SSD-driven registration
Computes the demons step for SSD-driven registration (eq. 4 in [Bruhn05]).
the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)
the gradient of the moving image
parameter controlling the amount of regularization. It corresponds to \(\sigma_x^2\) in algorithm 1 of Vercauteren et al.[Vercauteren09]
if None, a new array will be created to store the demons step. Otherwise the provided array will be used.
the demons step to be applied for updating the current displacement field
the current ssd energy (before applying the returned demons_step)
References
Bruhn, A., & Weickert, J., “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.
Vercauteren, T., Pennec, X., Perchant, A., & Ayache, N. (2009). Diffeomorphic demons: efficient non-parametric image registration. NeuroImage, 45(1 Suppl), S61–72. doi:10.1016/j.neuroimage.2008.10.040
One iteration of a large linear system solver for 2D SSD registration
Performs one iteration at one level of the multi-resolution Gauss-Seidel solver proposed by Bruhn and Weickert [Bruhn05].
the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)
the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.
the gradient of the moving image
right-hand side of the linear system to be solved in Weickert’s multi-resolution algorithm
smoothness parameter of the objective function
current displacement field to start the iteration from
the norm of the maximum change in the displacement field after the iteration
References
Bruhn, A., & Weickert, J., “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.
One iteration of a large linear system solver for 3D SSD registration
Performs one iteration at one level of the multi-resolution Gauss-Seidel solver proposed by Bruhn and Weickert [Bruhn05].
the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)
the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.
the gradient of the moving image
right-hand side of the linear system to be solved in Weickert’s multi-resolution algorithm
smoothness parameter of the objective function
the displacement field to start the optimization from
the norm of the maximum change in the displacement field after the iteration
References
Bruhn, A., & Weickert, J., “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.
Solves a 2-variable symmetric positive-definite linear system
Solves the symmetric positivedefinite linear system \(Mx = y\) given by:
M = [[A[0], A[1]],
[A[1], A[2]]]
the array containing the entries of the symmetric 2x2 matrix
right-hand side of the system to be solved
the array the output will be stored in
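By Cramer's rule, with A packing the three distinct entries of the symmetric matrix M as documented above, this 2x2 system can be solved directly; a numpy sketch (returning a new array rather than writing into an output buffer):

```python
import numpy as np

def solve_2d_spd(A, y):
    """Solve [[A[0], A[1]], [A[1], A[2]]] @ x = y by Cramer's rule.
    Assumes the matrix is positive definite (nonzero determinant)."""
    det = A[0] * A[2] - A[1] * A[1]
    x0 = (A[2] * y[0] - A[1] * y[1]) / det
    x1 = (A[0] * y[1] - A[1] * y[0]) / det
    return np.array([x0, x1])
```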
Solves a 3-variable symmetric positive-definite linear system
Solves the symmetric positive semi-definite linear system \(Mx = y\) given by \(M = (g g^{T} + \tau I)\).
the vector in the outer product above
right-hand side of the system to be solved
\(\tau\) in \(M = (g g^{T} + \tau I)\)
the array the output will be stored in
1 if M is singular, otherwise 0
AffineTransform2D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
AffineTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
RigidIsoScalingTransform2D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
RigidIsoScalingTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
Rigid iso-scaling transform in 3D (rotation + translation + isotropic scaling).
The parameter vector theta of length 7 is interpreted as follows:
theta[0] : rotation about the x axis
theta[1] : rotation about the y axis
theta[2] : rotation about the z axis
theta[3] : translation along the x axis
theta[4] : translation along the y axis
theta[5] : translation along the z axis
theta[6] : isotropic scaling
RigidScalingTransform2D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
Rigid scaling transform in 2D (rotation + translation + scaling).
The parameter vector theta of length 5 is interpreted as follows:
theta[0] : rotation angle
theta[1] : translation along the x axis
theta[2] : translation along the y axis
theta[3] : scaling along the x axis
theta[4] : scaling along the y axis
RigidScalingTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
Rigid scaling transform in 3D (rotation + translation + scaling).
The parameter vector theta of length 9 is interpreted as follows:
theta[0] : rotation about the x axis
theta[1] : rotation about the y axis
theta[2] : rotation about the z axis
theta[3] : translation along the x axis
theta[4] : translation along the y axis
theta[5] : translation along the z axis
theta[6] : scaling along the x axis
theta[7] : scaling along the y axis
theta[8] : scaling along the z axis
RigidTransform2D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
RigidTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
Rigid transform in 3D (rotation + translation).
The parameter vector theta of length 6 is interpreted as follows:
theta[0] : rotation about the x axis
theta[1] : rotation about the y axis
theta[2] : rotation about the z axis
theta[3] : translation along the x axis
theta[4] : translation along the y axis
theta[5] : translation along the z axis
RotationTransform2D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
RotationTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
ScalingTransform2D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
ScalingTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
Transform
Bases: object
Base class (contract) for all transforms for affine image registration. Each transform must define the following (fast, nogil) methods:
_jacobian(theta, x, J): receives a parameter vector theta, a point x, and a matrix J with shape (dim, len(theta)). It must write in J the Jacobian of the transform with parameters theta evaluated at x.
_get_identity_parameters(theta): receives a vector theta whose length is the number of parameters of the transform and sets in theta the values that define the identity transform.
_param_to_matrix(theta, T): receives a parameter vector theta and a matrix T of shape (dim + 1, dim + 1), and writes in T the matrix representation of the transform with parameters theta.
This base class defines the (slow, convenient) python wrappers for each of the above functions, which also do parameter checking and raise a ValueError in case the provided parameters are invalid.
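A pure-Python analogue of this contract (hypothetical, not the actual Cython base class), implementing a 2D translation transform with the three methods described above:

```python
import numpy as np

class TranslationTransform2DSketch:
    """Illustrative 2D translation transform: maps x to x + theta."""
    dim = 2
    n_params = 2

    def get_identity_parameters(self):
        # Zero translation is the identity.
        return np.zeros(self.n_params)

    def jacobian(self, theta, x):
        # d(x + theta)/d(theta) is the identity matrix, independent of x.
        return np.eye(self.dim)

    def param_to_matrix(self, theta):
        # Homogeneous (dim + 1, dim + 1) matrix with theta as translation.
        T = np.eye(self.dim + 1)
        T[:self.dim, self.dim] = theta
        return T
```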
Methods
Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
Parameter values corresponding to the identity transform
the n parameter values corresponding to the identity transform
Jacobian function of this transform
vector containing the n parameters of this transform
vector containing the point where the Jacobian must be evaluated
Jacobian matrix of the transform with parameters theta at point x
TranslationTransform2D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
TranslationTransform3D
Bases: dipy.align.transforms.Transform
Methods

Parameter values corresponding to the identity transform 

Jacobian function of this transform 

Matrix representation of this transform with the given parameters 
get_dim 

get_number_of_parameters 
Computes the composition of two 2D displacement fields
Computes the composition of the two 2D displacements d1 and d2. The evaluation of d2 at non-lattice points is computed using bilinear interpolation. The actual composition is computed as:
comp[i] = d1[i] + t * d2[ A * i + B * d1[i] ]
where t = time_scaling, A = premult_index and B = premult_disp, and i denotes the voxel coordinates of a voxel in d1’s grid. Using these parameters it is possible to compose vector fields with arbitrary discretizations: let R and S be the voxel-to-space transformations associated with d1 and d2, respectively; then the composition at a voxel with coordinates i in d1’s grid is given by:
comp[i] = d1[i] + R*i + d2[Sinv*(R*i + d1[i])] - R*i
(the R*i terms cancel each other) where Sinv = S^{-1}; we can then define A = Sinv * R and B = Sinv to compute the composition using this function.
first displacement field to be applied. R, C are the number of rows and columns of the displacement field, respectively.
second displacement field to be applied. R’, C’ are the number of rows and columns of the displacement field, respectively.
the matrix A in the explanation above
the matrix B in the explanation above
this corresponds to the time scaling ‘t’ in the above explanation
the buffer to write the composition to. If None, the buffer is created internally
on output, this array will contain the composition of the two fields
on output, this array will contain three statistics of the vector norms of the composition (maximum, mean, standard_deviation)
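A hedged numpy sketch of the same-grid special case (R = S = identity, so A and B are both the identity), evaluating d2 at the non-lattice points i + d1[i] with bilinear interpolation and returning a zero vector outside d2's domain:

```python
import numpy as np

def bilinear(d, pr, pc):
    """Bilinearly interpolate an (R, C, 2) field at point (pr, pc);
    zero vector outside the domain."""
    R, C = d.shape[:2]
    if pr < 0 or pc < 0 or pr > R - 1 or pc > C - 1:
        return np.zeros(2)
    r0, c0 = int(np.floor(pr)), int(np.floor(pc))
    r1, c1 = min(r0 + 1, R - 1), min(c0 + 1, C - 1)
    fr, fc = pr - r0, pc - c0
    top = (1 - fc) * d[r0, c0] + fc * d[r0, c1]
    bot = (1 - fc) * d[r1, c0] + fc * d[r1, c1]
    return (1 - fr) * top + fr * bot

def compose_fields_2d(d1, d2, time_scaling=1.0):
    """comp[i] = d1[i] + t * d2[i + d1[i]] for fields on the same grid."""
    comp = np.zeros_like(d1)
    R, C = d1.shape[:2]
    for r in range(R):
        for c in range(C):
            pr, pc = r + d1[r, c, 0], c + d1[r, c, 1]
            comp[r, c] = d1[r, c] + time_scaling * bilinear(d2, pr, pc)
    return comp
```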
Computes the composition of two 3D displacement fields
Computes the composition of the two 3D displacements d1 and d2. The evaluation of d2 at nonlattice points is computed using trilinear interpolation. The actual composition is computed as:
comp[i] = d1[i] + t * d2[ A * i + B * d1[i] ]
where t = time_scaling, A = premult_index and B=premult_disp and i denotes the voxel coordinates of a voxel in d1’s grid. Using this parameters it is possible to compose vector fields with arbitrary discretization: let R and S be the voxeltospace transformation associated to d1 and d2, respectively then the composition at a voxel with coordinates i in d1’s grid is given by:
comp[i] = d1[i] + R*i + d2[Sinv*(R*i + d1[i])]  R*i
(the R*i terms cancel each other), where Sinv = S^{-1}. We can then define A = Sinv * R and B = Sinv to compute the composition using this function.
first displacement field to be applied. S, R, C are the number of slices, rows and columns of the displacement field, respectively.
second displacement field to be applied. S’, R’, C’ are the number of slices, rows and columns of the displacement field, respectively.
the matrix A in the explanation above
the matrix B in the explanation above
this corresponds to the time scaling ‘t’ in the above explanation
the buffer to write the composition to. If None, the buffer will be created internally
on output, this array will contain the composition of the two fields
on output, this array will contain three statistics of the vector norms of the composition (maximum, mean, standard_deviation)
Notes
If d1[s,r,c] lies outside the domain of d2, then comp[s,r,c] will contain a zero vector.
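A 3D sketch along the same lines (same assumptions as above: homogeneous affines, linear part of B applied to displacements), which also reproduces the behavior described in the Notes by writing a zero vector wherever the mapped point falls outside d2's domain:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_3d(d1, d2, A, B, t=1.0):
    """Sketch of comp[i] = d1[i] + t * d2[A * i + B * d1[i]] with
    trilinear interpolation of d2 at non-lattice points."""
    S, R, C = d1.shape[:3]
    i = np.stack(np.meshgrid(np.arange(S), np.arange(R), np.arange(C),
                             indexing="ij"), axis=-1).astype(float)
    # A * i (full affine) + B * d1[i] (linear part only)
    p = i @ A[:3, :3].T + A[:3, 3] + d1 @ B[:3, :3].T
    vals = np.stack([map_coordinates(d2[..., k],
                                     [p[..., 0], p[..., 1], p[..., 2]],
                                     order=1, mode="constant")
                     for k in range(3)], axis=-1)
    comp = d1 + t * vals
    # per the Notes: comp is a zero vector where p lies outside d2's domain
    inside = np.all((p >= 0) & (p <= np.array(d2.shape[:3]) - 1), axis=-1)
    comp[~inside] = 0.0
    return comp
```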
Create a binary 2D image where pixel values are 1 iff their distance to the center of the image is less than or equal to radius.
number of rows of the resulting image
number of columns of the resulting image
the radius of the circle
the binary image of the circle with the requested dimensions
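A minimal NumPy sketch of such a binary circle image (the exact centering convention here, the geometric center of the pixel grid, is an assumption, not necessarily the library's):

```python
import numpy as np

def create_circle(nrows, ncols, radius):
    # pixel value is 1 iff its distance to the image center is <= radius
    r, c = np.indices((nrows, ncols), dtype=float)
    center_r, center_c = (nrows - 1) / 2.0, (ncols - 1) / 2.0
    dist2 = (r - center_r) ** 2 + (c - center_c) ** 2
    return (dist2 <= radius ** 2).astype(np.uint8)
```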
Creates an invertible 2D displacement field
Creates the invertible displacement fields used in Chen et al. eqs. 9 and 10 [1]
number of rows in the resulting harmonic field
number of columns in the resulting harmonic field
parameters of the harmonic field (as in [1]). To understand the effect of these parameters, please consider plotting a deformed image (a circle or a grid) under the deformation field, or see examples in [1]
the harmonic displacement field
the analytical inverse of the harmonic displacement field
[1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008). A simple fixed-point approach to invert a deformation field. Medical Physics, 35(1), 81. doi:10.1118/1.2816107
Creates an invertible 3D displacement field
Creates the invertible displacement fields used in Chen et al. eqs. 9 and 10 [1], computing the angle theta along z-slices.
number of slices in the resulting harmonic field
number of rows in the resulting harmonic field
number of columns in the resulting harmonic field
parameters of the harmonic field (as in [1]). To understand the effect of these parameters, please consider plotting a deformed image (e.g. a circle or a grid) under the deformation field, or see examples in [1]
the harmonic displacement field
the analytical inverse of the harmonic displacement field
[1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008). A simple fixed-point approach to invert a deformation field. Medical Physics, 35(1), 81. doi:10.1118/1.2816107
Creates a random 2D displacement ‘exactly’ mapping points of two grids
Creates a random 2D displacement field mapping points of an input discrete domain (with dimensions given by from_shape) to points of an output discrete domain (with shape given by to_shape). The affine matrices bringing discrete coordinates to physical space are given by from_grid2world (for the displacement field discretization) and to_grid2world (for the target discretization). Since this function is intended to be used for testing, voxels in the input domain will never be assigned to boundary voxels on the output domain.
the shape of the grid on which the displacement field will be defined.
the grid-to-space transformation of the displacement field
the shape of the grid that the deformation field will map the input grid to.
the grid-to-space transformation of the mapped grid
the random displacement field in the physical domain
the assignment of each point in the input grid to the target grid
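The construction described above can be sketched as follows: pick a random interior (non-boundary) voxel of the target grid for every input voxel, then take the displacement as the difference of their physical positions under the two affines. Function name and the `rng` parameter are illustrative, not the library's signature:

```python
import numpy as np

def random_displacement_2d(from_shape, from_g2w, to_shape, to_g2w, rng=None):
    # Assign each input voxel a random non-boundary voxel of the target
    # grid; the displacement is the physical-space difference.
    rng = np.random.default_rng() if rng is None else rng
    R, C = from_shape
    assignment = np.stack(
        [rng.integers(1, to_shape[0] - 1, size=(R, C)),   # excludes 0 and n-1
         rng.integers(1, to_shape[1] - 1, size=(R, C))], axis=-1)
    i = np.stack(np.meshgrid(np.arange(R), np.arange(C), indexing="ij"),
                 axis=-1)
    src = i @ from_g2w[:2, :2].T + from_g2w[:2, 2]        # input voxels, physical
    dst = assignment @ to_g2w[:2, :2].T + to_g2w[:2, 2]   # targets, physical
    return dst - src, assignment
```

By construction the field maps input voxels 'exactly' onto the assigned target voxels, which is what makes it useful for testing interpolation and inversion code.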
Creates a random 3D displacement ‘exactly’ mapping points of two grids
Creates a random 3D displacement field mapping points of an input discrete domain (with dimensions given by from_shape) to points of an output discrete domain (with shape given by to_shape). The affine matrices bringing discrete coordinates to physical space are given by from_grid2world (for the displacement field discretization) and to_grid2world (for the target discretization). Since this function is intended to be used for testing, voxels in the input domain will never be assigned to boundary voxels on the output domain.
the shape of the grid on which the displacement field will be defined.
the grid-to-space transformation of the displacement field
the shape of the grid that the deformation field will map the input grid to.
the grid-to-space transformation of the mapped grid
the random displacement field in the physical domain
the assignment of each point in the input grid to the target grid
Create a binary 3D image where voxel values are 1 iff their distance to the center of the image is less than or equal to radius.
number of slices of the resulting image
number of rows of the resulting image
number of columns of the resulting image
the radius of the sphere
the binary image of the sphere with the requested dimensions
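The 3D analogue of the circle sketch above, again assuming the center is the geometric center of the voxel grid:

```python
import numpy as np

def create_sphere(nslices, nrows, ncols, radius):
    # voxel value is 1 iff its distance to the volume center is <= radius
    s, r, c = np.indices((nslices, nrows, ncols), dtype=float)
    cs, cr, cc = [(n - 1) / 2.0 for n in (nslices, nrows, ncols)]
    dist2 = (s - cs) ** 2 + (r - cr) ** 2 + (c - cc) ** 2
    return (dist2 <= radius ** 2).astype(np.uint8)
```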
Downsamples the 2D input vector field by a factor of 2
Downsamples the input vector field by a factor of 2. The value at each pixel of the resulting field is the average of its surrounding pixels in the original field.
the vector field to be downsampled