core
core.geometry
core.gradients
core.graph
core.histeq
core.ndindex
core.onetime
core.optimize
core.profile
core.rng
core.sphere
core.sphere_stats
core.subdivide_octahedron
core.wavelet
GradientTable
Graph
ResetMixin
OneTimeProperty
Optimizer
SKLearnLinearSolver
NonNegativeLeastSquares
PositiveDefiniteLeastSquares
Profiler
Sphere
HemiSphere
core
Core objects
core.geometry
Utility functions for algebra etc


Spherical to Cartesian coordinates 

Return angles for Cartesian 3D coordinates x, y, and z
See doc for sphere2cart for angle conventions. 

Convert spherical coordinates to latitude and longitude. 

Return vector divided by its Euclidean (L2) norm 

Return vector Euclidean (L2) norm 

Rodrigues formula 
Least squares positive semidefinite tensor estimation 


Distance across sphere surface between pts1 and pts2 

Cartesian distance between pts1 and pts2 

Cosine of angle between two (sets of) vectors 

Lambert Equal Area Projection from polar sphere to plane. Return positions in the (y1,y2) plane corresponding to the points with polar coordinates (theta, phi) on the unit sphere, under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161). 

Lambert Equal Area Projection from Cartesian vector to plane. Return positions in the \((y_1,y_2)\) plane corresponding to the directions of the vectors with Cartesian coordinates xyz under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161). 

Return homogeneous rotation matrix from Euler angles and axis sequence. 

Return 4x4 transformation matrix from sequence of transformations. 

Return sequence of transformations from transformation matrix. 

a, b and c are 3-dimensional vectors which are the vertices of a triangle. 

Rotation matrix from 2 unit vectors 

Compose multiple 4x4 affine transformations in one 4x4 matrix 

Computes n evenly spaced perpendicular directions relative to a given vector v 

Calculate the maximal distance from the center to a corner of a voxel, given an affine 

Test whether all points on a unit sphere lie in the same hemisphere. 
core.gradients

Diffusion gradient information 


This function gives the unique rounded b-values of the data. dipy.core.gradients.unique_bvals is deprecated. Please use dipy.core.gradients.unique_bvals_magnitude instead (deprecated from version 1.2; raises <class 'dipy.utils.deprecator.ExpiredDeprecationError'> as of version 1.4). 

Creates a GradientTable from a bvals array and a bvecs array 

A general function for creating diffusion MR gradients. 
A general function for creating diffusion MR gradients. 


A general function for creating diffusion MR gradients. 

Reorient the directions in a GradientTable. 

Generates N bvectors. 

This function rounds the b-values to a given order of magnitude. 

Gives the unique bvalues of the data, within a tolerance gap 

Get indices where the bvalue is bval 

This function gives the unique rounded b-values of the data. 

Check if you have enough different b-values in your gradient table. 

Compute trace, anisotropy and asymmetry parameters from btensors. 

Compute btensor from trace, anisotropy and asymmetry parameters. 

Calculate the mapping needed to get from orn1 to orn2. 

Change the orientation of gradients or other vectors. 



Return an array representation of an ornt string. 

Return a string representation of a 3d ornt. 
core.graph
A simple graph class

A simple graph class 
core.histeq

Performs a histogram equalization on the input image. 
core.ndindex

An N-dimensional iterator object to index arrays. 
core.onetime
Descriptor support for NIPY.
Copyright (c) 2006-2011, NIPY Developers. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
 Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
 Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
 Neither the name of the NIPY Developers nor the names of any
contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Utilities to support special Python descriptors [1,2], in particular the use of a useful pattern for properties we call ‘one time properties’. These are object attributes which are declared as properties, but become regular attributes once they’ve been read the first time. They can thus be evaluated later in the object’s life cycle, but once evaluated they become normal, static attributes with no function call overhead on access or any other constraints.
A special ResetMixin class is provided to add a .reset() method to users who may want to have their objects capable of resetting these computed properties to their ‘untriggered’ state.
[1] HowTo Guide for Descriptors, Raymond Hettinger. http://users.rcn.com/python/download/Descriptor.htm
[2] Python data model, http://docs.python.org/reference/datamodel.html
A Mixin class to add a .reset() method to users of OneTimeProperty. 


A descriptor to make special properties that become normal attributes. 

Decorator to create OneTimeProperty attributes. 
core.optimize
A unified interface for performing and debugging optimization problems.



Provide a sklearn-like uniform interface to algorithms that solve problems of the form \(y = Ax\) for \(x\). Subclasses of SKLearnLinearSolver should provide a 'fit' method with the signature SKLearnLinearSolver.fit(X, y), which sets an attribute SKLearnLinearSolver.coef_, with shape (X.shape[1],), such that an estimate of y can be calculated as: y_hat = np.dot(X, SKLearnLinearSolver.coef_.T) 
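As an illustration of this interface pattern, here is a minimal numpy-only sketch; the class name LstsqSolver and the use of np.linalg.lstsq are illustrative choices, not part of dipy:

```python
import numpy as np

class LstsqSolver:
    """Minimal sketch of the SKLearnLinearSolver pattern: fit() solves
    y = Ax and stores the solution in coef_."""

    def fit(self, X, y):
        # Ordinary least squares; real subclasses may add constraints
        # (e.g. non-negativity, as in NonNegativeLeastSquares).
        self.coef_, *_ = np.linalg.lstsq(X, y, rcond=None)
        return self

    def predict(self, X):
        # Estimate of y, as described above: y_hat = X @ coef_
        return np.dot(X, self.coef_.T)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([2.0, 3.0])
solver = LstsqSolver().fit(X, y)
```

The point of the pattern is that any solver exposing fit/coef_ in this shape can be swapped in wherever a SKLearnLinearSolver is expected.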

A sklearn-like interface to scipy.optimize.nnls 



The same as np.dot(A, B), except it works even if A or B or both are sparse matrices. 

Solve y=Xh for h, using gradient descent, with X a sparse matrix. 
core.profile
Class for profiling cython code

Profile python/cython files or functions 
core.rng
Random number generation utilities.

Wichmann-Hill (2006) random number generator. 

Algorithm AS 183 Appl. 

Return a L'Ecuyer random number generator. 
core.sphere

Points on the unit sphere. 

Points on the unit sphere. 

Triangulate a set of vertices on the sphere. 

Extract all unique edges from given triangular faces. 

Remove duplicate sets. 

Models electrostatic repulsion on the unit sphere 

Reimplementation of disperse_charges making use of scipy.optimize.fmin_slsqp. 

Checks the Euler characteristic of a sphere. If \(f\) = number of faces, \(e\) = number of edges and \(v\) = number of vertices, the Euler formula says \(f-e+v = 2\) for a mesh on a sphere. 

Points on the unit sphere. 

Points on the unit sphere. 

Points on the unit sphere. 
core.sphere_stats
Statistics on spheres

Random unit vectors from a uniform distribution on the sphere. 

Principal direction and confidence ellipse Implements equations in section 6.3.1(ii) of Fisher, Lewis and Embleton, supplemented by equations in section 3.2.5. 

Computes the mean cosine distance of the best match between points of two sets of vectors S and T (angular similarity) 

Computes the cosine distance of the best match between points of two sets of vectors S and T 
core.subdivide_octahedron
Create a unit sphere by subdividing all triangles of an octahedron recursively.
The unit sphere has a radius of 1, which also means that all points in this sphere (assumed to have centre at [0, 0, 0]) have an absolute value (modulus) of 1. Another feature of the unit sphere is that the unit normals of this sphere are exactly the same as the vertices.
This recursive method will avoid the common problem of the polar singularity, produced by 2D (lon-lat) parameterization methods.

Creates a unit sphere by subdividing a unit octahedron. 

Creates a unit sphere by subdividing a unit octahedron, returns half the sphere. 
core.wavelet

3D Circular Shift 

Function generating inverse of the permutation 

3D Analysis Filter Bank 

3D Synthesis Filter Bank 

3D Synthesis Filter Bank 

3D Analysis Filter Bank 

3D Discrete Wavelet Transform 

Inverse 3D Discrete Wavelet Transform 
Spherical to Cartesian coordinates
This is the standard physics convention where theta is the inclination (polar) angle, and phi is the azimuth angle.
Imagine a sphere with center (0,0,0). Orient it with the z-axis running south-north, the y-axis running west-east and the x-axis from posterior to anterior. theta (the inclination angle) is the angle to rotate from the z-axis (the zenith) around the y-axis, towards the x-axis. Thus the rotation is counter-clockwise from the point of view of positive y. phi (azimuth) gives the angle of rotation around the z-axis towards the y-axis. The rotation is counter-clockwise from the point of view of positive z.
Equivalently, given a point P on the sphere, with coordinates x, y, z, theta is the angle between P and the z-axis, and phi is the angle between the projection of P onto the XY plane, and the X axis.
Geographical nomenclature designates theta as 'colatitude', and phi as 'longitude'
Parameters
----------
r : array_like
   radius
theta : array_like
   inclination or polar angle
phi : array_like
   azimuth angle
Returns
-------
x : array
   x coordinate(s) in Cartesian space
y : array
   y coordinate(s) in Cartesian space
z : array
   z coordinate
See these pages:
for excellent discussion of the many different conventions possible. Here we use the physics conventions, used in the wikipedia page.
Derivations of the formulae are simple. Consider a vector x, y, z of length r (norm of x, y, z). The inclination angle (theta) can be found from: cos(theta) == z / r > z == r * cos(theta). This gives the hypotenuse of the projection onto the XY plane, which we will call Q. Q == r*sin(theta). Now x / Q == cos(phi) > x == r * sin(theta) * cos(phi) and so on.
We have deliberately named this function sphere2cart rather than sph2cart to distinguish it from the Matlab function of that name, because the Matlab function uses an unusual convention for the angles that we did not want to replicate. The Matlab function is trivial to implement with the formulae given in the Matlab help.
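The derivation above reduces to a few lines of numpy. This is an illustrative sketch of the conventions, not dipy's implementation; the helper names sph2cart_physics and cart2sph_physics are made up:

```python
import numpy as np

def sph2cart_physics(r, theta, phi):
    # Physics convention described above: theta = inclination from +z,
    # phi = azimuth from +x toward +y.
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

def cart2sph_physics(x, y, z):
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)   # from cos(theta) == z / r
    phi = np.arctan2(y, x)     # azimuth in (-pi, pi]
    return r, theta, phi

r, theta, phi = 2.0, np.pi / 3, np.pi / 4
x, y, z = sph2cart_physics(r, theta, phi)
```

A round trip through both helpers recovers the original angles, which is a quick way to confirm a convention choice.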
Return angles for Cartesian 3D coordinates x, y, and z
See doc for sphere2cart
for angle conventions and derivation
of the formulae.
\(0\le\theta\mathrm{(theta)}\le\pi\) and \(-\pi\le\phi\mathrm{(phi)}\le\pi\)
Parameters
----------
x : array_like
x coordinate in Cartesian space
y : array_like
y coordinate in Cartesian space
z : array_like
z coordinate
Returns
-------
r : array
radius
theta : array
inclination (polar) angle
phi : array
azimuth angle
Return vector divided by its Euclidean (L2) norm
See unit vector and Euclidean norm
vec : array_like shape (3,)
vector divided by L2 norm
>>> import numpy as np
>>> vec = [1, 2, 3]
>>> l2n = np.sqrt(np.dot(vec, vec))
>>> nvec = normalized_vector(vec)
>>> np.allclose(np.array(vec) / l2n, nvec)
True
>>> vec = np.array([[1, 2, 3]])
>>> vec.shape == (1, 3)
True
>>> normalized_vector(vec).shape == (1, 3)
True
Return vector Euclidean (L2) norm
See unit vector and Euclidean norm
Vectors to norm.
Axis over which to norm. By default norm over last axis. If axis is None, vec is flattened then normed.
If True, the output will have the same number of dimensions as vec, with shape 1 on axis.
Euclidean norms of vectors.
>>> import numpy as np
>>> vec = [[8, 15, 0], [0, 36, 77]]
>>> vector_norm(vec)
array([ 17., 85.])
>>> vector_norm(vec, keepdims=True)
array([[ 17.],
[ 85.]])
>>> vector_norm(vec, axis=0)
array([ 8., 39., 77.])
Rodrigues formula
Rotation matrix for rotation around axis r for angle theta.
The rotation matrix is given by the Rodrigues formula:
R = Id + sin(theta)*Sn + (1-cos(theta))*Sn^2
with:
          0  -nz   ny
    Sn =  nz   0  -nx
         -ny  nx    0
where n = r / ||r||
In case the angle theta is very small, the above formula may lead to numerical instabilities. We instead use a Taylor expansion around theta=0:
R = I + sin(theta)/theta * Sr + (1-cos(theta))/theta^2 * Sr^2
leading to:
R = I + (1-theta^2/6)*Sr + (1/2-theta^2/24)*Sr^2
r : array_like, shape (3,)
   axis
theta : float
   angle in degrees
R : array, shape (3,3), rotation matrix
>>> import numpy as np
>>> from dipy.core.geometry import rodrigues_axis_rotation
>>> v=np.array([0,0,1])
>>> u=np.array([1,0,0])
>>> R=rodrigues_axis_rotation(v,40)
>>> ur=np.dot(R,u)
>>> np.round(np.rad2deg(np.arccos(np.dot(ur,u))))
40.0
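The Rodrigues formula above can be sketched directly in numpy. This is an illustrative implementation of the non-Taylor branch only (the function name rodrigues_rotation is made up, and the small-angle stabilization described above is omitted):

```python
import numpy as np

def rodrigues_rotation(r, theta_deg):
    # R = I + sin(theta)*Sn + (1 - cos(theta))*Sn^2, with n = r / ||r||
    theta = np.deg2rad(theta_deg)
    n = np.asarray(r, dtype=float)
    n = n / np.linalg.norm(n)
    # Sn is the skew-symmetric cross-product matrix of n
    Sn = np.array([[0.0, -n[2], n[1]],
                   [n[2], 0.0, -n[0]],
                   [-n[1], n[0], 0.0]])
    return np.eye(3) + np.sin(theta) * Sn + (1 - np.cos(theta)) * Sn @ Sn

# Rotate the x unit vector by 40 degrees about the z axis
R = rodrigues_rotation([0, 0, 1], 40)
u = np.array([1.0, 0.0, 0.0])
ur = R @ u
```

The angle between u and its image ur recovers the requested 40 degrees, mirroring the doctest above.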
Least squares positive semidefinite tensor estimation
B matrix  symmetric. We do not check the symmetry.
Estimated nearest positive semidefinite array to matrix B.
>>> import numpy as np
>>> B = np.diag([1, 1, -1])
>>> nearest_pos_semi_def(B)
array([[ 0.75, 0. , 0. ],
[ 0. , 0.75, 0. ],
[ 0. , 0. , 0. ]])
Distance across sphere surface between pts1 and pts2
pts1 : (N, R) or (R,) array_like
   where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D)
pts2 : (N, R) or (R,) array_like
   where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D). It should be possible to broadcast pts1 against pts2
radius : None or float, optional
   Radius of sphere. Default is to work out radius from mean of the length of each point vector.
check_radius : bool, optional
   If True, check if the points are on the sphere surface - i.e. check if the vector lengths in pts1 and pts2 are close to radius. Default is True.
Distances between corresponding points in pts1 and pts2 across the spherical surface, i.e. the great circle distance
cart_distance : Cartesian distance between points
vector_cosine : cosine of angle between vectors
>>> print('%.4f' % sphere_distance([0,1],[1,0]))
1.5708
>>> print('%.4f' % sphere_distance([0,3],[3,0]))
4.7124
Cartesian distance between pts1 and pts2
If either of pts1 or pts2 is 2D, then we take the first dimension to index points, and the second indexes coordinate. More generally, we take the last dimension to be the coordinate dimension.
pts1 : (N, R) or (R,) array_like
   where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D)
pts2 : (N, R) or (R,) array_like
   where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D). It should be possible to broadcast pts1 against pts2
Cartesian distances between corresponding points in pts1 and pts2
sphere_distance : distance between points on sphere surface
>>> cart_distance([0,0,0], [0,0,3])
3.0
Cosine of angle between two (sets of) vectors
The cosine of the angle between two vectors v1 and v2 is given by the inner product of v1 and v2 divided by the product of the vector lengths:
v_cos = np.inner(v1, v2) / (np.sqrt(np.sum(v1**2)) * np.sqrt(np.sum(v2**2)))
N vectors (as rows) or single vector. Vectors have R elements.
N vectors (as rows) or single vector. Vectors have R elements. It should be possible to broadcast vecs1 against vecs2
Vector cosines. To get the angles you will need np.arccos
The vector cosine will be the same as the correlation only if all the input vectors have zero mean.
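A broadcasting version of this formula can be sketched as follows; this is an illustration, not dipy's vector_cosine (the helper name vec_cosine is made up):

```python
import numpy as np

def vec_cosine(v1, v2):
    # Inner product over the last axis, divided by the product of the
    # vector lengths; works on single vectors or (N, R) stacks.
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    dots = np.sum(v1 * v2, axis=-1)
    norms = np.sqrt(np.sum(v1**2, axis=-1)) * np.sqrt(np.sum(v2**2, axis=-1))
    return dots / norms

# One (N, 3) stack against a single broadcast vector
cosines = vec_cosine([[1, 0, 0], [1, 1, 0]], [1, 0, 0])
```

As noted above, np.arccos of the result recovers the angles themselves.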
Lambert Equal Area Projection from polar sphere to plane
Return positions in (y1,y2) plane corresponding to the points
with polar coordinates (theta, phi) on the unit sphere, under the
Lambert Equal Area Projection mapping (see Mardia and Jupp (2000),
Directional Statistics, p. 161).
See doc for sphere2cart for angle conventions
- \(0 \le \theta \le \pi\) and \(0 \le \phi \le 2 \pi\)
- \(|(y_1,y_2)| \le 2\)
The Lambert EAP maps the upper hemisphere to the planar disc of radius 1
and the lower hemisphere to the planar annulus between radii 1 and 2,
and vice versa.
Parameters
----------
theta : array_like
theta spherical coordinates
phi : array_like
phi spherical coordinates
Returns
-------
y : (N,2) array
planar coordinates of points following mapping by Lambert’s EAP.
Lambert Equal Area Projection from cartesian vector to plane
Return positions in \((y_1,y_2)\) plane corresponding to the
directions of the vectors with cartesian coordinates xyz under the
Lambert Equal Area Projection mapping (see Mardia and Jupp (2000),
Directional Statistics, p. 161).
The Lambert EAP maps the upper hemisphere to the planar disc of radius 1 and the lower hemisphere to the planar annulus between radii 1 and 2, and vice versa.
See doc for sphere2cart
for angle conventions
Parameters
----------
x : array_like
x coordinate in Cartesian space
y : array_like
y coordinate in Cartesian space
z : array_like
z coordinate
Returns
-------
y : (N,2) array
planar coordinates of points following mapping by Lambert’s EAP.
Return homogeneous rotation matrix from Euler angles and axis sequence.
Code modified from the work of Christoph Gohlke link provided here http://www.lfd.uci.edu/~gohlke/code/transformations.py.html
ai, aj, ak : float
   Euler's roll, pitch and yaw angles
axes : str or tuple
   One of 24 axis sequences as string or encoded tuple
matrix : ndarray (4, 4)
>>> import numpy
>>> R = euler_matrix(1, 2, 3, 'syxz')
>>> numpy.allclose(numpy.sum(R[0]), 1.34786452)
True
>>> R = euler_matrix(1, 2, 3, (0, 1, 0, 1))
>>> numpy.allclose(numpy.sum(R[0]), 0.383436184)
True
>>> import math
>>> ai, aj, ak = (4.0*math.pi) * (numpy.random.random(3) - 0.5)
>>> for axes in _AXES2TUPLE.keys():
... _ = euler_matrix(ai, aj, ak, axes)
>>> for axes in _TUPLE2AXES.keys():
... _ = euler_matrix(ai, aj, ak, axes)
Return 4x4 transformation matrix from sequence of transformations.
Code modified from the work of Christoph Gohlke link provided here http://www.lfd.uci.edu/~gohlke/code/transformations.py.html
This is the inverse of the decompose_matrix
function.
Scaling factors.
Shear factors for xy, xz, yz axes.
Euler angles about static x, y, z axes.
Translation vector along x, y, z axes.
Perspective partition of matrix.
matrix : 4x4 array
>>> import math
>>> import numpy as np
>>> import dipy.core.geometry as gm
>>> scale = np.random.random(3) - 0.5
>>> shear = np.random.random(3) - 0.5
>>> angles = (np.random.random(3) - 0.5) * (2*math.pi)
>>> trans = np.random.random(3) - 0.5
>>> persp = np.random.random(4) - 0.5
>>> M0 = gm.compose_matrix(scale, shear, angles, trans, persp)
Return sequence of transformations from transformation matrix.
Code modified from the excellent work of Christoph Gohlke link provided here: http://www.lfd.uci.edu/~gohlke/code/transformations.py.html
Nondegenerate homogeneous transformation matrix
Three scaling factors.
Shear factors for xy, xz, yz axes.
Euler angles about static x, y, z axes.
Translation vector along x, y, z axes.
Perspective partition of matrix.
If matrix is of wrong type or degenerate.
>>> import numpy as np
>>> T0=np.diag([2,1,1,1])
>>> scale, shear, angles, trans, persp = decompose_matrix(T0)
a, b and c are 3-dimensional vectors which are the vertices of a triangle. The function returns the circumradius of the triangle, i.e. the radius of the smallest circle that can contain the triangle. In the degenerate case when the 3 points are collinear it returns half the distance between the furthest apart points.
the three vertices of the triangle
the desired circumradius
rotation matrix from 2 unit vectors
u, v being unit 3d vectors, return a 3x3 rotation matrix R that aligns u to v.
In general there are many rotations that will map u to v. If S is any rotation using v as an axis then S.R will also map u to v since (S.R)u = S(Ru) = Sv = v. The rotation R returned by vec2vec_rotmat leaves fixed the perpendicular to the plane spanned by u and v.
The transpose of R will align v to u.
u : array, shape(3,) v : array, shape(3,)
R : array, shape(3,3)
>>> import numpy as np
>>> from dipy.core.geometry import vec2vec_rotmat
>>> u=np.array([1,0,0])
>>> v=np.array([0,1,0])
>>> R=vec2vec_rotmat(u,v)
>>> np.dot(R,u)
array([ 0., 1., 0.])
>>> np.dot(R.T,v)
array([ 1., 0., 0.])
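One standard way to build such a matrix is the Rodrigues formula applied to the axis u x v. This is an illustrative sketch, not dipy's vec2vec_rotmat (the helper name u_to_v_rotmat is made up, and the anti-parallel case is left unhandled):

```python
import numpy as np

def u_to_v_rotmat(u, v):
    # Rotate unit vector u onto unit vector v about the axis u x v,
    # leaving the perpendicular to span(u, v) fixed.
    u, v = np.asarray(u, float), np.asarray(v, float)
    w = np.cross(u, v)
    s = np.linalg.norm(w)   # sin of the angle between u and v
    c = np.dot(u, v)        # cos of the angle between u and v
    if np.isclose(s, 0.0):
        if c > 0:
            return np.eye(3)          # u == v: nothing to do
        raise ValueError("anti-parallel vectors: axis must be chosen")
    # Skew-symmetric cross-product matrix of w
    Sw = np.array([[0.0, -w[2], w[1]],
                   [w[2], 0.0, -w[0]],
                   [-w[1], w[0], 0.0]])
    # Rodrigues form written in terms of the unnormalized axis w
    return np.eye(3) + Sw + Sw @ Sw * ((1 - c) / s**2)

R = u_to_v_rotmat([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

As in the doctest above, R maps u to v and R.T maps v back to u.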
Computes n evenly spaced perpendicular directions relative to a given vector v
Array containing the three cartesian coordinates of vector v
Number of perpendicular directions to generate
If half is True, perpendicular directions are sampled on half of the unit circumference perpendicular to v, otherwise perpendicular directions are sampled on the full circumference. Default of half is False
array of vectors perpendicular to v
Perpendicular directions are estimated using the following two step procedure:
1) the perpendicular directions are first sampled in a unit circumference parallel to the plane normal to the x-axis.
2) Samples are then rotated and aligned to the plane normal to vector v. The rotation matrix for this rotation is constructed as a reference frame basis whose axes are the following:
- The first axis is vector v
- The second axis is defined as the normalized vector given by the cross product between vector v and the unit vector aligned to the x-axis
- The third axis is defined as the cross product between the previously computed vector and vector v.
Following these two steps, the coordinates of the final perpendicular directions are given as:
\(\left[-\sin(a_{i})\sqrt{{v_{y}}^{2}+{v_{z}}^{2}} \; , \; \frac{v_{x}v_{y}\sin(a_{i})-v_{z}\cos(a_{i})}{\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \; , \; \frac{v_{x}v_{z}\sin(a_{i})+v_{y}\cos(a_{i})}{\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}}\right]\)
This procedure has a singularity when vector v is aligned to the x-axis. To solve this singularity, perpendicular directions in procedure's step 1 are defined in the plane normal to the y-axis and the second axis of the rotated frame of reference is computed as the normalized vector given by the cross product between vector v and the unit vector aligned to the y-axis. Following this, the coordinates of the perpendicular directions are given as:
\(\left[\frac{-\left(v_{x}v_{y}\sin(a_{i})+v_{z}\cos(a_{i})\right)}{\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \; , \; \sin(a_{i})\sqrt{{v_{x}}^{2}+{v_{z}}^{2}} \; , \; \frac{-v_{y}v_{z}\sin(a_{i})+v_{x}\cos(a_{i})}{\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}}\right]\)
For more details on this calculation, see `here <http://gsoc2015dipydki.blogspot.it/2015/07/rnhpost8computingperpendicular.html>`_.
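The two-step procedure can be sketched with cross products alone. This is a simplified illustration, not dipy's perpendicular_directions: the helper name perp_dirs is made up, and it assumes v is not aligned with the x-axis (the singular case handled above is omitted):

```python
import numpy as np

def perp_dirs(v, num=3, half=False):
    # Step 1+2 combined: build an orthonormal frame (v, e1, e2) with
    # cross products, then sample the circle perpendicular to v.
    v = np.asarray(v, float)
    v = v / np.linalg.norm(v)
    e1 = np.cross(v, [1.0, 0.0, 0.0])   # assumes v is not the x-axis
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.cross(e1, v)
    ang = np.linspace(0, np.pi if half else 2 * np.pi, num, endpoint=False)
    return np.cos(ang)[:, None] * e1 + np.sin(ang)[:, None] * e2

dirs = perp_dirs([0.0, 0.0, 1.0], num=4)
```

Every returned direction is a unit vector orthogonal to v, which is easy to verify numerically.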
Calculate the maximal distance from the center to a corner of a voxel, given an affine
The spatial transformation from the measurement to the scanner space.
The maximal distance to the corner of a voxel, given voxel size encoded in the affine.
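The center-to-corner distance can be sketched by enumerating the eight sign choices for the voxel edge vectors encoded in the affine. This is an illustration, not dipy's implementation; the function name corner_distance is made up:

```python
import numpy as np

def corner_distance(affine):
    # Maximal center-to-corner distance: half the length of the longest
    # voxel diagonal, over all sign combinations of the edge vectors.
    A = np.asarray(affine, float)[:3, :3]
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                      for sy in (-1, 1) for sz in (-1, 1)], float)
    diagonals = signs @ A.T   # each row is one full voxel diagonal
    return 0.5 * np.max(np.linalg.norm(diagonals, axis=1))

# For an isotropic 2 mm voxel, half the diagonal is sqrt(3)
d = corner_distance(np.diag([2.0, 2.0, 2.0, 1.0]))
```

For oblique affines the eight diagonals differ in length, which is why all sign combinations are checked.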
Test whether all points on a unit sphere lie in the same hemisphere.
2D numpy array with shape (N, 3) where N is the number of points. All points must lie on the unit sphere.
If True, one can find a hemisphere that contains all the points. If False, then the points do not lie in any hemisphere
If is_hemi == True, then pole is the “central” pole of the input vectors. Otherwise, pole is the zero vector.
https://rstudiopubsstatic.s3.amazonaws.com/27121_a22e51b47c544980bad594d5e0bb2d04.html
GradientTable
Bases: object
Diffusion gradient information
Diffusion gradients. The direction of each of these vectors corresponds to the bvector, and the length corresponds to the bvalue.
Gradients with bvalue less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.
diffusion gradients
The bvalue, or magnitude, of each gradient direction.
The qvalue for each gradient direction. Needs big and small delta.
The direction, represented as a unit vector, of each gradient.
Boolean array indicating which gradients have no diffusion weighting, i.e. b-value is close to 0.
Gradients with bvalue less than or equal to b0_threshold are considered to not have diffusion weighting.
The btensor of each gradient direction.
gradient_table
The GradientTable object is immutable. Do NOT assign attributes. If you have your gradient table in a bval & bvec format, we recommend using the factory function gradient_table
This function gives the unique rounded b-values of the data.
dipy.core.gradients.unique_bvals is deprecated. Please use dipy.core.gradients.unique_bvals_magnitude instead.
* deprecated from version: 1.2
* Raises <class 'dipy.utils.deprecator.ExpiredDeprecationError'> as of version: 1.4
Parameters
----------
bvals : ndarray
   Array containing the b-values
bmag : int
   The order of magnitude that the b-values have to differ to be considered an unique b-value. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag=log_{10}(max(bvals)) - 1\).
rbvals : bool, optional
   If True function also returns all individual rounded b-values. Default: False
Returns
-------
ubvals : ndarray
   Array containing the rounded unique b-values
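The magnitude-rounding idea can be sketched as follows; this is an illustration of the behavior described above, not dipy's exact implementation (the helper name unique_rounded_bvals is made up):

```python
import numpy as np

def unique_rounded_bvals(bvals, bmag=None):
    # Round b-values to the given order of magnitude before taking
    # uniques, so e.g. 995 and 1005 collapse onto the same shell.
    bvals = np.asarray(bvals, float)
    if bmag is None:
        # default described above: bmag = log10(max(bvals)) - 1
        bmag = int(np.log10(bvals.max())) - 1
    b = bvals / (10 ** bmag)
    return np.unique(np.round(b)) * (10 ** bmag)

# Scanner-reported shells at ~0, ~1000 and ~2000 s/mm^2
ub = unique_rounded_bvals([0, 995, 1000, 1005, 2000])
```

This is the usual way multi-shell data with slightly jittered b-values is reduced to its nominal shells.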
Creates a GradientTable from a bvals array and a bvecs array
The bvalue, or magnitude, of each gradient direction.
The direction, represented as a unit vector, of each gradient.
Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting.
Each vector in bvecs must be a unit vector, up to a tolerance of atol.
a string specifying the shape of the encoding tensor for all volumes in data. Options: 'LTE', 'PTE', 'STE', 'CTE' corresponding to linear, planar, spherical, and "cigar-shaped" tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor's normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of strings of shape (N,), (N, 1), or (1, N) specifying the encoding tensor shape for each volume separately. N corresponds to the number of volumes in data. Options for elements in array: 'LTE', 'PTE', 'STE', 'CTE' corresponding to linear, planar, spherical, and "cigar-shaped" tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor's normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of shape (N,3,3) specifying the b-tensor of each volume exactly. N corresponds to the number of volumes in data. No rotation or scaling is performed.
Other keyword inputs are passed to GradientTable.
A GradientTable with all the gradient information.
GradientTable, gradient_table
A general function for creating diffusion MR gradients.
It reads, loads and prepares scanner parameters like the bvalues and bvectors so that they can be useful during the reconstruction process.
qvalue given in 1/mm
bvecs : can be any of two options
an array of shape (N, 3) or (3, N) with the bvectors.
a path for the file which contains an array like the previous.
acquisition pulse separation time in seconds
acquisition pulse duration time in seconds
All b-values with values less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.
All bvectors need to be unit vectors up to a tolerance.
A GradientTable with all the gradient information.
>>> import numpy as np
>>> from dipy.core.gradients import gradient_table_from_qvals_bvecs
>>> qvals = 30. * np.ones(7)
>>> big_delta = .03 # pulse separation of 30ms
>>> small_delta = 0.01 # pulse duration of 10ms
>>> qvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
... [1, 0, 0],
... [0, 1, 0],
... [0, 0, 1],
... [sq2, sq2, 0],
... [sq2, 0, sq2],
... [0, sq2, sq2]])
>>> gt = gradient_table_from_qvals_bvecs(qvals, bvecs,
... big_delta, small_delta)
Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and it gives a bit higher values e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
We assume that the minimum number of bvalues is 7.
Bvectors should be unit vectors.
A general function for creating diffusion MR gradients.
It reads, loads and prepares scanner parameters like the bvalues and bvectors so that they can be useful during the reconstruction process.
gradient strength given in T/mm
bvecs : can be any of two options
an array of shape (N, 3) or (3, N) with the bvectors.
a path for the file which contains an array like the previous.
acquisition pulse separation time in seconds
acquisition pulse duration time in seconds
All b-values with values less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.
All bvectors need to be unit vectors up to a tolerance.
A GradientTable with all the gradient information.
>>> import numpy as np
>>> from dipy.core.gradients import (
... gradient_table_from_gradient_strength_bvecs)
>>> gradient_strength = .03e-3 * np.ones(7) # clinical strength at 30 mT/m
>>> big_delta = .03 # pulse separation of 30ms
>>> small_delta = 0.01 # pulse duration of 10ms
>>> gradient_strength[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
... [1, 0, 0],
... [0, 1, 0],
... [0, 0, 1],
... [sq2, sq2, 0],
... [sq2, 0, sq2],
... [0, sq2, sq2]])
>>> gt = gradient_table_from_gradient_strength_bvecs(
... gradient_strength, bvecs, big_delta, small_delta)
Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and it gives a bit higher values e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
We assume that the minimum number of bvalues is 7.
Bvectors should be unit vectors.
A general function for creating diffusion MR gradients.
It reads, loads and prepares scanner parameters like the bvalues and bvectors so that they can be useful during the reconstruction process.
bvals : can be any of the four options
an array of shape (N,) or (1, N) or (N, 1) with the bvalues.
a path for the file which contains an array like the above (1).
an array of shape (N, 4) or (4, N). Then this parameter is considered to be a btable which contains both bvals and bvecs. In this case the next parameter is skipped.
a path for the file which contains an array like the one at (3).
bvecs : can be any of two options
an array of shape (N, 3) or (3, N) with the bvectors.
a path for the file which contains an array like the previous.
acquisition pulse separation time in seconds (default None)
acquisition pulse duration time in seconds (default None)
All b-values less than or equal to b0_threshold are considered b0s, i.e. without diffusion weighting.
All bvectors need to be unit vectors up to a tolerance.
btens : can be any of three options
a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of strings of shape (N,), (N, 1), or (1, N) specifying the encoding tensor shape for each volume separately. N corresponds to the number of volumes in data. Options for elements in the array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of shape (N, 3, 3) specifying the b-tensor of each volume exactly. N corresponds to the number of volumes in data. No rotation or scaling is performed.
A GradientTable with all the gradient information.
>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
... [1, 0, 0],
... [0, 1, 0],
... [0, 0, 1],
... [sq2, sq2, 0],
... [sq2, 0, sq2],
... [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs)
>>> gt.bvecs.shape == bvecs.shape
True
>>> gt = gradient_table(bvals, bvecs.T)
>>> gt.bvecs.shape == bvecs.T.shape
False
Often b0s (b-values corresponding to images without diffusion weighting) have values of 0; however, in some cases the scanner cannot provide b0s of exactly 0 and instead gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
We assume that the minimum number of bvalues is 7.
Bvectors should be unit vectors.
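The role of b0_threshold can be illustrated without dipy at all (a hypothetical numpy-only helper, mirroring the described behaviour):

```python
import numpy as np

def b0s_mask(bvals, b0_threshold=50):
    """Boolean mask of volumes treated as b0s (b-value <= threshold)."""
    bvals = np.asarray(bvals)
    return bvals <= b0_threshold

# scanner reported "b0" volumes with small non-zero b-values of 6 and 12
bvals = np.array([12, 1500, 1500, 6, 1500])
mask = b0s_mask(bvals)
```

Volumes flagged by the mask are treated as non-diffusion-weighted references.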
Reorient the directions in a GradientTable.
When correcting for motion, rotation of the diffusion-weighted volumes might cause systematic bias in rotationally invariant measures, such as FA and MD, and also cause characteristic biases in tractography, unless the gradient directions are appropriately reoriented to compensate for this effect [Leemans2009].
The nominal gradient table with which the data were acquired.
Each entry in this list or array contains either an affine transformation (4, 4) or a rotation matrix (3, 3). In both cases, the transformations encode the rotation that was applied to the image corresponding to one of the non-zero gradient directions (ordered according to their order in gtab.bvecs[~gtab.b0s_mask]).
atol: see gradient_table()
gtab : a GradientTable class instance with the reoriented directions
Leemans, A. and Jones, D.K. (2009). The B-Matrix Must Be Rotated When Correcting for Subject Motion in DTI Data. MRM, 61: 1336-1349.
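The core of the reorientation is applying each volume's rotation to its gradient direction. A hedged numpy sketch of the idea (affines are reduced to their rotational part via polar decomposition; this mirrors the concept, not dipy's exact implementation):

```python
import numpy as np

def reorient_dwi_bvecs(bvecs, transforms):
    """Rotate each non-b0 gradient direction by its motion-correction transform.

    bvecs : (N, 3) unit vectors; transforms : list of (3, 3) or (4, 4) arrays.
    """
    out = np.array(bvecs, dtype=float)
    for i, t in enumerate(transforms):
        R = np.asarray(t)[:3, :3]
        # keep only the rotation component of an affine (polar decomposition)
        u, _, vt = np.linalg.svd(R)
        R = u @ vt
        out[i] = R @ out[i]
    return out
```

For a pure rotation matrix the polar-decomposition step is a no-op; for an affine with scaling or shear it extracts the closest rotation.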
Generates N bvectors.
Uses dipy.core.sphere.disperse_charges to model electrostatic repulsion on a unit sphere.
The number of bvectors to generate. This should be equal to the number of bvals used.
Number of iterations to run.
The generated directions, represented as a unit vector, of each gradient.
This function rounds the b-values.

Parameters
----------
bvals : ndarray
    Array containing the b-values
bmag : int
    The order of magnitude to round the b-values. If not given, b-values will be rounded relative to the order of magnitude \(bmag = bmaxmag - 1\), where bmaxmag is the magnitude order of the largest b-value.

Returns
-------
rbvals : ndarray
    Array containing the rounded b-values
Gives the unique bvalues of the data, within a tolerance gap
The b-values must be grouped in clusters separated by a distance greater than the tolerance gap. If all the b-values of a cluster fit within the tolerance gap, the highest b-value is kept.
Array containing the bvalues
The tolerated gap between the bvalues to extract and the actual bvalues.
Array containing the unique bvalues using the median value for each cluster
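The clustering idea can be sketched in plain numpy (an illustration of the described behaviour using the median per cluster, not dipy's implementation):

```python
import numpy as np

def unique_bvals_tol(bvals, tol=20):
    """Median b-value of each cluster; clusters are separated by gaps > tol."""
    s = np.sort(np.asarray(bvals, dtype=float))
    # start a new cluster wherever consecutive b-values differ by more than tol
    breaks = np.where(np.diff(s) > tol)[0] + 1
    clusters = np.split(s, breaks)
    return np.array([np.median(c) for c in clusters])

# b0s near 0, a shell near 1000, and a shell at 2000
ubvals = unique_bvals_tol([0, 5, 995, 1000, 1005, 2000])
```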
Get indices where the bvalue is bval
Array containing the bvalues
bvalue to extract indices
The tolerated gap between the bvalues to extract and the actual bvalues.
Array of indices where the bvalue is bval
This function gives the unique rounded b-values of the data.

Parameters
----------
bvals : ndarray
    Array containing the b-values
bmag : int
    The order of magnitude that the b-values have to differ to be considered a unique b-value. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag = log_{10}(max(bvals)) - 1\).
rbvals : bool, optional
    If True, the function also returns all individual rounded b-values. Default: False

Returns
-------
ubvals : ndarray
    Array containing the rounded unique b-values
Check if you have enough different b-values in your gradient table.

Parameters
----------
gtab : GradientTable class instance.
n_bvals : int
    The number of different b-values you are checking for.
non_zero : bool
    Whether to check only non-zero b-values. In this case, we will require at least n_bvals non-zero b-values (where non-zero is defined depending on the gtab object’s b0_threshold attribute).
bmag : int
    The order of magnitude of the b-values used. The function will normalize the b-values relative to \(10^{bmag}\). Default: derive this value from the maximal b-value provided: \(bmag = log_{10}(max(bvals)) - 1\).

Returns
-------
bool
    Whether there are at least n_bvals different b-values in the gradient table used.
Compute trace, anisotropy and asymmetry parameters from btensors.
input btensor, or btensors, where N = number of btensors
Any parameters smaller than this value are considered to be 0
bvalue(s) (trace(s))
normalized tensor anisotropy(s)
tensor asymmetry(s)
This function can be used to get btensor parameters directly from the GradientTable btens attribute.
>>> lte = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
>>> bval, bdelta, b_eta = btens_to_params(lte)
>>> print("bval={}; bdelta={}; b_eta={}".format(bval, bdelta, b_eta))
bval=[ 1.]; bdelta=[ 1.]; b_eta=[ 0.]
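The mapping from a b-tensor to (bval, b_delta) can be sketched from its eigenvalues. This is a hedged re-derivation, following the standard convention that b_delta is 1 for linear, -1/2 for planar, and 0 for spherical encoding; it is not dipy's source code:

```python
import numpy as np

def btensor_params(B):
    """Return (bval, b_delta) for a 3x3 b-tensor B."""
    evals = np.linalg.eigvalsh(B)
    bval = evals.sum()                  # trace of the b-tensor = b-value
    iso = bval / 3.0
    # the "axial" eigenvalue is the one farthest from the isotropic mean
    axial = evals[np.argmax(np.abs(evals - iso))]
    radial = (bval - axial) / 2.0
    b_delta = (axial - radial) / bval if bval > 0 else 0.0
    return bval, b_delta

lte = np.diag([1.0, 0.0, 0.0])   # linear tensor encoding
pte = np.diag([0.5, 0.5, 0.0])   # planar tensor encoding
```

The linear tensor gives b_delta = 1 and the planar tensor gives b_delta = -0.5, matching the parameter ranges documented below.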
Compute btensor from trace, anisotropy and asymmetry parameters.
bvalue (>= 0)
normalized tensor anisotropy (>= -0.5 and <= 1)
tensor asymmetry (>= 0 and <= 1)
output btensor
Implements eq. 7.11, p. 231 in [1].

[1] Topgaard, D. NMR methods for studying microscopic diffusion anisotropy, in: R. Valiullin (Ed.), Diffusion NMR of Confined Systems: Fluid Transport in Porous Solids and Heterogeneous Materials, Royal Society of Chemistry, Cambridge, UK, 2016.
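The inverse construction can be sketched directly as an axis-aligned eigenvalue parameterization. The formulas below are my reconstruction consistent with the (bval, b_delta, b_eta) convention, not a verbatim copy of eq. 7.11 or of dipy's code:

```python
import numpy as np

def make_btensor(bval, b_delta, b_eta=0.0):
    """Axis-aligned b-tensor with given trace, anisotropy and asymmetry."""
    lam_xx = bval / 3.0 * (1 - b_delta * (1 + b_eta))
    lam_yy = bval / 3.0 * (1 - b_delta * (1 - b_eta))
    lam_zz = bval / 3.0 * (1 + 2 * b_delta)
    return np.diag([lam_xx, lam_yy, lam_zz])

lte = make_btensor(1.0, 1.0)    # linear:    diag(0, 0, 1)
ste = make_btensor(1.0, 0.0)    # spherical: diag(1/3, 1/3, 1/3)
pte = make_btensor(1.0, -0.5)   # planar:    diag(1/2, 1/2, 0)
```

Note that all three tensors share the same trace (b-value); only the eigenvalue distribution changes with b_delta.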
Change the orientation of gradients or other vectors.
Moves vectors, stored along axis, from current_ornt to new_ornt. For example the vector [x, y, z] in “RAS” will be [-x, -y, z] in “LPS”.
R: Right, A: Anterior, S: Superior, L: Left, P: Posterior, I: Inferior
Graph
Bases: object
A simple graph class
A graph class with nodes and edges :)
This class allows us to:
find the shortest path
find all paths
add/delete nodes and edges
get parent & children nodes
>>> from dipy.core.graph import Graph
>>> g=Graph()
>>> g.add_node('a',5)
>>> g.add_node('b',6)
>>> g.add_node('c',10)
>>> g.add_node('d',11)
>>> g.add_edge('a','b')
>>> g.add_edge('b','c')
>>> g.add_edge('c','d')
>>> g.add_edge('b','d')
>>> g.up_short('d')
['d', 'b', 'a']
Performs histogram equalization on arr.
This was taken from:
http://www.janeriksolem.net/2009/06/histogramequalizationwithpythonand.html
Image on which to perform histogram equalization.
Number of bins used to construct the histogram.
Histogram equalized image.
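The linked approach boils down to mapping intensities through the image's normalized cumulative histogram; a minimal numpy sketch of that idea (not dipy's exact code):

```python
import numpy as np

def histeq(arr, num_bins=256):
    """Histogram-equalize an image via the normalized CDF of its intensities."""
    hist, bin_edges = np.histogram(arr.flatten(), num_bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                     # normalize the CDF to [0, 1]
    # map each pixel through the CDF by linear interpolation
    out = np.interp(arr.flatten(), bin_edges[:-1], cdf)
    return out.reshape(arr.shape)

img = np.linspace(0.0, 1.0, 100).reshape(10, 10)
out = histeq(img)
```

The output preserves the image shape and lies in [0, 1], with intensities spread approximately uniformly.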
An Ndimensional iterator object to index arrays.
Given the shape of an array, an ndindex instance iterates over the Ndimensional index of the array. At each iteration a tuple of indices is returned; the last dimension is iterated over first.
The dimensions of the array.
>>> from dipy.core.ndindex import ndindex
>>> shape = (3, 2, 1)
>>> for index in ndindex(shape):
... print(index)
(0, 0, 0)
(0, 1, 0)
(1, 0, 0)
(1, 1, 0)
(2, 0, 0)
(2, 1, 0)
ResetMixin
Bases: object
A Mixin class to add a .reset() method to users of OneTimeProperty.
By default, auto attributes, once computed, become static. If they happen to depend on other parts of an object and those parts change, their values may now be invalid.
This class offers a .reset() method that users can call explicitly when they know the state of their objects may have changed and they want to ensure that all their special attributes should be invalidated. Once reset() is called, all their auto attributes are reset to their OneTimeProperty descriptors, and their accessor functions will be triggered again.
Warning
If a class has a set of attributes that are OneTimeProperty, but that can be initialized from any one of them, do NOT use this mixin! For instance, UniformTimeSeries can be initialized with only sampling_rate and t0; sampling_interval and time are auto-computed. But if you were to reset() a UniformTimeSeries, it would lose all 4, and there would then be no way to break the circular dependency chains.
If this becomes a problem in practice (for our analyzer objects it isn’t, as they don’t have the above pattern), we can extend reset() to check for a _no_reset set of names in the instance which are meant to be kept protected. But for now this is NOT done, so caveat emptor.
>>> class A(ResetMixin):
... def __init__(self,x=1.0):
... self.x = x
...
... @auto_attr
... def y(self):
... print('*** y computation executed ***')
... return self.x / 2.0
...
>>> a = A(10)
About to access y twice. The second time, no computation is done:
>>> a.y
*** y computation executed ***
5.0
>>> a.y
5.0
Changing x:
>>> a.x = 20
a.y doesn’t change to 10, since it is a static attribute:
>>> a.y
5.0
We now reset a, and this will then force all auto attributes to recompute the next time we access them:
>>> a.reset()
About to access y twice again after reset():
>>> a.y
*** y computation executed ***
10.0
>>> a.y
10.0
OneTimeProperty
Bases: object
A descriptor to make special properties that become normal attributes.
This is meant to be used mostly by the auto_attr decorator in this module.
Decorator to create OneTimeProperty attributes.
func : method
The method that will be called the first time to compute a value. Afterwards, the method’s name will be a standard attribute holding the value of this computation.
>>> class MagicProp(object):
... @auto_attr
... def a(self):
... return 99
...
>>> x = MagicProp()
>>> 'a' in x.__dict__
False
>>> x.a
99
>>> 'a' in x.__dict__
True
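The descriptor trick behind this behaviour is compact: on first access the computed value is stored in the instance __dict__ under the same name, shadowing the (non-data) descriptor on later lookups. A minimal sketch of the idea, not dipy's exact source:

```python
class OneTime:
    """Non-data descriptor: computes once, then becomes a plain attribute."""

    def __init__(self, func):
        self.getter = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        val = self.getter(obj)
        # cache in the instance dict; future lookups bypass the descriptor
        obj.__dict__[self.name] = val
        return val

class Demo:
    calls = 0

    @OneTime
    def expensive(self):
        Demo.calls += 1
        return 42

d = Demo()
first = d.expensive    # computed
second = d.expensive   # read from d.__dict__, no recomputation
```

This works because OneTime defines no __set__, so the instance attribute takes precedence once it exists.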
Optimizer
Bases: object
A class for handling minimization of scalar function of one or more variables.
Objective function.
Initial guess.
Extra arguments passed to the objective function and its derivatives (Jacobian, Hessian).
Type of solver. Should be one of
‘Nelder-Mead’
‘Powell’
‘CG’
‘BFGS’
‘Newton-CG’
‘Anneal’
‘L-BFGS-B’
‘TNC’
‘COBYLA’
‘SLSQP’
‘dogleg’
‘trust-ncg’
Jacobian of objective function. Only for CG, BFGS, Newton-CG, dogleg, trust-ncg. If jac is a Boolean and is True, fun is assumed to return the value of the Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of the objective. In this case, it must accept the same arguments as fun.
Hessian of objective function or Hessian of objective function times an arbitrary vector p. Only for Newton-CG, dogleg, trust-ncg. Only one of hessp or hess needs to be given. If hess is provided, then hessp will be ignored. If neither hess nor hessp is provided, then the Hessian product will be approximated using finite differences on jac. hessp must compute the Hessian times an arbitrary vector.
Bounds for variables (only for L-BFGS-B, TNC and SLSQP). (min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction.
Constraints definition (only for COBYLA and SLSQP). Each constraint is defined in a dictionary with fields:
type : str
Constraint type: ‘eq’ for equality, ‘ineq’ for inequality.
fun : callable
The function defining the constraint.
jac : callable, optional
The Jacobian of fun (only for SLSQP).
args : sequence, optional
Extra arguments to be passed to the function and Jacobian.
Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be nonnegative. Note that COBYLA only supports inequality constraints.
Tolerance for termination. For detailed control, use solverspecific options.
Called after each iteration, as callback(xk), where xk is the current parameter vector. Only available using Scipy >= 0.12.
A dictionary of solver options. All methods accept the following generic options:
maxiter : int
Maximum number of iterations to perform.
disp : bool
Set to True to print convergence messages.
For methodspecific options, see show_options(‘minimize’, method).
Save history of x for each iteration. Only available using Scipy >= 0.12.
scipy.optimize.minimize
SKLearnLinearSolver
Bases: object
Provide a sklearn-like uniform interface to algorithms that solve problems of the form \(y = Ax\) for \(x\). Subclasses of SKLearnLinearSolver should provide a ‘fit’ method with the following signature: SKLearnLinearSolver.fit(X, y), which sets an attribute SKLearnLinearSolver.coef_ of shape (X.shape[1],), such that an estimate of y can be calculated as: y_hat = np.dot(X, SKLearnLinearSolver.coef_.T)
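A minimal subclass satisfying that contract might look like this. The class names mirror this module, but the implementation is an illustrative sketch using ordinary least squares, not dipy's source:

```python
import numpy as np

class SKLearnLinearSolver:
    """Base contract: subclasses set self.coef_ in fit(X, y)."""

    def predict(self, X):
        return np.dot(X, self.coef_.T)

class OLSSolver(SKLearnLinearSolver):
    """Ordinary least squares solver fulfilling the fit/coef_ contract."""

    def fit(self, X, y):
        # least-squares estimate of the coefficients, shape (X.shape[1],)
        self.coef_, *_ = np.linalg.lstsq(X, y, rcond=None)
        return self

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
coef_true = np.array([2.0, 3.0])
solver = OLSSolver().fit(X, X @ coef_true)
```

Any solver following this interface (nnls, regularized variants, etc.) can then be swapped in without changing the calling code.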
NonNegativeLeastSquares
Bases: SKLearnLinearSolver
A sklearnlike interface to scipy.optimize.nnls
PositiveDefiniteLeastSquares
Bases: object
Regularized least squares with linear matrix inequality constraints.

Generate a CVXPY representation of a regularized least squares optimization problem subject to linear matrix inequality constraints.

Parameters
----------
m : int
    Positive int indicating the number of regressors.
A : array (t = m + k + 1, p, p) (optional)
    Constraint matrices \(A\).
L : array (m, m) (optional)
    Regularization matrix \(L\). Default: None.

Notes
-----
The basic problem is to solve for \(h\) the minimization of \(c = \|X h - y\|^2 + \|L h\|^2\), where \(X\) is an (m, m) upper triangular design matrix and \(y\) is a set of m measurements, subject to the constraint that \(M = A_0 + \sum_{i=0}^{m-1} h_i A_{i+1} + \sum_{j=0}^{k-1} s_j A_{m+j+1} > 0\), where \(s_j\) are slack variables and where the inequality sign denotes positive definiteness of the matrix \(M\). The sparsity pattern and size of \(X\) and \(y\) are fixed, because every design matrix and set of measurements can be reduced to an equivalent (minimal) formulation of this type.

This formulation is used here mainly to enforce polynomial sum-of-squares constraints on various models, as described in [1]_.

References
----------
.. [1] Dela Haije et al. “Enforcing necessary non-negativity constraints for common diffusion MRI models using sum of squares programming”. NeuroImage 209, 2020, 116405.
Solve CVXPY problem.

Solve a CVXPY problem instance for a given design matrix and a given set of observations, and return the optimum.

Parameters
----------
design_matrix : array (n, m)
    Design matrix.
measurements : array (n)
    Measurements.
check : boolean (optional)
    If True, check whether the unconstrained optimization solution already satisfies the constraints before running the constrained optimization. This adds overhead but can avoid unnecessary constrained optimization calls. Default: False
kwargs : keyword arguments
    Arguments passed to the CVXPY solve method.

Returns
-------
h : array (m)
    Estimated optimum for problem variables \(h\).
The same as np.dot(A, B), except it works even if A or B or both are sparse matrices.
A, B : arrays of shape (m, n), (n, k)
The matrix product AB. If both A and B are sparse, the result will be a sparse matrix. Otherwise, a dense result is returned
See discussion here: http://mail.scipy.org/pipermail/scipy-user/2010-November/027700.html
Solve y=Xh for h, using gradient descent, with X a sparse matrix.
The data. Needs to be dense.
The regressors
The persistence of the gradient.
The increment of parameter update in each iteration
Whether to enforce nonnegativity of the solution.
How many rounds to run between error evaluations for convergence-checking.
Don’t check errors more than this number of times if no improvement in r-squared is seen.
The percentage improvement in SSE required each time to say that things are still going well.
h_best : The best estimate of the parameters.
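The core loop of such a solver can be sketched in a few lines (plain numpy, dense matrices, and non-negativity enforced by projection; a simplification of the described function, not its source):

```python
import numpy as np

def gd_solve(X, y, step=0.01, n_iter=2000, non_neg=True):
    """Solve y = Xh by gradient descent on the squared error."""
    h = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ h - y)      # gradient of 0.5 * ||Xh - y||^2
        h -= step * grad
        if non_neg:
            h = np.maximum(h, 0)      # project onto the non-negative orthant
    return h

X = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
h = gd_solve(X, X @ np.array([1.0, 0.5]))
```

The real implementation adds momentum (the "persistence of the gradient") and the convergence checks described above instead of a fixed iteration count.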
Profiler
Bases: object
Profile python/cython files or functions
If you are profiling cython code you need to add # cython: profile=True at the top of your .pyx file
and for the functions that you do not want to profile you can use this decorator in your cython files
@cython.profile(False)
caller : file or function call
args : function arguments
stats : function, stats.print_stats(10) will print the 10 slowest functions
>>> from dipy.core.profile import Profiler
>>> import numpy as np
>>> p = Profiler(np.sum, np.random.rand(1000000, 3))
>>> fname = 'test.py'
>>> p = Profiler(fname)
>>> p.print_stats(10)
>>> p.print_stats('det')
http://docs.cython.org/src/tutorial/profiling_tutorial.html http://docs.python.org/library/profile.html http://packages.python.org/line_profiler/
Print stats for profiling
You can use it in all different ways developed in pstats for example print_stats(10) will give you the 10 slowest calls or print_stats(‘function_name’) will give you the stats for all the calls with name ‘function_name’
N : stats.print_stats argument
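Under the hood this builds on Python's stdlib profiler; the same behaviour can be obtained directly with cProfile and pstats (stdlib only, an illustrative equivalent rather than the Profiler class itself):

```python
import cProfile
import io
import pstats

def profile_call(func, *args):
    """Profile one call; return (result, formatted stats report)."""
    pr = cProfile.Profile()
    result = pr.runcall(func, *args)
    stream = io.StringIO()
    stats = pstats.Stats(pr, stream=stream).sort_stats("cumulative")
    stats.print_stats(10)          # ten slowest calls, like print_stats(10)
    return result, stream.getvalue()

res, report = profile_call(sum, range(1000))
```

Passing a string pattern to print_stats instead of an integer filters the report by function name, mirroring print_stats('function_name').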
Wichmann Hill (2006) random number generator.
B.A. Wichmann, I.D. Hill, Generating good pseudo-random numbers, Computational Statistics & Data Analysis, Volume 51, Issue 3, 1 December 2006, Pages 1614-1622, ISSN 0167-9473, DOI: 10.1016/j.csda.2006.05.019. (http://www.sciencedirect.com/science/article/B6V8V4K7F86W2/2/a3a33291b8264e4c882a8f21b6e43351) for advice on generating many sequences for use together, and on alternative algorithms and codes
First seed value. Should not be null. (default 100001)
Second seed value. Should not be null. (default 200002)
Third seed value. Should not be null. (default 300003)
Fourth seed value. Should not be null. (default 400004)
pseudo-random number uniformly distributed in [0, 1]
>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.WichmannHill2006() for i in range(N)]
Algorithm AS 183 Appl. Statist. (1982) vol.31, no.2.
Returns a pseudo-random number rectangularly distributed between 0 and 1. The cycle length is 6.95E+12 (see page 123 of Applied Statistics (1984) vol. 33), not as claimed in the original article.
ix, iy and iz should be set to integer values between 1 and 30000 before the first entry.
Integer arithmetic up to 5212632 is required.
First seed value. Should not be null. (default 100001)
Second seed value. Should not be null. (default 200002)
Third seed value. Should not be null. (default 300003)
pseudo-random number uniformly distributed in [0, 1]
>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.WichmannHill1982() for i in range(N)]
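Algorithm AS 183 itself is short enough to sketch in pure Python. The multipliers and moduli follow the published algorithm; this is an illustration, not dipy's implementation:

```python
def wichmann_hill_1982(ix=100001, iy=200002, iz=300003, n=1):
    """Return n pseudo-random floats in [0, 1) using Algorithm AS 183."""
    out = []
    for _ in range(n):
        # three small linear congruential generators...
        ix = (171 * ix) % 30269
        iy = (172 * iy) % 30307
        iz = (170 * iz) % 30323
        # ...combined by summing their fractions modulo 1
        out.append((ix / 30269.0 + iy / 30307.0 + iz / 30323.0) % 1.0)
    return out

sample = wichmann_hill_1982(n=1000)
```

The combined cycle length far exceeds any single component generator, which is the point of the construction.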
Return a LEcuyer random number generator.
Generate uniformly distributed random numbers using the 32bit generator from figure 3 of:
L’Ecuyer, P. Efficient and portable combined random number generators, C.A.C.M., vol. 31, 742-749 & 774, June 1988.
The cycle length is claimed to be 2.30584E+18
First seed value. Should not be null. (default 100001)
Second seed value. Should not be null. (default 200002)
pseudo-random number uniformly distributed in [0, 1]
>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.LEcuyer() for i in range(N)]
Sphere
Bases: object
Points on the unit sphere.
The sphere can be constructed using one of three conventions:
Sphere(x, y, z)
Sphere(xyz=xyz)
Sphere(theta=theta, phi=phi)
Vertices as xyz coordinates.
Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.
Vertices as xyz coordinates.
Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.
Edges between vertices. If unspecified, the edges are derived from the faces.
Find the index of the vertex in the Sphere closest to the input vector
A unit vector
The index into the Sphere.vertices array that gives the closest vertex (in angle).
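Finding the closest vertex in angle reduces to maximizing the dot product with the unit input vector; a numpy sketch of that idea (illustrative, not dipy's code):

```python
import numpy as np

def find_closest(vertices, direction):
    """Index of the vertex with the smallest angle to `direction`."""
    cos_angles = np.dot(vertices, direction)  # cosine of angle per vertex
    return int(np.argmax(cos_angles))

verts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
idx = find_closest(verts, np.array([0.1, 0.9, 0.1]))
```

For a HemiSphere, where v and -v are identified, the same idea would use the absolute value of the dot product.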
Subdivides each face of the sphere into four new faces.
New vertices are created at a, b, and c. Then each face [x, y, z] is divided into faces [x, a, c], [y, a, b], [z, b, c], and [a, b, c].
        y
        /\
       /  \
     a/____\b
     /\    /\
    /  \  /  \
   /____\/____\
  x      c     z
The number of subdivisions to perform.
The subdivided sphere.
HemiSphere
Bases: Sphere
Points on the unit sphere.
A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.
The HemiSphere can be constructed using one of three conventions:
HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
Vertices as xyz coordinates.
Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.
Vertices as xyz coordinates.
Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.
Edges between vertices. If unspecified, the edges are derived from the faces.
Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.
Sphere
Create a HemiSphere from points
Extract all unique edges from given triangular faces.
Vertex indices forming triangular faces.
If true, a mapping to the edges of each face is returned.
Unique edges.
For each face, [x, y, z], a mapping to its edges [a, b, c].
       y
       /\
      /  \
    a/    \b
    /      \
   /________\
  x     c    z
Remove duplicate sets.
N sets of size k.
If True, also returns the indices of unique_sets that can be used to reconstruct sets (the original ordering of each set may not be preserved).
Unique sets.
The indices to reconstruct sets from unique_sets.
Models electrostatic repulsion on the unit sphere
Places charges on a sphere and simulates the repulsive forces felt by each one. Allows the charges to move for some number of iterations and returns their final location as well as the total potential of the system at each step.
Points on a unit sphere.
Number of iterations to run.
Using a smaller const could provide a more accurate result, but will need more iterations to converge.
Distributed points on a unit sphere.
The electrostatic potential at each iteration. This can be useful to check if the repulsion converged to a minimum.
This function is meant to be used with diffusion imaging so antipodal symmetry is assumed. Therefore, each charge must not only be unique, but if there is a charge at +x, there cannot be a charge at -x. These are treated as the same location and because the distance between the two charges would be zero, the result would be unstable.
Reimplementation of disperse_charges making use of scipy.optimize.fmin_slsqp.
Points on a unit sphere.
Number of iterations to run.
Tolerance for the optimization.
Distributed points on a unit sphere.
Checks the Euler characteristic of a sphere.

If \(f\) = number of faces, \(e\) = number of edges and \(v\) = number of vertices, the Euler formula says \(f - e + v = 2\) for a mesh on a sphere. More generally, checks whether \(f - e + v == \chi\), where \(\chi\) is the Euler characteristic of the mesh.

- Open chain (track) has \(\chi = 1\)
- Closed chain (loop) has \(\chi = 0\)
- Disk has \(\chi = 1\)
- Sphere has \(\chi = 2\)
- HemiSphere has \(\chi = 1\)

Parameters
----------
sphere : Sphere
    A Sphere instance with vertices, edges and faces attributes.
chi : int, optional
    The Euler characteristic of the mesh to be checked.

Returns
-------
check : bool
    True if the mesh has Euler characteristic \(\chi\).

Examples
--------
>>> euler_characteristic_check(unit_octahedron)
True
>>> hemisphere = HemiSphere.from_sphere(unit_icosahedron)
>>> euler_characteristic_check(hemisphere, chi=1)
True
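For a concrete check: the unit octahedron has 6 vertices, 12 edges and 8 faces, so \(f - e + v = 8 - 12 + 6 = 2\). Counting edges from a face list takes only a few lines (an illustrative sketch, not dipy's implementation):

```python
def euler_check(vertices, faces, chi=2):
    """True if f - e + v equals chi; edges are derived from the faces."""
    edges = set()
    for x, y, z in faces:
        for a, b in ((x, y), (y, z), (z, x)):
            edges.add((min(a, b), max(a, b)))   # undirected edge, deduplicated
    return len(faces) - len(edges) + len(vertices) == chi

# octahedron: one vertex on each coordinate half-axis, 8 triangular faces
verts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
         (0, 2, 5), (2, 1, 5), (1, 3, 5), (3, 0, 5)]
ok = euler_check(verts, faces)
```

Each of the 24 face-edge slots is shared by exactly two faces, leaving 12 unique edges.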
ndarray(shape, dtype=float, buffer=None, offset=0, strides=None, order=None)
An array object represents a multidimensional, homogeneous array of fixedsize items. An associated datatype object describes the format of each element in the array (its byteorder, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.)
Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a lowlevel method (ndarray(…)) for instantiating an array.
For more information, refer to the numpy module and examine the methods and attributes of an array.
(for the __new__ method; see Notes below)
Shape of created array.
Any object that can be interpreted as a numpy data type.
Used to fill the array with data.
Offset of array data in buffer.
Strides of data in memory.
Rowmajor (Cstyle) or columnmajor (Fortranstyle) order.
Transpose of the array.
The array’s elements, in memory.
Describes the format of the elements in the array.
Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc.
Flattened version of the array as an iterator. The iterator allows assignments, e.g., x.flat = 3 (see ndarray.flat for assignment examples; TODO).
Imaginary part of the array.
Real part of the array.
Number of elements in the array.
The memory use of each array element in bytes.
The total number of bytes required to store the array data, i.e., itemsize * size.
The array’s number of dimensions.
Shape of the array.
The step-size required to move from one element to the next in memory. For example, a contiguous (3, 4) array of type int16 in C-order has strides (8, 2). This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row to row, one needs to jump 8 bytes at a time (2 * 4).
Class containing properties of the array needed for interaction with ctypes.
If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored.
array : Construct an array.
zeros : Create an array, each element of which is zero.
empty : Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”).
dtype : Create a data-type.
numpy.typing.NDArray : An ndarray alias generic w.r.t. its dtype.type <numpy.dtype.type>.
There are two modes of creating an array using __new__:
If buffer is None, then only shape, dtype, and order are used.
If buffer is an object exposing the buffer interface, then all keywords are interpreted.
No __init__ method is needed because the array is fully initialized after the __new__ method.
These examples illustrate the lowlevel ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray.
First mode, buffer is None:
>>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
[ nan, 2.5e323]])
Second mode:
>>> np.ndarray((2,), buffer=np.array([1,2,3]),
... offset=np.int_().itemsize,
... dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])
strides=None, order=None)
An array object represents a multidimensional, homogeneous array of fixedsize items. An associated datatype object describes the format of each element in the array (its byteorder, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.)
Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a lowlevel method (ndarray(…)) for instantiating an array.
For more information, refer to the numpy module and examine the methods and attributes of an array.
(for the __new__ method; see Notes below)
Shape of created array.
Any object that can be interpreted as a numpy data type.
Used to fill the array with data.
Offset of array data in buffer.
Strides of data in memory.
Rowmajor (Cstyle) or columnmajor (Fortranstyle) order.
Transpose of the array.
The array’s elements, in memory.
Describes the format of the elements in the array.
Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc.
Flattened version of the array as an iterator. The iterator
allows assignments, e.g., x.flat = 3
(See ndarray.flat for
assignment examples; TODO).
Imaginary part of the array.
Real part of the array.
Number of elements in the array.
The memory use of each array element in bytes.
The total number of bytes required to store the array data,
i.e., itemsize * size
.
The array’s number of dimensions.
Shape of the array.
The stepsize required to move from one element to the next in
memory. For example, a contiguous (3, 4)
array of type
int16
in Corder has strides (8, 2)
. This implies that
to move from element to element in memory requires jumps of 2 bytes.
To move from rowtorow, one needs to jump 8 bytes at a time
(2 * 4
).
Class containing properties of the array needed for interaction with ctypes.
If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored.
array : Construct an array. zeros : Create an array, each element of which is zero. empty : Create an array, but leave its allocated memory unchanged (i.e.,
it contains “garbage”).
dtype : Create a datatype. numpy.typing.NDArray : An ndarray alias generic
w.r.t. its dtype.type <numpy.dtype.type>.
There are two modes of creating an array using __new__
:
If buffer is None, then only shape, dtype, and order are used.
If buffer is an object exposing the buffer interface, then all keywords are interpreted.
No __init__
method is needed because the array is fully initialized
after the __new__
method.
These examples illustrate the lowlevel ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray.
First mode, buffer is None:
>>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
[ nan, 2.5e323]])
Second mode:
>>> np.ndarray((2,), buffer=np.array([1,2,3]),
... offset=np.int_().itemsize,
... dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])
strides=None, order=None)
An array object represents a multidimensional, homogeneous array of fixedsize items. An associated datatype object describes the format of each element in the array (its byteorder, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.)
Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a lowlevel method (ndarray(…)) for instantiating an array.
For more information, refer to the numpy module and examine the methods and attributes of an array.
(for the __new__ method; see Notes below)
Shape of created array.
Any object that can be interpreted as a numpy data type.
Used to fill the array with data.
Offset of array data in buffer.
Strides of data in memory.
Rowmajor (Cstyle) or columnmajor (Fortranstyle) order.
Transpose of the array.
The array’s elements, in memory.
Describes the format of the elements in the array.
Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc.
Flattened version of the array as an iterator. The iterator
allows assignments, e.g., x.flat = 3
(See ndarray.flat for
assignment examples; TODO).
Imaginary part of the array.
Real part of the array.
Number of elements in the array.
The memory use of each array element in bytes.
The total number of bytes required to store the array data, i.e., itemsize * size.
The array’s number of dimensions.
Shape of the array.
The step-size required to move from one element to the next in memory. For example, a contiguous (3, 4) array of type int16 in C-order has strides (8, 2). This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row to row, one needs to jump 8 bytes at a time (2 * 4).
Class containing properties of the array needed for interaction with ctypes.
If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored.
array : Construct an array.
zeros : Create an array, each element of which is zero.
empty : Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”).
dtype : Create a data-type.
numpy.typing.NDArray : An ndarray alias generic w.r.t. its dtype.type <numpy.dtype.type>.
There are two modes of creating an array using __new__:
If buffer is None, then only shape, dtype, and order are used.
If buffer is an object exposing the buffer interface, then all keywords are interpreted.
No __init__ method is needed because the array is fully initialized after the __new__ method.
These examples illustrate the low-level ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray.
First mode, buffer is None:
>>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
[ nan, 2.5e-323]])
Second mode:
>>> np.ndarray((2,), buffer=np.array([1,2,3]),
... offset=np.int_().itemsize,
... dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])
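The itemsize, nbytes, and strides relationships described above can be checked directly with plain NumPy:

```python
import numpy as np

# A contiguous (3, 4) int16 array in C order: each element is 2 bytes,
# each row holds 4 elements, so strides are (4 * 2, 2) = (8, 2).
x = np.zeros((3, 4), dtype=np.int16)
print(x.strides)   # (8, 2)
print(x.itemsize)  # 2
print(x.nbytes)    # itemsize * size = 2 * 12 = 24
```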
Points on the unit sphere.
The sphere can be constructed using one of three conventions:
Sphere(x, y, z)
Sphere(xyz=xyz)
Sphere(theta=theta, phi=phi)
Vertices as xyz coordinates.
Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.
Vertices as xyz coordinates.
Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.
Edges between vertices. If unspecified, the edges are derived from the faces.
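To make the theta/phi convention concrete (theta the inclination from the +z axis, phi the azimuth), here is a minimal NumPy sketch of the mapping the three constructor conventions share; the function names are illustrative, though dipy.core.geometry provides equivalents:

```python
import numpy as np

def sphere2cart(theta, phi):
    # theta: inclination from +z, phi: azimuth in the x-y plane,
    # matching the Sphere(theta=theta, phi=phi) convention
    return np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)

def cart2sphere(x, y, z):
    # inverse mapping for unit vectors
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    phi = np.arctan2(y, x)
    return theta, phi

x, y, z = sphere2cart(0.0, 0.0)  # north pole -> (0.0, 0.0, 1.0)
```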
Points on the unit sphere.
A HemiSphere is similar to a Sphere but takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.
The HemiSphere can be constructed using one of three conventions:
HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
Vertices as xyz coordinates.
Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.
Vertices as xyz coordinates.
Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.
Edges between vertices. If unspecified, the edges are derived from the faces.
Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.
Sphere
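A sketch of the antipodal deduplication described above, using the tol convention (vertices less than tol degrees apart, up to sign, are duplicates); this is illustrative only, not dipy's implementation:

```python
import numpy as np

def unique_hemisphere(xyz, tol_deg=1e-5):
    # Collapse antipodal/near-duplicate unit vectors: v and -v are treated
    # as the same point, as a HemiSphere does.
    cos_tol = np.cos(np.deg2rad(tol_deg))
    kept = []
    for v in xyz:
        # v duplicates k when the unsigned angle between them is < tol,
        # i.e., |v . k| >= cos(tol)
        if all(abs(np.dot(v, k)) < cos_tol for k in kept):
            kept.append(v)
    return np.asarray(kept)

pts = np.array([[1.0, 0, 0], [-1.0, 0, 0], [0, 1.0, 0]])
unique_hemisphere(pts)  # [1,0,0] and [-1,0,0] collapse; two points remain
```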
Random unit vectors from a uniform distribution on the sphere.

Parameters
----------
n : int
    Number of random vectors
coords : {'xyz', 'radians', 'degrees'}
    'xyz' for Cartesian form, 'radians' for spherical form in radians, 'degrees' for spherical form in degrees.

Returns
-------
X : array, shape (n,3) if coords='xyz' or shape (n,2) otherwise
    Uniformly distributed vectors on the unit sphere.

Notes
-----
The uniform distribution on the sphere, parameterized by spherical coordinates \((\theta, \phi)\), should satisfy \(\phi \sim U[0, 2\pi]\), while \(z = \cos(\theta) \sim U[-1, 1]\).

References
----------
.. [1] http://mathworld.wolfram.com/SpherePointPicking.html

Examples
--------
>>> from dipy.core.sphere_stats import random_uniform_on_sphere
>>> X = random_uniform_on_sphere(4, 'radians')
>>> X.shape == (4, 2)
True
>>> X = random_uniform_on_sphere(4, 'xyz')
>>> X.shape == (4, 3)
True
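The sampling scheme in the Notes (phi ~ U[0, 2π], z = cos(theta) ~ U[-1, 1]) can be sketched in plain NumPy; only the Cartesian branch is shown, and the function name is illustrative:

```python
import numpy as np

def random_uniform_on_sphere_xyz(n, seed=None):
    # phi uniform on [0, 2*pi), z = cos(theta) uniform on [-1, 1]
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    z = rng.uniform(-1.0, 1.0, n)
    r = np.sqrt(1.0 - z * z)  # sin(theta), the radius in the x-y plane
    return np.column_stack((r * np.cos(phi), r * np.sin(phi), z))

X = random_uniform_on_sphere_xyz(4, seed=0)
X.shape  # (4, 3); every row has unit norm
```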
Principal direction and confidence ellipse.

Implements equations in section 6.3.1(ii) of Fisher, Lewis and Embleton, supplemented by equations in section 3.2.5.

Parameters
----------
points : array_like (N,3)
    array of points on the sphere of radius 1 in \(\mathbb{R}^3\)
alpha : real or None
    1 minus the coverage for the confidence ellipsoid, e.g. 0.05 for 95% coverage.

Returns
-------
centre : vector (3,)
    centre of ellipsoid
b1 : vector (2,)
    lengths of semi-axes of ellipsoid
Computes the mean cosine distance of the best match between points of two sets of vectors S and T (angular similarity)
First set of vectors.
Second set of vectors.
Maximum mean cosine distance.
>>> from dipy.core.sphere_stats import compare_orientation_sets
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> compare_orientation_sets(S,T)
1.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,0,1]])
>>> compare_orientation_sets(S,T)
1.0
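A minimal NumPy sketch of the best-match idea, consistent with the examples above (the smaller set is matched against the larger, so the result is symmetric in S and T; this is an illustrative reimplementation, not dipy's exact code):

```python
import numpy as np

def mean_best_match_cosine(S, T):
    # Mean absolute cosine of the best match for each unit vector of the
    # smaller set against the larger set.
    if len(S) > len(T):
        S, T = T, S
    cosines = np.abs(S @ T.T)          # |S| x |T| table of |cos(angle)|
    return float(np.mean(np.max(cosines, axis=1)))

S = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
T = np.array([[1.0, 0, 0], [0, 0, 1.0]])
mean_best_match_cosine(S, T)  # 1.0, matching the doctest above
```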
Computes the cosine distance of the best match between points of two sets of vectors S and T
S : array, shape (m,d)
T : array, shape (n,d)
max_cosine_distance : float
>>> import numpy as np
>>> from dipy.core.sphere_stats import angular_similarity
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> angular_similarity(S,T)
3.0
>>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,np.sqrt(2)/2.,np.sqrt(2)/2.],[0,0,1]])
>>> angular_similarity(S,T)
2.7071067811865475
>>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> T=np.array([[1,0,0]])
>>> angular_similarity(S,T)
1.0
>>> S=np.array([[0,1,0],[1,0,0]])
>>> T=np.array([[0,0,1]])
>>> angular_similarity(S,T)
0.0
>>> S=np.array([[0,1,0],[1,0,0]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
Now we use print to reduce the precision of the printed output (so the doctests don't detect unimportant differences):
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
>>> S=np.array([[0,1,0]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
>>> S=np.array([[0,1,0],[0,0,1]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
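The doctest values above are consistent with summing (rather than averaging) the best-match absolute cosines over the smaller set; a hypothetical sketch, not dipy's exact implementation:

```python
import numpy as np

def angular_similarity_sketch(S, T):
    # Sum of best-match absolute cosines, taken over the smaller set.
    if len(S) > len(T):
        S, T = T, S
    return float(np.sum(np.max(np.abs(S @ T.T), axis=1)))

S = np.array([[0.0, 1, 0], [1.0, 0, 0], [0.0, 0, 1]])
T = np.array([[1.0, 0, 0], [0, np.sqrt(2) / 2.0, np.sqrt(2) / 2.0], [0.0, 0, 1]])
angular_similarity_sketch(S, T)  # ~2.70710678, as in the doctest above
```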
Creates a unit sphere by subdividing a unit octahedron.

Starts with a unit octahedron and subdivides the faces, projecting the resulting points onto the surface of a unit sphere.

Parameters
----------
recursion_level : int
    Level of subdivision; recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have \(4^{recursion\_level}+2\) vertices.

Returns
-------
Sphere :
    The unit sphere.

See Also
--------
create_unit_hemisphere, Sphere
Creates a unit sphere by subdividing a unit octahedron, returns half the sphere.

Parameters
----------
recursion_level : int
    Level of subdivision; recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have \((4^{recursion\_level}+2)/2\) vertices.

Returns
-------
HemiSphere :
    Half of a unit sphere.

See Also
--------
create_unit_sphere, Sphere, HemiSphere
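The subdivision scheme described above (split each triangular face into four, project edge midpoints onto the sphere) can be sketched as follows; this is an illustrative reimplementation, not dipy's code:

```python
import numpy as np

def subdivide_octahedron(recursion_level=2):
    # Level 1 is the octahedron itself; level n yields 4**n + 2 vertices.
    verts = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
             (0.0, -1.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
    faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
             (0, 2, 5), (2, 1, 5), (1, 3, 5), (3, 0, 5)]
    for _ in range(recursion_level - 1):
        midpoint = {}                      # edge (i, j) -> midpoint index

        def mid(i, j):
            key = (min(i, j), max(i, j))
            if key not in midpoint:
                m = np.add(verts[i], verts[j])
                verts.append(tuple(m / np.linalg.norm(m)))  # project to sphere
                midpoint[key] = len(verts) - 1
            return midpoint[key]

        new_faces = []
        for a, b, c in faces:              # 1 triangle -> 4 triangles
            ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        faces = new_faces
    return np.array(verts), np.array(faces)

v, f = subdivide_octahedron(2)
v.shape  # (18, 3): 4**2 + 2 vertices, all on the unit sphere
```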
3D analysis filter bank (along one dimension only).

x : N1 by N2 by N3 array (Ni are even)
af : analysis filter for the columns
    af[:, 1] - lowpass filter
    af[:, 2] - highpass filter
d : dimension of filtering (d = 1, 2 or 3)

lo : lowpass subbands
hi : highpass subbands
3D Analysis Filter Bank
x : N1 by N2 by N3 array, where
    1) N1, N2, N3 all even
    2) N1 >= 2*len(af1)
    3) N2 >= 2*len(af2)
    4) N3 >= 2*len(af3)
afi : analysis filters for dimension i
    afi[:, 1] - lowpass filter
    afi[:, 2] - highpass filter

lo : lowpass subband
h : highpass subbands, h[d], d = 1..7
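As an illustration of the one-dimensional two-channel analysis step these filter banks are built from (circular filtering followed by downsampling by 2), here is a sketch with Haar filters assumed; column indices are 0-based here, unlike the 1-based af[:, 1]/af[:, 2] labels in the docstrings, and this is not dipy's exact implementation:

```python
import numpy as np

def afb1d(x, af):
    # Two-channel analysis filter bank along a 1D signal of even length N:
    # circularly correlate with each filter column, keep every other sample.
    N = len(x)
    lo_f, hi_f = af[:, 0], af[:, 1]   # lowpass, highpass columns
    idx = lambda n, L: np.take(x, np.arange(n, n + L), mode='wrap')
    lo = np.array([np.dot(lo_f, idx(n, len(lo_f))) for n in range(0, N, 2)])
    hi = np.array([np.dot(hi_f, idx(n, len(hi_f))) for n in range(0, N, 2)])
    return lo, hi

# Haar analysis filters: column 0 lowpass, column 1 highpass
haar = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
lo, hi = afb1d(np.arange(8.0), haar)
lo.shape, hi.shape  # each subband has N/2 = 4 samples
```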