core
core.benchmarks
core.benchmarks.bench_sphere
core.geometry
core.gradients
core.graph
core.histeq
core.ndindex
core.onetime
core.optimize
core.profile
core.rng
core.sphere
core.sphere_stats
core.subdivide_octahedron
core.wavelet
Timer
GradientTable
HemiSphere
Graph
OneTimeProperty
ResetMixin
NonNegativeLeastSquares
Optimizer
SKLearnLinearSolver
Profiler
HemiSphere
Sphere
permutations
HemiSphere
core
Core objects

Run tests for module using nose. 
core.benchmarks.bench_sphere
Benchmarks for sphere
Run all benchmarks with:
import dipy.core as dipycore
dipycore.bench()
With pytest, run this benchmark with:
pytest -svv -c bench.ini /path/to/bench_sphere.py
Methods 




core.geometry
Utility functions for algebra etc

Return angles for Cartesian 3D coordinates x, y, and z 

Cartesian distance between pts1 and pts2 

a, b and c are 3-dimensional vectors which are the vertices of a triangle. 

Return 4x4 transformation matrix from sequence of transformations. 

Compose multiple 4x4 affine transformations in one 4x4 matrix 

Return sequence of transformations from transformation matrix. 

Calculate the maximal distance from the center to a corner of a voxel, given an affine 

Return homogeneous rotation matrix from Euler angles and axis sequence. 

Test whether all points on a unit sphere lie in the same hemisphere. 

Lambert Equal Area Projection from cartesian vector to plane 

Lambert Equal Area Projection from polar sphere to plane 
Least squares positive semidefinite tensor estimation 


Return vector divided by its Euclidean (L2) norm 

Computes n evenly spaced perpendicular directions relative to a given vector v 

Rodrigues formula 

Convert spherical coordinates to latitude and longitude. 

Spherical to Cartesian coordinates 

Distance across sphere surface between pts1 and pts2 

rotation matrix from 2 unit vectors 

Cosine of angle between two (sets of) vectors 

Return vector Euclidean (L2) norm 
core.gradients

Diffusion gradient information 

Points on the unit sphere. 

Decorator to create OneTimeProperty attributes. 

Compute trace, anisotropy and asymmetry parameters from b-tensors 

Check whether there are enough different b-values in your gradient table 

Return decorator function for deprecation warning / error. 

Models electrostatic repulsion on the unit sphere 

Generates N bvectors. 

Get indices where the bvalue is bval 

A general function for creating diffusion MR gradients. 

Creates a GradientTable from a bvals array and a bvecs array 
A general function for creating diffusion MR gradients. 


A general function for creating diffusion MR gradients. 

Compute the inverse of a matrix. 

Compute b-tensor from trace, anisotropy and asymmetry parameters 

Compute the polar decomposition. 

Reorient the directions in a GradientTable. 

This function rounds the b-values 

This function gives the unique rounded bvalues of the data 

This function gives the unique rounded bvalues of the data 

Gives the unique bvalues of the data, within a tolerance gap 

rotation matrix from 2 unit vectors 

Return vector Euclidean (L2) norm 

Issue a warning, or maybe ignore it or raise an exception. 
core.histeq

Performs a histogram equalization on an image 
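The core of histogram equalization can be sketched in a few lines of numpy: map each intensity through the image's normalized cumulative histogram. This is an illustrative sketch (the helper name histeq_sketch and the [0, 1] output range are assumptions, not dipy's exact implementation):

```python
import numpy as np

def histeq_sketch(img, n_bins=256):
    # Histogram of the input intensities.
    hist, bin_edges = np.histogram(img.ravel(), bins=n_bins)
    # Cumulative distribution, normalized to [0, 1].
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]
    # Remap every pixel through the CDF (piecewise-linear interpolation),
    # which flattens the intensity distribution.
    out = np.interp(img.ravel(), bin_edges[:-1], cdf)
    return out.reshape(img.shape)
```

The remapping is monotone, so the relative ordering of intensities is preserved while their spacing is equalized.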
core.ndindex

Create a view into the array with the given shape and strides. 

An N-dimensional iterator object to index arrays. 
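numpy ships an analogous iterator, np.ndindex; the usage pattern looks like this (shown with numpy rather than dipy so it runs stand-alone):

```python
import numpy as np

# Iterate over every index tuple of a (2, 3) array, in C (row-major) order.
indices = list(np.ndindex(2, 3))
print(indices)
# -> [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```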
core.onetime
Descriptor support for NIPY.
Copyright (c) 2006-2011, NIPY Developers. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of the NIPY Developers nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Utilities to support special Python descriptors [1,2], in particular the use of a useful pattern for properties we call ‘one time properties’. These are object attributes which are declared as properties, but become regular attributes once they’ve been read the first time. They can thus be evaluated later in the object’s life cycle, but once evaluated they become normal, static attributes with no function call overhead on access or any other constraints.
A special ResetMixin class is provided to add a .reset() method to users who may want to have their objects capable of resetting these computed properties to their ‘untriggered’ state.
[1] HowTo Guide for Descriptors, Raymond Hettinger. http://users.rcn.com/python/download/Descriptor.htm
[2] Python data model, http://docs.python.org/reference/datamodel.html

A descriptor to make special properties that become normal attributes. 
A Mixin class to add a .reset() method to users of OneTimeProperty. 


Decorator to create OneTimeProperty attributes. 
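The pattern described above boils down to a non-data descriptor that computes a value once and then shadows itself in the instance dictionary. A minimal stand-alone sketch (the class name OneTime is illustrative; dipy's actual API is OneTimeProperty / auto_attr):

```python
class OneTime:
    """Compute once on first access, then become a plain attribute."""

    def __init__(self, func):
        self.getter = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        val = self.getter(obj)
        # Writing into the instance dict shadows this (non-data)
        # descriptor, so later reads have no function-call overhead.
        setattr(obj, self.name, val)
        return val


class Demo:
    @OneTime
    def answer(self):
        return 42


d = Demo()
print('answer' in d.__dict__)  # False: not computed yet
print(d.answer)                # 42: computed on first access
print('answer' in d.__dict__)  # True: now a static attribute
```

The shadowing works because a descriptor with only `__get__` (non-data) loses attribute lookup to the instance `__dict__`.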
core.optimize
A unified interface for performing and debugging optimization problems.

A sklearn-like interface to scipy.optimize.nnls 



Provide a sklearn-like uniform interface to algorithms that solve problems of the form: \(y = Ax\) for \(x\) 

Minimization of scalar function of one or more variables. 

Solve y=Xh for h, using gradient descent, with X a sparse matrix. 

The same as np.dot(A, B), except it works even if A or B or both are sparse matrices. 
core.profile
Class for profiling cython code

Profile python/cython files or functions 

Return package-like thing and module setup for package name 
core.rng
Random number generation utilities.

Return a L'Ecuyer random number generator. 

Algorithm AS 183 Appl. 

Wichmann Hill (2006) random number generator. 

Queries the given executable (defaults to the Python interpreter binary) for various architecture information. 

Return the floor of x as an Integral. 
core.sphere

Points on the unit sphere. 

Points on the unit sphere. 

Decorator to create OneTimeProperty attributes. 

Return angles for Cartesian 3D coordinates x, y, and z 

Models electrostatic repulsion on the unit sphere 

Reimplementation of disperse_charges making use of scipy.optimize.fmin_slsqp. 

Checks the Euler characteristic of a sphere 

Triangulate a set of vertices on the sphere. 
Remove vertices that are less than theta degrees from any other vertex 


Spherical to Cartesian coordinates 

Extract all unique edges from given triangular faces. 

Remove duplicate sets. 

Return vector Euclidean (L2) norm 
core.sphere_stats
Statistics on spheres
permutations(iterable[, r]) --> permutations object 


Computes the cosine distance of the best match between points of two sets of vectors S and T 

Computes the mean cosine distance of the best match between points of two sets of vectors S and T (angular similarity) 

Principal direction and confidence ellipse 

Random unit vectors from a uniform distribution on the sphere. 
core.subdivide_octahedron
Create a unit sphere by subdividing all triangles of an octahedron recursively.
The unit sphere has a radius of 1, which also means that all points in this sphere (assumed to have centre at [0, 0, 0]) have an absolute value (modulus) of 1. Another feature of the unit sphere is that the unit normals of this sphere are exactly the same as the vertices.
This recursive method will avoid the common problem of the polar singularity, produced by 2D (lon-lat) parameterization methods.

Points on the unit sphere. 

Creates a unit sphere by subdividing a unit octahedron, returns half the sphere. 

Creates a unit sphere by subdividing a unit octahedron. 
core.wavelet

3D Analysis Filter Bank 

3D Analysis Filter Bank 

3D Circular Shift 

3D Discrete Wavelet Transform 

Inverse 3D Discrete Wavelet Transform 

Function generating inverse of the permutation 

3D Synthesis Filter Bank 

3D Synthesis Filter Bank 
dipy.core.
test
(label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, raise_warnings=None, timer=False)Run tests for module using nose.
Identifies the tests to run. This can be a string to pass to the nosetests executable with the '-A' option, or one of several special values. Special values are:
'fast' - the default - which corresponds to the nosetests -A
option of 'not slow'.
'full' - fast (as above) and slow tests, as in the 'no -A' option to nosetests - this is the same as ''.
None or '' - run all tests.
attribute_identifier - string passed directly to nosetests as '-A'.
Verbosity value for test outputs, in the range 1-10. Default is 1.
List with any extra arguments to pass to nosetests.
If True, run doctests in module. Default is False.
If True, report coverage of NumPy code. Default is False. (This requires the coverage module).
This specifies which warnings to configure as ‘raise’ instead of being shown once during the test execution. Valid strings are:
“develop” : equals (Warning,)
“release” : equals (), do not raise on any warnings.
Timing of individual tests with nose-timer (which needs to be installed). If True, time tests and report on all of them. If an integer (say N), report timing results for the N slowest tests.
Returns the result of running the tests as a nose.result.TextTestResult object.
Notes
Each NumPy module exposes test in its namespace to run all tests for it. For example, to run all tests for numpy.lib:
>>> np.lib.test()
Examples
>>> result = np.lib.test()
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s
OK
>>> result.errors
[]
>>> result.knownfail
[]
dipy.core.geometry.
cart2sphere
(x, y, z)Return angles for Cartesian 3D coordinates x, y, and z
See doc for sphere2cart
for angle conventions and derivation
of the formulae.
\(0\le\theta\mathrm{(theta)}\le\pi\) and \(-\pi\le\phi\mathrm{(phi)}\le\pi\)
x coordinate in Cartesian space
y coordinate in Cartesian space
z coordinate
radius
inclination (polar) angle
azimuth angle
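The conversion follows directly from the stated angle conventions; a stand-alone numpy sketch (cart2sphere_sketch is a hypothetical name that mirrors, but does not call, dipy's cart2sphere):

```python
import numpy as np

def cart2sphere_sketch(x, y, z):
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)   # inclination: 0 <= theta <= pi
    phi = np.arctan2(y, x)     # azimuth:   -pi <= phi <= pi
    return r, theta, phi

# The north pole (0, 0, 1) maps to theta == 0.
r, theta, phi = cart2sphere_sketch(0.0, 0.0, 1.0)
# -> (1.0, 0.0, 0.0)
```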
dipy.core.geometry.
cart_distance
(pts1, pts2)Cartesian distance between pts1 and pts2
If either of pts1 or pts2 is 2D, then we take the first dimension to index points, and the second indexes coordinate. More generally, we take the last dimension to be the coordinate dimension.
pts1 : array-like, where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D).
pts2 : array-like, where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D). It should be possible to broadcast pts1 against pts2.
Returns the Cartesian distances between corresponding points in pts1 and pts2.
See also
sphere_distance
distance between points on sphere surface
Examples
>>> cart_distance([0,0,0], [0,0,3])
3.0
dipy.core.geometry.
circumradius
(a, b, c)a, b and c are 3-dimensional vectors which are the vertices of a triangle. The function returns the circumradius of the triangle, i.e. the radius of the smallest circle that can contain the triangle. In the degenerate case when the 3 points are collinear it returns half the distance between the furthest apart points.
the three vertices of the triangle
the desired circumradius
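For the non-degenerate case, the circumradius can be sketched with the standard identity R = abc / (4 * area). This is an illustration under that assumption, not dipy's implementation (the collinear fallback described above is omitted):

```python
import numpy as np

def circumradius_sketch(a, b, c):
    """Circumradius via R = abc / (4 * area); collinear case not handled."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    la = np.linalg.norm(b - c)   # side length opposite vertex a
    lb = np.linalg.norm(a - c)
    lc = np.linalg.norm(a - b)
    # Triangle area from the cross product of two edge vectors.
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    return la * lb * lc / (4.0 * area)

# Right triangle: the circumradius is half the hypotenuse.
R = circumradius_sketch([0, 0, 0], [1, 0, 0], [0, 1, 0])
# -> sqrt(2) / 2
```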
dipy.core.geometry.
compose_matrix
(scale=None, shear=None, angles=None, translate=None, perspective=None)Return 4x4 transformation matrix from sequence of transformations.
Code modified from the work of Christoph Gohlke link provided here http://www.lfd.uci.edu/~gohlke/code/transformations.py.html
This is the inverse of the decompose_matrix
function.
Scaling factors.
Shear factors for xy, xz, yz axes.
Euler angles about static x, y, z axes.
Translation vector along x, y, z axes.
Perspective partition of matrix.
Examples
>>> import math
>>> import numpy as np
>>> import dipy.core.geometry as gm
>>> scale = np.random.random(3) - 0.5
>>> shear = np.random.random(3) - 0.5
>>> angles = (np.random.random(3) - 0.5) * (2*math.pi)
>>> trans = np.random.random(3) - 0.5
>>> persp = np.random.random(4) - 0.5
>>> M0 = gm.compose_matrix(scale, shear, angles, trans, persp)
dipy.core.geometry.
decompose_matrix
(matrix)Return sequence of transformations from transformation matrix.
Code modified from the excellent work of Christoph Gohlke link provided here: http://www.lfd.uci.edu/~gohlke/code/transformations.py.html
Nondegenerative homogeneous transformation matrix
Three scaling factors.
Shear factors for xy, xz, yz axes.
Euler angles about static x, y, z axes.
Translation vector along x, y, z axes.
Perspective partition of matrix.
If matrix is of wrong type or degenerative.
Examples
>>> import numpy as np
>>> T0=np.diag([2,1,1,1])
>>> scale, shear, angles, trans, persp = decompose_matrix(T0)
dipy.core.geometry.
dist_to_corner
(affine)Calculate the maximal distance from the center to a corner of a voxel, given an affine
The spatial transformation from the measurement to the scanner space.
The maximal distance to the corner of a voxel, given voxel size encoded in the affine.
dipy.core.geometry.
euler_matrix
(ai, aj, ak, axes='sxyz')Return homogeneous rotation matrix from Euler angles and axis sequence.
Code modified from the work of Christoph Gohlke link provided here http://www.lfd.uci.edu/~gohlke/code/transformations.py.html
Examples
>>> import numpy
>>> R = euler_matrix(1, 2, 3, 'syxz')
>>> numpy.allclose(numpy.sum(R[0]), 1.34786452)
True
>>> R = euler_matrix(1, 2, 3, (0, 1, 0, 1))
>>> numpy.allclose(numpy.sum(R[0]), 0.383436184)
True
>>> ai, aj, ak = (4.0*math.pi) * (numpy.random.random(3) - 0.5)
>>> for axes in _AXES2TUPLE.keys():
... _ = euler_matrix(ai, aj, ak, axes)
>>> for axes in _TUPLE2AXES.keys():
... _ = euler_matrix(ai, aj, ak, axes)
dipy.core.geometry.
is_hemispherical
(vecs)Test whether all points on a unit sphere lie in the same hemisphere.
2D numpy array with shape (N, 3) where N is the number of points. All points must lie on the unit sphere.
If True, one can find a hemisphere that contains all the points. If False, then the points do not lie in any hemisphere
If is_hemi == True, then pole is the “central” pole of the input vectors. Otherwise, pole is the zero vector.
References
https://rstudio-pubs-static.s3.amazonaws.com/27121_a22e51b47c544980bad594d5e0bb2d04.html
dipy.core.geometry.
lambert_equal_area_projection_cart
(x, y, z)Lambert Equal Area Projection from cartesian vector to plane
Return positions in \((y_1,y_2)\) plane corresponding to the directions of the vectors with cartesian coordinates xyz under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161).
The Lambert EAP maps the upper hemisphere to the planar disc of radius 1 and the lower hemisphere to the planar annulus between radii 1 and 2, and vice versa.
See doc for sphere2cart
for angle conventions
x coordinate in Cartesian space
y coordinate in Cartesian space
z coordinate
planar coordinates of points following mapping by Lambert’s EAP.
dipy.core.geometry.
lambert_equal_area_projection_polar
(theta, phi)Lambert Equal Area Projection from polar sphere to plane
Return positions in (y1,y2) plane corresponding to the points with polar coordinates (theta, phi) on the unit sphere, under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161).
See doc for sphere2cart
for angle conventions
\(0 \le \theta \le \pi\) and \(0 \le \phi \le 2 \pi\)
\((y_1,y_2) \le 2\)
The Lambert EAP maps the upper hemisphere to the planar disc of radius 1 and the lower hemisphere to the planar annulus between radii 1 and 2, and vice versa.
theta spherical coordinates
phi spherical coordinates
planar coordinates of points following mapping by Lambert’s EAP.
dipy.core.geometry.
nearest_pos_semi_def
(B)Least squares positive semidefinite tensor estimation
B matrix - symmetric. We do not check the symmetry. 
Estimated nearest positive semidefinite array to matrix B.
References
Niethammer M, San Jose Estepar R, Bouix S, Shenton M, Westin C-F. On diffusion tensor estimation. Conf Proc IEEE Eng Med Biol Soc. 2006;1:2622-5. PubMed PMID: 17946125; PubMed Central PMCID: PMC2791793.
Examples
>>> B = np.diag([1, 1, -1])
>>> nearest_pos_semi_def(B)
array([[ 0.75,  0.  ,  0.  ],
       [ 0.  ,  0.75,  0.  ],
       [ 0.  ,  0.  ,  0.  ]])
dipy.core.geometry.
normalized_vector
(vec, axis=-1)Return vector divided by its Euclidean (L2) norm
See unit vector and Euclidean norm
vector divided by L2 norm
Examples
>>> vec = [1, 2, 3]
>>> l2n = np.sqrt(np.dot(vec, vec))
>>> nvec = normalized_vector(vec)
>>> np.allclose(np.array(vec) / l2n, nvec)
True
>>> vec = np.array([[1, 2, 3]])
>>> vec.shape == (1, 3)
True
>>> normalized_vector(vec).shape == (1, 3)
True
dipy.core.geometry.
perpendicular_directions
(v, num=30, half=False)Computes n evenly spaced perpendicular directions relative to a given vector v
Array containing the three cartesian coordinates of vector v
Number of perpendicular directions to generate
If half is True, perpendicular directions are sampled on half of the unit circumference perpendicular to v, otherwise perpendicular directions are sampled on the full circumference. Default is False. 
array of vectors perpendicular to v
Notes
Perpendicular directions are estimated using the following two step procedure:
1) The perpendicular directions are first sampled in a unit circumference parallel to the plane normal to the x-axis.
2) Samples are then rotated and aligned to the plane normal to vector v. The rotation matrix for this rotation is constructed as a reference frame basis whose axes are the following:
- The first axis is vector v.
- The second axis is defined as the normalized vector given by the cross product between vector v and the unit vector aligned to the x-axis.
- The third axis is defined as the cross product between the previously computed vector and vector v.
Following these two steps, the coordinates of the final perpendicular directions are given as:
\(\left[ \sin(a_{i})\sqrt{{v_{y}}^{2}+{v_{z}}^{2}} \; , \; \frac{-v_{x}v_{y}\sin(a_{i})+v_{z}\cos(a_{i})}{\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \; , \; \frac{-\left(v_{x}v_{z}\sin(a_{i})+v_{y}\cos(a_{i})\right)}{\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \right]\)
This procedure has a singularity when vector v is aligned to the x-axis. To solve this singularity, perpendicular directions in step 1 are instead defined in the plane normal to the y-axis, and the second axis of the rotated frame of reference is computed as the normalized vector given by the cross product between vector v and the unit vector aligned to the y-axis. Following this, the coordinates of the perpendicular directions are given as:
\(\left[ \frac{-\left(v_{x}v_{y}\sin(a_{i})+v_{z}\cos(a_{i})\right)}{\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \; , \; \sin(a_{i})\sqrt{{v_{x}}^{2}+{v_{z}}^{2}} \; , \; \frac{-v_{y}v_{z}\sin(a_{i})+v_{x}\cos(a_{i})}{\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \right]\)
For more details on this calculation, see `here <http://gsoc2015dipydki.blogspot.it/2015/07/rnh-post-8-computing-perpendicular.html>`_.
dipy.core.geometry.
rodrigues_axis_rotation
(r, theta)Rodrigues formula
Rotation matrix for rotation around axis r for angle theta.
The rotation matrix is given by the Rodrigues formula:

R = Id + sin(theta)*Sn + (1-cos(theta))*Sn^2

with:

          0  -nz   ny
    Sn =  nz   0  -nx
         -ny  nx   0

where n = r / ||r||. In case the angle theta is very small, the above formula may lead to numerical instabilities. We instead use a Taylor expansion around theta=0:

R = I + sin(theta)/theta * Sr + (1-cos(theta))/theta^2 * Sr^2

leading to:

R = I + (1-theta^2/6)*Sr + (1/2-theta^2/24)*Sr^2
Examples
>>> import numpy as np
>>> from dipy.core.geometry import rodrigues_axis_rotation
>>> v=np.array([0,0,1])
>>> u=np.array([1,0,0])
>>> R=rodrigues_axis_rotation(v,40)
>>> ur=np.dot(R,u)
>>> np.round(np.rad2deg(np.arccos(np.dot(ur,u))))
40.0
dipy.core.geometry.
sphere2cart
(r, theta, phi)Spherical to Cartesian coordinates
This is the standard physics convention where theta is the inclination (polar) angle, and phi is the azimuth angle.
Imagine a sphere with center (0,0,0). Orient it with the z axis running south-north, the y axis running west-east and the x axis from posterior to anterior. theta (the inclination angle) is the angle to rotate from the z-axis (the zenith) around the y-axis, towards the x axis. Thus the rotation is counter-clockwise from the point of view of positive y. phi (azimuth) gives the angle of rotation around the z-axis towards the y axis. The rotation is counter-clockwise from the point of view of positive z.
Equivalently, given a point P on the sphere, with coordinates x, y, z, theta is the angle between P and the z-axis, and phi is the angle between the projection of P onto the XY plane and the X axis.
Geographical nomenclature designates theta as ‘colatitude’, and phi as ‘longitude’
radius
inclination or polar angle
azimuth angle
x coordinate(s) in Cartesian space
y coordinate(s) in Cartesian space
z coordinate
Notes
For an excellent discussion of the many different conventions possible, see the Wikipedia page on spherical coordinate systems; here we use the physics convention, as used on that page.
Derivations of the formulae are simple. Consider a vector x, y, z of length r (norm of x, y, z). The inclination angle (theta) can be found from: cos(theta) == z / r -> z == r * cos(theta). This gives the hypotenuse of the projection onto the XY plane, which we will call Q: Q == r*sin(theta). Now x / Q == cos(phi) -> x == r * sin(theta) * cos(phi), and so on.
We have deliberately named this function sphere2cart rather than sph2cart to distinguish it from the Matlab function of that name, because the Matlab function uses an unusual convention for the angles that we did not want to replicate. The Matlab function is trivial to implement with the formulae given in the Matlab help.
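The derivation above reduces to three lines of numpy. A stand-alone sketch under the stated physics convention (sphere2cart_sketch is a hypothetical name, not the dipy call itself):

```python
import numpy as np

def sphere2cart_sketch(r, theta, phi):
    # theta: inclination from the z-axis; phi: azimuth from the x-axis.
    sin_theta = np.sin(theta)
    x = r * sin_theta * np.cos(phi)
    y = r * sin_theta * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

# theta = pi/2, phi = 0: a point on the equator along the x axis.
x, y, z = sphere2cart_sketch(1.0, np.pi / 2, 0.0)
# -> (1.0, 0.0, 0.0) up to floating point
```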
dipy.core.geometry.
sphere_distance
(pts1, pts2, radius=None, check_radius=True)Distance across sphere surface between pts1 and pts2
pts1 : array-like, where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D).
pts2 : array-like, where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D). It should be possible to broadcast pts1 against pts2.
Radius of sphere. Default is to work out radius from mean of the length of each point vector
If True, check whether the points are on the sphere surface, i.e. check whether the vector lengths in pts1 and pts2 are close to radius. Default is True. 
Distances between corresponding points in pts1 and pts2 across the spherical surface, i.e. the great circle distance
See also
cart_distance
cartesian distance between points
vector_cosine
cosine of angle between vectors
Examples
>>> print('%.4f' % sphere_distance([0,1],[1,0]))
1.5708
>>> print('%.4f' % sphere_distance([0,3],[3,0]))
4.7124
dipy.core.geometry.
vec2vec_rotmat
(u, v)rotation matrix from 2 unit vectors
u, v being unit 3d vectors, return a 3x3 rotation matrix R that aligns u to v.
In general there are many rotations that will map u to v. If S is any rotation using v as an axis, then S.R will also map u to v, since (S.R)u = S(Ru) = Sv = v. The rotation R returned by vec2vec_rotmat leaves fixed the perpendicular to the plane spanned by u and v.
The transpose of R will align v to u.
Examples
>>> import numpy as np
>>> from dipy.core.geometry import vec2vec_rotmat
>>> u=np.array([1,0,0])
>>> v=np.array([0,1,0])
>>> R=vec2vec_rotmat(u,v)
>>> np.dot(R,u)
array([ 0., 1., 0.])
>>> np.dot(R.T,v)
array([ 1., 0., 0.])
dipy.core.geometry.
vector_cosine
(vecs1, vecs2)Cosine of angle between two (sets of) vectors
The cosine of the angle between two vectors v1 and v2 is given by the inner product of v1 and v2 divided by the product of the vector lengths:

v_cos = np.inner(v1, v2) / (np.sqrt(np.sum(v1**2)) * np.sqrt(np.sum(v2**2)))
N vectors (as rows) or single vector. Vectors have R elements.
N vectors (as rows) or single vector. Vectors have R elements. It should be possible to broadcast vecs1 against vecs2
Vector cosines. To get the angles you will need np.arccos
Notes
The vector cosine will be the same as the correlation only if all the input vectors have zero mean.
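The formula above in runnable form, as a plain-numpy sketch for single vectors (dipy's version also broadcasts over sets of vectors):

```python
import numpy as np

def vector_cosine_sketch(v1, v2):
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    # Inner product divided by the product of the vector lengths.
    return np.inner(v1, v2) / (np.sqrt(np.sum(v1**2)) * np.sqrt(np.sum(v2**2)))

vector_cosine_sketch([1, 0, 0], [0, 1, 0])  # orthogonal -> 0.0
vector_cosine_sketch([1, 2, 3], [2, 4, 6])  # parallel   -> 1.0
```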
dipy.core.geometry.
vector_norm
(vec, axis=-1, keepdims=False)Return vector Euclidean (L2) norm
See unit vector and Euclidean norm
Vectors to norm.
Axis over which to norm. By default norm over last axis. If axis is None, vec is flattened then normed.
If True, the output will have the same number of dimensions as vec, with shape 1 on axis.
Euclidean norms of vectors.
Examples
>>> import numpy as np
>>> vec = [[8, 15, 0], [0, 36, 77]]
>>> vector_norm(vec)
array([ 17., 85.])
>>> vector_norm(vec, keepdims=True)
array([[ 17.],
[ 85.]])
>>> vector_norm(vec, axis=0)
array([ 8., 39., 77.])
GradientTable
dipy.core.gradients.
GradientTable
(gradients, big_delta=None, small_delta=None, b0_threshold=50, btens=None)Bases: object
Diffusion gradient information
Diffusion gradients. The direction of each of these vectors corresponds to the bvector, and the length corresponds to the bvalue.
Gradients with bvalue less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.
See also
Notes
The GradientTable object is immutable. Do NOT assign attributes. If you have your gradient table in a bval & bvec format, we recommend using the factory function gradient_table
diffusion gradients
The bvalue, or magnitude, of each gradient direction.
The qvalue for each gradient direction. Needs big and small delta.
The direction, represented as a unit vector, of each gradient.
Boolean array indicating which gradients have no diffusion weighting, ie bvalue is close to 0.
Gradients with bvalue less than or equal to b0_threshold are considered to not have diffusion weighting.
The btensor of each gradient direction.
Methods
- b0s_mask
- bvals
- bvecs
- gradient_strength
- qvals
- tau
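The relationship between the stored gradients and the derived bvals, bvecs and b0s_mask attributes can be sketched in plain numpy (illustrative only; the real object computes these lazily as one-time properties):

```python
import numpy as np

gradients = np.array([[0.0, 0.0, 0.0],      # a b0 volume
                      [1000.0, 0.0, 0.0],
                      [0.0, 0.0, 2000.0]])
b0_threshold = 50

# The length of each gradient vector is its b-value.
bvals = np.sqrt((gradients ** 2).sum(axis=-1))
# Volumes at or below the threshold carry no diffusion weighting.
b0s_mask = bvals <= b0_threshold
# Unit directions for the diffusion-weighted volumes; b0 rows stay zero.
bvecs = np.zeros_like(gradients)
dwi = ~b0s_mask
bvecs[dwi] = gradients[dwi] / bvals[dwi][:, None]
```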
HemiSphere
dipy.core.gradients.
HemiSphere
(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)Bases: dipy.core.sphere.Sphere
Points on the unit sphere.
A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.
The HemiSphere can be constructed using one of three conventions:
HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
Vertices as xyz coordinates.
Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.
Vertices as xyz coordinates.
Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.
Edges between vertices. If unspecified, the edges are derived from the faces.
Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.
See also
Sphere
Methods
- find_closest: Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry.
- from_sphere: Create instance from a Sphere.
- mirror: Create a full Sphere from a HemiSphere.
- subdivide: Create a more subdivided HemiSphere.
- edges
- faces
- vertices
__init__
(self, x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)Create a HemiSphere from points
dipy.core.gradients.
auto_attr
(func)Decorator to create OneTimeProperty attributes.
The method that will be called the first time to compute a value. Afterwards, the method’s name will be a standard attribute holding the value of this computation.
Examples
>>> class MagicProp(object):
... @auto_attr
... def a(self):
... return 99
...
>>> x = MagicProp()
>>> 'a' in x.__dict__
False
>>> x.a
99
>>> 'a' in x.__dict__
True
dipy.core.gradients.
btens_to_params
(btens, ztol=1e-10)Compute trace, anisotropy and asymmetry parameters from b-tensors
input b-tensor, or b-tensors, where N = number of b-tensors
Any parameters smaller than this value are considered to be 0
bvalue(s) (trace(s))
normalized tensor anisotropy(s)
tensor asymmetry(s) 
Notes
This function can be used to get b-tensor parameters directly from the GradientTable btens attribute.
Examples
>>> lte = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
>>> bval, bdelta, b_eta = btens_to_params(lte)
>>> print("bval={}; bdelta={}; b_eta={}".format(bval, bdelta, b_eta))
bval=[ 1.]; bdelta=[ 1.]; b_eta=[ 0.]
dipy.core.gradients.
check_multi_b
(gtab, n_bvals, non_zero=True, bmag=None)Check whether there are enough different b-values in your gradient table
The number of different bvalues you are checking for.
Whether to check only nonzero bvalues. In this case, we will require at least n_bvals nonzero bvalues (where nonzero is defined depending on the gtab object’s b0_threshold attribute)
The order of magnitude of the b-values used. The function will normalize the b-values relative to \(10^{bmag}\). Default: derive this value from the maximal b-value provided: \(bmag = \log_{10}(max(bvals)) - 1\).
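The normalization described above can be sketched as follows (count_shells is a hypothetical helper illustrating the rounding idea, not dipy's exact implementation):

```python
import numpy as np

def count_shells(bvals, bmag=None, b0_threshold=50):
    """Count distinct nonzero b-values after rounding to 10**bmag."""
    bvals = np.asarray(bvals, dtype=float)
    nonzero = bvals[bvals > b0_threshold]
    if bmag is None:
        # One order of magnitude below the largest b-value.
        bmag = int(np.log10(nonzero.max())) - 1
    scale = 10.0 ** bmag
    # Round each b-value to the nearest multiple of 10**bmag.
    rounded = scale * np.round(nonzero / scale)
    return len(np.unique(rounded))

count_shells([0, 995, 1000, 2000, 2005])
# bmag defaults to 2, so values are compared at a granularity of 100
# -> 2 distinct shells (b ~ 1000 and b ~ 2000)
```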
dipy.core.gradients.
deprecate_with_version
(message, since='', until='', version_comparator=<function cmp_pkg_version>, warn_class=<class 'DeprecationWarning'>, error_class=<class 'dipy.utils.deprecator.ExpiredDeprecationError'>)Return decorator function for deprecation warning / error.
The decorated function / method will:
Raise the given warning_class warning when the function / method gets called, up to (and including) version until (if specified);
Raise the given error_class error when the function / method gets called, when the package version is greater than version until (if specified).
Message explaining deprecation, giving possible alternatives.
Released version at which object was first deprecated.
Last released version at which this function will still raise a deprecation warning. Versions higher than this will raise an error.
Callable accepting a string as argument, and returning 1 if the string represents a higher version than encoded in the version_comparator, 0 if the version is equal, and -1 if the version is lower. For example, the version_comparator may compare the input version string to the current package version string.
Class of warning to generate for deprecation.
Class of error to generate when version_comparator returns 1 for a given argument of until.
Function returning a decorator.
dipy.core.gradients.
disperse_charges
(hemi, iters, const=0.2)Models electrostatic repulsion on the unit sphere
Places charges on a sphere and simulates the repulsive forces felt by each one. Allows the charges to move for some number of iterations and returns their final location as well as the total potential of the system at each step.
Points on a unit sphere.
Number of iterations to run.
Using a smaller const could provide a more accurate result, but will need more iterations to converge.
Distributed points on a unit sphere.
The electrostatic potential at each iteration. This can be useful to check if the repulsion converged to a minimum.
Notes
This function is meant to be used with diffusion imaging, so antipodal symmetry is assumed. Therefore each charge must not only be unique, but if there is a charge at +x, there cannot be a charge at -x. These are treated as the same location, and because the distance between the two charges will be zero, the result will be unstable.
dipy.core.gradients.
generate_bvecs
(N, iters=5000)Generates N bvectors.
Uses dipy.core.sphere.disperse_charges to model electrostatic repulsion on a unit sphere.
The number of bvectors to generate. This should be equal to the number of bvals used.
Number of iterations to run.
The generated directions, represented as a unit vector, of each gradient.
dipy.core.gradients.
get_bval_indices
(bvals, bval, tol=20)Get indices where the b-value equals bval
Array containing the b-values
b-value for which to extract indices
The tolerated gap between the b-values to extract and the actual b-values.
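This selection is a one-liner in numpy; the sketch below (the name `get_bval_indices_sketch` is hypothetical) shows the tolerance logic:

```python
import numpy as np

def get_bval_indices_sketch(bvals, bval, tol=20):
    # Indices i where |bvals[i] - bval| <= tol.
    return np.where(np.abs(np.asarray(bvals) - bval) <= tol)[0]

bvals = np.array([0, 5, 995, 1000, 1015, 2000])
idx = get_bval_indices_sketch(bvals, 1000)   # selects the b=1000 shell
```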
dipy.core.gradients.
gradient_table
(bvals, bvecs=None, big_delta=None, small_delta=None, b0_threshold=50, atol=0.01, btens=None)A general function for creating diffusion MR gradients.
It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.
an array of shape (N,) or (1, N) or (N, 1) with the b-values.
a path for the file which contains an array like the above (1).
an array of shape (N, 4) or (4, N). Then this parameter is considered to be a b-table which contains both b-values and b-vectors. In this case the next parameter is skipped.
a path for the file which contains an array like the one at (3).
an array of shape (N, 3) or (3, N) with the b-vectors.
a path for the file which contains an array like the previous.
acquisition pulse separation time in seconds (default None)
acquisition pulse duration time in seconds (default None)
All b-values with values less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.
All b-vectors need to be unit vectors up to a tolerance.
a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of strings of shape (N,), (N, 1), or (1, N) specifying encoding tensor shape for each volume separately. N corresponds to the number of volumes in data. Options for elements in array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of shape (N, 3, 3) specifying the b-tensor of each volume exactly. N corresponds to the number of volumes in data. No rotation or scaling is performed.
A GradientTable with all the gradient information.
Notes
Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
We assume that the minimum number of b-values is 7.
B-vectors should be unit vectors.
Examples
>>> import numpy as np
>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
... [1, 0, 0],
... [0, 1, 0],
... [0, 0, 1],
... [sq2, sq2, 0],
... [sq2, 0, sq2],
... [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs)
>>> gt.bvecs.shape == bvecs.shape
True
>>> gt = gradient_table(bvals, bvecs.T)
>>> gt.bvecs.shape == bvecs.T.shape
False
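The role of b0_threshold can be shown without dipy at all; this numpy fragment mirrors the b0s_mask idea described in the Notes (the variable names are illustrative):

```python
import numpy as np

# b-values as they might come off a scanner: nominal b0s are not exactly 0.
bvals = np.array([0, 6, 12, 1500, 1500, 1500, 1500])
b0_threshold = 50

b0s_mask = bvals <= b0_threshold   # True marks volumes without diffusion weighting
n_b0s = int(b0s_mask.sum())
```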
dipy.core.gradients.
gradient_table_from_bvals_bvecs
(bvals, bvecs, b0_threshold=50, atol=0.01, btens=None, **kwargs)Creates a GradientTable from a bvals array and a bvecs array
The b-value, or magnitude, of each gradient direction.
The direction, represented as a unit vector, of each gradient.
Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting.
Each vector in bvecs must be a unit vector, up to a tolerance of atol.
a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of strings of shape (N,), (N, 1), or (1, N) specifying encoding tensor shape for each volume separately. N corresponds to the number of volumes in data. Options for elements in array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.
an array of shape (N, 3, 3) specifying the b-tensor of each volume exactly. N corresponds to the number of volumes in data. No rotation or scaling is performed.
A GradientTable with all the gradient information.
Other keyword inputs are passed to GradientTable.
See also
dipy.core.gradients.
gradient_table_from_gradient_strength_bvecs
(gradient_strength, bvecs, big_delta, small_delta, b0_threshold=50, atol=0.01)A general function for creating diffusion MR gradients.
It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.
gradient strength given in T/mm
an array of shape (N, 3) or (3, N) with the b-vectors.
a path for the file which contains an array like the previous.
acquisition pulse separation time in seconds
acquisition pulse duration time in seconds
All b-values with values less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.
All b-vectors need to be unit vectors up to a tolerance.
A GradientTable with all the gradient information.
Notes
Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
We assume that the minimum number of b-values is 7.
B-vectors should be unit vectors.
Examples
>>> import numpy as np
>>> from dipy.core.gradients import (
... gradient_table_from_gradient_strength_bvecs)
>>> gradient_strength = .03e-3 * np.ones(7) # clinical strength at 30 mT/m
>>> big_delta = .03 # pulse separation of 30ms
>>> small_delta = 0.01 # pulse duration of 10ms
>>> gradient_strength[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
... [1, 0, 0],
... [0, 1, 0],
... [0, 0, 1],
... [sq2, sq2, 0],
... [sq2, 0, sq2],
... [0, sq2, sq2]])
>>> gt = gradient_table_from_gradient_strength_bvecs(
... gradient_strength, bvecs, big_delta, small_delta)
dipy.core.gradients.
gradient_table_from_qvals_bvecs
(qvals, bvecs, big_delta, small_delta, b0_threshold=50, atol=0.01)A general function for creating diffusion MR gradients.
It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.
qvalue given in 1/mm
an array of shape (N, 3) or (3, N) with the b-vectors.
a path for the file which contains an array like the previous.
acquisition pulse separation time in seconds
acquisition pulse duration time in seconds
All b-values with values less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.
All b-vectors need to be unit vectors up to a tolerance.
A GradientTable with all the gradient information.
Notes
Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
We assume that the minimum number of b-values is 7.
B-vectors should be unit vectors.
Examples
>>> import numpy as np
>>> from dipy.core.gradients import gradient_table_from_qvals_bvecs
>>> qvals = 30. * np.ones(7)
>>> big_delta = .03 # pulse separation of 30ms
>>> small_delta = 0.01 # pulse duration of 10ms
>>> qvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
... [1, 0, 0],
... [0, 1, 0],
... [0, 0, 1],
... [sq2, sq2, 0],
... [sq2, 0, sq2],
... [0, sq2, sq2]])
>>> gt = gradient_table_from_qvals_bvecs(qvals, bvecs,
... big_delta, small_delta)
dipy.core.gradients.
inv
(a, overwrite_a=False, check_finite=True)Compute the inverse of a matrix.
Square matrix to be inverted.
Discard data in a (may improve performance). Default is False.
Whether to check that the input matrix contains only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
Inverse of the matrix a.
If a is singular.
If a is not square, or not 2D.
Examples
>>> import numpy as np
>>> from scipy import linalg
>>> a = np.array([[1., 2.], [3., 4.]])
>>> linalg.inv(a)
array([[-2. ,  1. ],
[ 1.5, -0.5]])
>>> np.dot(a, linalg.inv(a))
array([[ 1., 0.],
[ 0., 1.]])
dipy.core.gradients.
params_to_btens
(bval, bdelta, b_eta)Compute b-tensor from trace, anisotropy and asymmetry parameters
b-value (>= 0)
normalized tensor anisotropy (>= -0.5 and <= 1)
tensor asymmetry (>= 0 and <= 1)
output b-tensor
Notes
Implements Eq. 7.11, p. 231 in [1].
References
Topgaard, “NMR methods for studying microscopic diffusion anisotropy”, in: R. Valiullin (Ed.), Diffusion NMR of Confined Systems: Fluid Transport in Porous Solids and Heterogeneous Materials, Royal Society of Chemistry, Cambridge, UK, 2016.
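For the axially symmetric case (asymmetry b_eta = 0) the b-tensor can be written directly from its eigenvalues. The sketch below (the function name and the restriction to b_eta = 0 are assumptions; the actual dipy function also handles asymmetry and rotation) shows how bdelta interpolates between linear, spherical and planar encoding while the trace stays equal to the b-value:

```python
import numpy as np

def btens_sketch(bval, bdelta):
    # Axially symmetric b-tensor: eigenvalues bval/3 * (1 - bdelta) (twice)
    # and bval/3 * (1 + 2 * bdelta), so the trace always equals bval.
    lam = bval / 3.0
    return np.diag([lam * (1 - bdelta),
                    lam * (1 - bdelta),
                    lam * (1 + 2 * bdelta)])

lte = btens_sketch(1000, 1.0)    # linear: all weight on one axis
ste = btens_sketch(1000, 0.0)    # spherical: isotropic weighting
pte = btens_sketch(1000, -0.5)   # planar: weight spread over two axes
```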
dipy.core.gradients.
polar
(a, side='right')Compute the polar decomposition.
Returns the factors of the polar decomposition [1] u and p such that a = up (if side is “right”) or a = pu (if side is “left”), where p is positive semidefinite. Depending on the shape of a, either the rows or columns of u are orthonormal. When a is a square array, u is a square unitary array. When a is not square, the “canonical polar decomposition” [2] is computed.
The array to be factored.
Determines whether a right or left polar decomposition is computed. If side is “right”, then a = up. If side is “left”, then a = pu. The default is “right”.
If a is square, then u is unitary. If m > n, then the columns of u are orthonormal, and if m < n, then the rows of u are orthonormal.
p is Hermitian positive semidefinite. If a is nonsingular, p is positive definite. The shape of p is (n, n) or (m, m), depending on whether side is “right” or “left”, respectively.
References
R. A. Horn and C. R. Johnson, “Matrix Analysis”, Cambridge University Press, 1985.
N. J. Higham, “Functions of Matrices: Theory and Computation”, SIAM, 2008.
Examples
>>> import numpy as np
>>> from scipy.linalg import polar
>>> a = np.array([[1, 1], [2, 4]])
>>> u, p = polar(a)
>>> u
array([[ 0.85749293, -0.51449576],
[ 0.51449576,  0.85749293]])
>>> p
array([[ 1.88648444, 1.2004901 ],
[ 1.2004901 , 3.94446746]])
A nonsquare example, with m < n:
>>> b = np.array([[0.5, 1, 2], [1.5, 3, 4]])
>>> u, p = polar(b)
>>> u
array([[-0.21196618, -0.42393237,  0.88054056],
[ 0.39378971,  0.78757942,  0.4739708 ]])
>>> p
array([[ 0.48470147, 0.96940295, 1.15122648],
[ 0.96940295, 1.9388059 , 2.30245295],
[ 1.15122648, 2.30245295, 3.65696431]])
>>> u.dot(p) # Verify the decomposition.
array([[ 0.5, 1. , 2. ],
[ 1.5, 3. , 4. ]])
>>> u.dot(u.T) # The rows of u are orthonormal.
array([[  1.00000000e+00,  -2.07353665e-17],
[ -2.07353665e-17,   1.00000000e+00]])
Another nonsquare example, with m > n:
>>> c = b.T
>>> u, p = polar(c)
>>> u
array([[-0.21196618,  0.39378971],
[-0.42393237,  0.78757942],
[ 0.88054056,  0.4739708 ]])
>>> p
array([[ 1.23116567, 1.93241587],
[ 1.93241587, 4.84930602]])
>>> u.dot(p) # Verify the decomposition.
array([[ 0.5, 1.5],
[ 1. , 3. ],
[ 2. , 4. ]])
>>> u.T.dot(u) # The columns of u are orthonormal.
array([[  1.00000000e+00,  -1.26363763e-16],
[ -1.26363763e-16,   1.00000000e+00]])
dipy.core.gradients.
reorient_bvecs
(gtab, affines, atol=0.01)Reorient the directions in a GradientTable.
When correcting for motion, rotation of the diffusionweighted volumes might cause systematic bias in rotationally invariant measures, such as FA and MD, and also cause characteristic biases in tractography, unless the gradient directions are appropriately reoriented to compensate for this effect [Leemans2009].
The nominal gradient table with which the data were acquired.
Each entry in this list or array contain either an affine transformation (4,4) or a rotation matrix (3, 3). In both cases, the transformations encode the rotation that was applied to the image corresponding to one of the nonzero gradient directions (ordered according to their order in gtab.bvecs[~gtab.b0s_mask])
References
The B-Matrix Must Be Rotated When Correcting for Subject Motion in DTI Data. Leemans, A. and Jones, D.K. (2009). MRM, 61: 1336-1349.
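The core of the reorientation can be sketched in numpy. This is a simplification that assumes each affine's 3x3 block is a pure rotation (dipy also handles affines with shear by extracting the rotational component); the function name is hypothetical:

```python
import numpy as np

def reorient_bvecs_sketch(bvecs, affines):
    """Rotate each non-b0 gradient direction by the rotation that was
    applied to the matching volume (pure-rotation assumption)."""
    out = np.empty_like(bvecs)
    for i, (vec, aff) in enumerate(zip(bvecs, affines)):
        R = np.asarray(aff)[:3, :3]   # rotation block of a (4,4) or (3,3) transform
        out[i] = R @ vec
    return out

Rz = np.array([[0.0, -1.0, 0.0],      # 90-degree rotation about z
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
bvecs = np.array([[1.0, 0.0, 0.0]])
new_bvecs = reorient_bvecs_sketch(bvecs, [Rz])
```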
dipy.core.gradients.
round_bvals
(bvals, bmag=None)This function rounds the b-values
Array containing the b-values
The order of magnitude to round the b-values. If not given, b-values will be rounded relative to the order of magnitude \(bmag = bmaxmag - 1\), where bmaxmag is the magnitude order of the largest b-value.
Array containing the rounded b-values
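The rounding rule above amounts to snapping each b-value to the nearest multiple of 10**bmag; a minimal numpy sketch (the function name is hypothetical):

```python
import numpy as np

def round_bvals_sketch(bvals, bmag=None):
    bvals = np.asarray(bvals, dtype=float)
    if bmag is None:
        # Default: one order of magnitude below the largest b-value.
        bmag = int(np.log10(bvals.max())) - 1
    step = 10.0 ** bmag
    return step * np.round(bvals / step)

rounded = round_bvals_sketch([0, 995, 1000, 1995, 2005])
```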
dipy.core.gradients.
unique_bvals
(bvals, bmag=None, rbvals=False)This function gives the unique rounded b-values of the data
dipy.core.gradients.unique_bvals is deprecated. Please use dipy.core.gradients.unique_bvals_magnitude instead.
deprecated from version: 1.2
Will raise <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.4
Array containing the b-values
The order of magnitude that the b-values have to differ by to be considered unique. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag = log_{10}(max(bvals)) - 1\).
If True, the function also returns all individual rounded b-values. Default: False
Array containing the rounded unique b-values
dipy.core.gradients.
unique_bvals_magnitude
(bvals, bmag=None, rbvals=False)This function gives the unique rounded b-values of the data
Array containing the b-values
The order of magnitude that the b-values have to differ by to be considered unique. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag = log_{10}(max(bvals)) - 1\).
If True, the function also returns all individual rounded b-values. Default: False
Array containing the rounded unique b-values
dipy.core.gradients.
unique_bvals_tolerance
(bvals, tol=20)Gives the unique b-values of the data, within a tolerance gap
The b-values must be regrouped in clusters easily separated by a distance greater than the tolerance gap. If all the b-values of a cluster fit within the tolerance gap, the highest b-value is kept.
Array containing the b-values
The tolerated gap between the b-values to extract and the actual b-values.
Array containing the unique b-values using the median value for each cluster
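The clustering idea can be sketched by sorting the b-values and starting a new cluster whenever the gap exceeds the tolerance, then reporting one representative per cluster. This is an illustration (the median-per-cluster choice mirrors the return description above; the function name is hypothetical):

```python
import numpy as np

def unique_bvals_tolerance_sketch(bvals, tol=20):
    vals = np.sort(np.asarray(bvals, dtype=float))
    clusters = [[vals[0]]]
    for v in vals[1:]:
        if v - clusters[-1][-1] > tol:   # gap larger than tol: new cluster
            clusters.append([v])
        else:
            clusters[-1].append(v)
    return np.array([np.median(c) for c in clusters])

ubvals = unique_bvals_tolerance_sketch([0, 5, 995, 1000, 1005, 2000])
```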
dipy.core.gradients.
vec2vec_rotmat
(u, v)rotation matrix from 2 unit vectors
u, v being unit 3d vectors, return a 3x3 rotation matrix R that aligns u to v.
In general there are many rotations that will map u to v. If S is any rotation using v as an axis then S.R will also map u to v, since (S.R)u = S(Ru) = Sv = v. The rotation R returned by vec2vec_rotmat leaves fixed the perpendicular to the plane spanned by u and v.
The transpose of R will align v to u.
Examples
>>> import numpy as np
>>> from dipy.core.geometry import vec2vec_rotmat
>>> u=np.array([1,0,0])
>>> v=np.array([0,1,0])
>>> R=vec2vec_rotmat(u,v)
>>> np.dot(R,u)
array([ 0., 1., 0.])
>>> np.dot(R.T,v)
array([ 1., 0., 0.])
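One standard way to build such a matrix is Rodrigues' formula about the axis u x v; the sketch below is illustrative (it does not handle the anti-parallel case, which dipy's implementation must treat specially):

```python
import numpy as np

def vec2vec_rotmat_sketch(u, v):
    """Rotation aligning unit vector u to unit vector v via Rodrigues'
    formula (assumes u and v are not anti-parallel)."""
    w = np.cross(u, v)
    s2 = np.dot(w, w)            # sin^2 of the angle between u and v
    c = np.dot(u, v)             # cos of the angle
    if s2 < 1e-16:
        return np.eye(3)         # parallel vectors: nothing to do
    K = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + K + K @ K * (1 - c) / s2

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
R = vec2vec_rotmat_sketch(u, v)
```

As stated above, the transpose aligns v back to u.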
dipy.core.gradients.
vector_norm
(vec, axis=-1, keepdims=False)Return vector Euclidean (L2) norm
See unit vector and Euclidean norm
Vectors to norm.
Axis over which to norm. By default norm over last axis. If axis is None, vec is flattened then normed.
If True, the output will have the same number of dimensions as vec, with shape 1 on axis.
Euclidean norms of vectors.
Examples
>>> import numpy as np
>>> vec = [[8, 15, 0], [0, 36, 77]]
>>> vector_norm(vec)
array([ 17., 85.])
>>> vector_norm(vec, keepdims=True)
array([[ 17.],
[ 85.]])
>>> vector_norm(vec, axis=0)
array([ 8., 39., 77.])
Graph
dipy.core.graph.
Graph
Bases: object
A simple graph class
Methods
add_edge 

add_node 

all_paths 

children 

del_node 

del_node_and_edges 

down 

down_short 

parents 

shortest_path 

up 

up_short 
__init__
(self)A graph class with nodes and edges :)
This class allows us to:
find the shortest path
find all paths
add/delete nodes and edges
get parent & children nodes
Examples
>>> from dipy.core.graph import Graph
>>> g=Graph()
>>> g.add_node('a',5)
>>> g.add_node('b',6)
>>> g.add_node('c',10)
>>> g.add_node('d',11)
>>> g.add_edge('a','b')
>>> g.add_edge('b','c')
>>> g.add_edge('c','d')
>>> g.add_edge('b','d')
>>> g.up_short('d')
['d', 'b', 'a']
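The shortest-path machinery can be sketched with a breadth-first search over a plain adjacency dict. This is an illustration of the idea, not the Graph class itself:

```python
from collections import deque

def shortest_path_sketch(edges, start, goal):
    # BFS over a directed adjacency dict; returns one shortest path.
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

edges = {'a': ['b'], 'b': ['c', 'd'], 'c': ['d']}
path = shortest_path_sketch(edges, 'a', 'd')
```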
dipy.core.histeq.
histeq
(arr, num_bins=256)Performs a histogram equalization on arr.
This was taken from:
http://www.janeriksolem.net/2009/06/histogram-equalization-with-python-and.html
Image on which to perform histogram equalization.
Number of bins used to construct the histogram.
Histogram equalized image.
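The technique maps intensities through the normalized cumulative histogram (CDF); a minimal numpy sketch in the spirit of the linked post (output scaled to [0, 1]; the exact scaling in dipy's version may differ):

```python
import numpy as np

def histeq_sketch(arr, num_bins=256):
    hist, bin_edges = np.histogram(arr.ravel(), bins=num_bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                      # normalized CDF in (0, 1]
    # Map each pixel through the CDF to flatten the histogram.
    return np.interp(arr.ravel(), bin_edges[:-1], cdf).reshape(arr.shape)

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16)) ** 2    # skewed intensity distribution
out = histeq_sketch(img)
```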
dipy.core.ndindex.
as_strided
(x, shape=None, strides=None, subok=False, writeable=True)Create a view into the array with the given shape and strides.
Warning
This function has to be used with extreme care, see notes.
Array from which to create a new view.
The shape of the new array. Defaults to x.shape.
The strides of the new array. Defaults to x.strides.
New in version 1.10.
If True, subclasses are preserved.
New in version 1.12.
If set to False, the returned array will always be readonly. Otherwise it will be writable if the original array was. It is advisable to set this to False if possible (see Notes).
See also
broadcast_to
broadcast an array to a given shape.
reshape
reshape an array.
Notes
as_strided creates a view into the array given the exact strides and shape. This means it manipulates the internal data structure of ndarray and, if done incorrectly, the array elements can point to invalid memory and can corrupt results or crash your program. It is advisable to always use the original x.strides when calculating new strides to avoid reliance on a contiguous memory layout.
Furthermore, arrays created with this function often contain self-overlapping memory, so that two elements are identical. Vectorized write operations on such arrays will typically be unpredictable. They may even give different results for small, large, or transposed arrays. Since writing to these arrays has to be tested and done with great care, you may want to use writeable=False to avoid accidental write operations.
For these reasons it is advisable to avoid as_strided when possible.
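A typical careful use is a read-only sliding-window view, deriving the strides from x.strides as the Notes advise:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.arange(6)
step = x.strides[0]                       # always derive from x.strides
windows = as_strided(x, shape=(4, 3),     # 4 overlapping windows of length 3
                     strides=(step, step),
                     writeable=False)     # read-only: the rows share memory
```

Newer numpy versions offer numpy.lib.stride_tricks.sliding_window_view, which builds the same kind of view with bounds checking.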
dipy.core.ndindex.
ndindex
(shape)An Ndimensional iterator object to index arrays.
Given the shape of an array, an ndindex instance iterates over the Ndimensional index of the array. At each iteration a tuple of indices is returned; the last dimension is iterated over first.
The dimensions of the array.
Examples
>>> from dipy.core.ndindex import ndindex
>>> shape = (3, 2, 1)
>>> for index in ndindex(shape):
... print(index)
(0, 0, 0)
(0, 1, 0)
(1, 0, 0)
(1, 1, 0)
(2, 0, 0)
(2, 1, 0)
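The iteration order shown above is plain C-order (last dimension varying fastest), which itertools.product reproduces; a sketch for comparison:

```python
from itertools import product

def ndindex_sketch(shape):
    # C-order iteration over all indices: last dimension varies fastest.
    return product(*(range(n) for n in shape))

indices = list(ndindex_sketch((3, 2, 1)))
```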
OneTimeProperty
dipy.core.onetime.
OneTimeProperty
(func)Bases: object
A descriptor to make special properties that become normal attributes.
This is meant to be used mostly by the auto_attr decorator in this module.
ResetMixin
dipy.core.onetime.
ResetMixin
Bases: object
A Mixin class to add a .reset() method to users of OneTimeProperty.
By default, auto attributes once computed, become static. If they happen to depend on other parts of an object and those parts change, their values may now be invalid.
This class offers a .reset() method that users can call explicitly when they know the state of their objects may have changed and they want to ensure that all their special attributes should be invalidated. Once reset() is called, all their auto attributes are reset to their OneTimeProperty descriptors, and their accessor functions will be triggered again.
Warning
If a class has a set of attributes that are OneTimeProperty, but that can be initialized from any one of them, do NOT use this mixin! For instance, UniformTimeSeries can be initialized with only sampling_rate and t0, sampling_interval and time are auto-computed. But if you were to reset() a UniformTimeSeries, it would lose all 4, and there would then be no way to break the circular dependency chains.
If this becomes a problem in practice (for our analyzer objects it isn’t, as they don’t have the above pattern), we can extend reset() to check for a _no_reset set of names in the instance which are meant to be kept protected. But for now this is NOT done, so caveat emptor.
Examples
>>> class A(ResetMixin):
...     def __init__(self, x=1.0):
...         self.x = x
...
...     @auto_attr
...     def y(self):
...         print('*** y computation executed ***')
...         return self.x / 2.0
...
>>> a = A(10)
>>> a = A(10)
About to access y twice, the second time no computation is done:
>>> a.y
*** y computation executed ***
5.0
>>> a.y
5.0
Changing x:
>>> a.x = 20
a.y doesn’t change to 10, since it is a static attribute:
>>> a.y
5.0
We now reset a, and this will then force all auto attributes to recompute the next time we access them:
>>> a.reset()
About to access y twice again after reset():
>>> a.y
*** y computation executed ***
10.0
>>> a.y
10.0
Methods

Reset all OneTimeProperty attributes that may have fired already. 
dipy.core.onetime.
auto_attr
(func)Decorator to create OneTimeProperty attributes.
The method that will be called the first time to compute a value. Afterwards, the method’s name will be a standard attribute holding the value of this computation.
Examples
>>> class MagicProp(object):
...     @auto_attr
...     def a(self):
...         return 99
...
>>> x = MagicProp()
>>> 'a' in x.__dict__
False
>>> x.a
99
>>> 'a' in x.__dict__
True
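The caching trick behind auto_attr is a non-data descriptor: the first access runs the getter and stores the result in the instance __dict__, which shadows the descriptor on every later lookup. A self-contained sketch of the mechanism (class names are illustrative, not dipy's code):

```python
class OneTimeSketch:
    """Non-data descriptor (no __set__): computes once, then the cached
    instance attribute shadows the descriptor on subsequent accesses."""
    def __init__(self, func):
        self.getter = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        val = self.getter(obj)
        obj.__dict__[self.name] = val   # future lookups hit this attribute
        return val

class Demo:
    calls = 0

    @OneTimeSketch
    def a(self):
        Demo.calls += 1
        return 99

d = Demo()
first, second = d.a, d.a                # the getter runs only once
```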
NonNegativeLeastSquares
dipy.core.optimize.
NonNegativeLeastSquares
(*args, **kwargs)Bases: dipy.core.optimize.SKLearnLinearSolver
An sklearn-like interface to scipy.optimize.nnls
Methods

Fit the NonNegativeLeastSquares linear model to data 

Predict using the result of the model 
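Since the class wraps scipy.optimize.nnls, the underlying solve can be shown directly (assuming scipy is available; the matrix and data here are made up for illustration):

```python
import numpy as np
from scipy.optimize import nnls

# Solve y = A x subject to x >= 0, the problem NonNegativeLeastSquares wraps.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = A @ np.array([2.0, 3.0])            # consistent system with x >= 0
coef, residual_norm = nnls(A, y)        # coefficients constrained non-negative
```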
Optimizer
dipy.core.optimize.
Optimizer
(fun, x0, args=(), method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)Bases: object
Methods
print_summary 
__init__
(self, fun, x0, args=(), method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)A class for handling minimization of scalar function of one or more variables.
Objective function.
Initial guess.
Extra arguments passed to the objective function and its derivatives (Jacobian, Hessian).
Type of solver. Should be one of
‘Nelder-Mead’
‘Powell’
‘CG’
‘BFGS’
‘Newton-CG’
‘Anneal’
‘L-BFGS-B’
‘TNC’
‘COBYLA’
‘SLSQP’
‘dogleg’
‘trust-ncg’
Jacobian of objective function. Only for CG, BFGS, Newton-CG, dogleg, trust-ncg. If jac is a Boolean and is True, fun is assumed to return the value of the Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of the objective. In this case, it must accept the same arguments as fun.
Hessian of objective function or Hessian of objective function times an arbitrary vector p. Only for Newton-CG, dogleg, trust-ncg. Only one of hessp or hess needs to be given. If hess is provided, then hessp will be ignored. If neither hess nor hessp is provided, then the Hessian product will be approximated using finite differences on jac. hessp must compute the Hessian times an arbitrary vector.
Bounds for variables (only for L-BFGS-B, TNC and SLSQP).
(min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction.
Constraints definition (only for COBYLA and SLSQP). Each constraint is defined in a dictionary with fields:
type : str
Constraint type: ‘eq’ for equality, ‘ineq’ for inequality.
fun : callable
The function defining the constraint.
jac : callable, optional
The Jacobian of fun (only for SLSQP).
args : sequence, optional
Extra arguments to be passed to the function and Jacobian.
Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be nonnegative. Note that COBYLA only supports inequality constraints.
Tolerance for termination. For detailed control, use solverspecific options.
Called after each iteration, as callback(xk), where xk is the current parameter vector. Only available using Scipy >= 0.12.
A dictionary of solver options. All methods accept the following generic options:
maxiter : int
Maximum number of iterations to perform.
disp : bool
Set to True to print convergence messages.
For method-specific options, see show_options(‘minimize’, method).
Save history of x for each iteration. Only available using Scipy >= 0.12.
See also
scipy.optimize.minimize
SKLearnLinearSolver
dipy.core.optimize.
SKLearnLinearSolver
(*args, **kwargs)Bases: object
Provide an sklearn-like uniform interface to algorithms that solve problems of the form: \(y = Ax\) for \(x\)
Subclasses of SKLearnLinearSolver should provide a ‘fit’ method that have the following signature: SKLearnLinearSolver.fit(X, y), which would set an attribute SKLearnLinearSolver.coef_, with the shape (X.shape[1],), such that an estimate of y can be calculated as: y_hat = np.dot(X, SKLearnLinearSolver.coef_.T)
Methods

Implement for all derived classes 

Predict using the result of the model 
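The fit/predict contract described above can be satisfied by a tiny subclass-style solver backed by numpy's unconstrained least squares (an illustrative sketch, not one of dipy's shipped solvers):

```python
import numpy as np

class LstsqSolver:
    """Minimal sklearn-like solver: fit sets coef_, predict returns X @ coef_."""
    def fit(self, X, y):
        # Ordinary least squares via numpy; coef_ has shape (X.shape[1],).
        self.coef_ = np.linalg.lstsq(X, y, rcond=None)[0]
        return self

    def predict(self, X):
        return np.dot(X, self.coef_)

X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = X @ np.array([2.0, -1.0])
solver = LstsqSolver().fit(X, y)
y_hat = solver.predict(X)
```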
dipy.core.optimize.
minimize
(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)Minimization of scalar function of one or more variables.
The objective function to be minimized.
fun(x, *args) -> float
where x is a 1-D array with shape (n,) and args is a tuple of the fixed parameters needed to completely specify the function.
Initial guess. Array of real elements of size (n,), where ‘n’ is the number of independent variables.
Extra arguments passed to the objective function and its derivatives (fun, jac and hess functions).
Type of solver. Should be one of
‘Nelder-Mead’ (see here)
‘Powell’ (see here)
‘CG’ (see here)
‘BFGS’ (see here)
‘Newton-CG’ (see here)
‘L-BFGS-B’ (see here)
‘TNC’ (see here)
‘COBYLA’ (see here)
‘SLSQP’ (see here)
‘trust-constr’ (see here)
‘dogleg’ (see here)
‘trust-ncg’ (see here)
‘trust-exact’ (see here)
‘trust-krylov’ (see here)
custom - a callable object (added in version 0.14.0), see below for description.
If not given, chosen to be one of BFGS, L-BFGS-B, SLSQP, depending on whether the problem has constraints or bounds.
Method for computing the gradient vector. Only for CG, BFGS, Newton-CG, L-BFGS-B, TNC, SLSQP, dogleg, trust-ncg, trust-krylov, trust-exact and trust-constr. If it is a callable, it should be a function that returns the gradient vector:
jac(x, *args) -> array_like, shape (n,)
where x is an array with shape (n,) and args is a tuple with the fixed parameters. If jac is a Boolean and is True, fun is assumed to return the objective and gradient as an (f, g) tuple. Methods ‘Newton-CG’, ‘trust-ncg’, ‘dogleg’, ‘trust-exact’, and ‘trust-krylov’ require that either a callable be supplied, or that fun return the objective and gradient.
If None or False, the gradient will be estimated using 2-point finite difference estimation with an absolute step size. Alternatively, the keywords {‘2-point’, ‘3-point’, ‘cs’} can be used to select a finite difference scheme for numerical estimation of the gradient with a relative step size. These finite difference schemes obey any specified bounds.
Method for computing the Hessian matrix. Only for Newton-CG, dogleg, trust-ncg, trust-krylov, trust-exact and trust-constr. If it is callable, it should return the Hessian matrix:
hess(x, *args) -> {LinearOperator, spmatrix, array}, (n, n)
where x is a (n,) ndarray and args is a tuple with the fixed parameters. LinearOperator and sparse matrix returns are allowed only for the ‘trust-constr’ method. Alternatively, the keywords {‘2-point’, ‘3-point’, ‘cs’} select a finite difference scheme for numerical estimation. Or, objects implementing the HessianUpdateStrategy interface can be used to approximate the Hessian. Available quasi-Newton methods implementing this interface are:
BFGS;
SR1.
Whenever the gradient is estimated via finite differences, the Hessian cannot be estimated with options {‘2-point’, ‘3-point’, ‘cs’} and needs to be estimated using one of the quasi-Newton strategies. Finite-difference options {‘2-point’, ‘3-point’, ‘cs’} and HessianUpdateStrategy are available only for the ‘trust-constr’ method.
Hessian of objective function times an arbitrary vector p. Only for Newton-CG, trust-ncg, trust-krylov, trust-constr. Only one of hessp or hess needs to be given. If hess is provided, then hessp will be ignored. hessp must compute the Hessian times an arbitrary vector:
hessp(x, p, *args) -> ndarray shape (n,)
where x is a (n,) ndarray, p is an arbitrary vector with dimension (n,) and args is a tuple with the fixed parameters.
Bounds on variables for L-BFGS-B, TNC, SLSQP, Powell, and trust-constr methods. There are two ways to specify the bounds:
Instance of Bounds class.
Sequence of (min, max) pairs for each element in x. None is used to specify no bound.
Constraints definition (only for COBYLA, SLSQP and trust-constr). Constraints for ‘trust-constr’ are defined as a single object or a list of objects specifying constraints to the optimization problem. Available constraints are:
LinearConstraint
NonlinearConstraint
Constraints for COBYLA, SLSQP are defined as a list of dictionaries. Each dictionary with fields:
type : str
Constraint type: ‘eq’ for equality, ‘ineq’ for inequality.
fun : callable
The function defining the constraint.
jac : callable, optional
The Jacobian of fun (only for SLSQP).
args : sequence, optional
Extra arguments to be passed to the function and Jacobian.
Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be nonnegative. Note that COBYLA only supports inequality constraints.
Tolerance for termination. For detailed control, use solverspecific options.
A dictionary of solver options. All methods accept the following generic options:
maxiter : int
Maximum number of iterations to perform. Depending on the method each iteration may use several function evaluations.
disp : bool
Set to True to print convergence messages.
For method-specific options, see show_options().
Called after each iteration. For ‘trust-constr’ it is a callable with the signature:
callback(xk, OptimizeResult state) -> bool
where xk is the current parameter vector and state is an OptimizeResult object, with the same fields as the ones from the return. If callback returns True the algorithm execution is terminated.
For all the other methods, the signature is:
callback(xk)
where xk is the current parameter vector.
The optimization result represented as an OptimizeResult object. Important attributes are: x the solution array, success a Boolean flag indicating if the optimizer exited successfully and message which describes the cause of the termination. See OptimizeResult for a description of other attributes.
See also
minimize_scalar
Interface to minimization algorithms for scalar univariate functions
show_options
Additional options accepted by the solvers
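A minimal call, assuming scipy is available (the cost function here is a made-up smooth bowl, chosen so the minimizer's answer is easy to check):

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):
    # Simple smooth quadratic with its minimum at (1, 2.5).
    return np.sum((x - np.array([1.0, 2.5])) ** 2)

res = minimize(cost, x0=np.zeros(2), method='L-BFGS-B')
```

res.x holds the solution and res.success the termination flag, as described in the Returns section above.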
Notes
This section describes the available solvers that can be selected by the ‘method’ parameter. The default method is BFGS.
Unconstrained minimization
Method Nelder-Mead uses the Simplex algorithm [1], [2]. This algorithm is robust in many applications. However, if numerical computation of derivatives can be trusted, other algorithms using the first and/or second derivatives information might be preferred for their better performance in general.
Method CG uses a nonlinear conjugate gradient algorithm by Polak and Ribiere, a variant of the Fletcher-Reeves method described in [5] pp. 120-122. Only the first derivatives are used.
Method BFGS uses the quasiNewton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS) [5] pp. 136. It uses the first derivatives only. BFGS has proven good performance even for nonsmooth optimizations. This method also returns an approximation of the Hessian inverse, stored as hess_inv in the OptimizeResult object.
Method NewtonCG uses a NewtonCG algorithm [5] pp. 168 (also known as the truncated Newton method). It uses a CG method to the compute the search direction. See also TNC method for a boxconstrained minimization with a similar algorithm. Suitable for largescale problems.
Method dogleg uses the dogleg trustregion algorithm [5] for unconstrained minimization. This algorithm requires the gradient and Hessian; furthermore the Hessian is required to be positive definite.
Method trustncg uses the Newton conjugate gradient trustregion algorithm [5] for unconstrained minimization. This algorithm requires the gradient and either the Hessian or a function that computes the product of the Hessian with a given vector. Suitable for largescale problems.
Method trustkrylov uses the Newton GLTR trustregion algorithm [14], [15] for unconstrained minimization. This algorithm requires the gradient and either the Hessian or a function that computes the product of the Hessian with a given vector. Suitable for largescale problems. On indefinite problems it requires usually less iterations than the trustncg method and is recommended for medium and largescale problems.
Method trustexact is a trustregion method for unconstrained minimization in which quadratic subproblems are solved almost exactly [13]. This algorithm requires the gradient and the Hessian (which is not required to be positive definite). It is, in many situations, the Newton method to converge in fewer iteraction and the most recommended for small and mediumsize problems.
Bound-Constrained minimization
Method L-BFGS-B uses the L-BFGS-B algorithm [6], [7] for bound constrained minimization.
Method Powell is a modification of Powell's method [3], [4] which is a conjugate direction method. It performs sequential one-dimensional minimizations along each vector of the directions set (direc field in options and info), which is updated at each iteration of the main minimization loop. The function need not be differentiable, and no derivatives are taken. If bounds are not provided, then an unbounded line search will be used. If bounds are provided and the initial guess is within the bounds, then every function evaluation throughout the minimization procedure will be within the bounds. If bounds are provided, the initial guess is outside the bounds, and direc is full rank (default has full rank), then some function evaluations during the first iteration may be outside the bounds, but every function evaluation after the first iteration will be within the bounds. If direc is not full rank, then some parameters may not be optimized and the solution is not guaranteed to be within the bounds.
Method TNC uses a truncated Newton algorithm [5], [8] to minimize a function with variables subject to bounds. This algorithm uses gradient information; it is also called Newton Conjugate-Gradient. It differs from the Newton-CG method described above as it wraps a C implementation and allows each variable to be given upper and lower bounds.
Constrained Minimization
Method COBYLA uses the Constrained Optimization BY Linear Approximation (COBYLA) method [9], [10], [11]. The algorithm is based on linear approximations to the objective function and each constraint. The method wraps a FORTRAN implementation of the algorithm. The constraint functions 'fun' may return either a single number or an array or list of numbers.
Method SLSQP uses Sequential Least SQuares Programming to minimize a function of several variables with any combination of bounds, equality and inequality constraints. The method wraps the SLSQP Optimization subroutine originally implemented by Dieter Kraft [12]. Note that the wrapper handles infinite values in bounds by converting them into large floating point values.
Method trust-constr is a trust-region algorithm for constrained optimization. It switches between two implementations depending on the problem definition. It is the most versatile constrained minimization algorithm implemented in SciPy and the most appropriate for large-scale problems. For equality constrained problems it is an implementation of the Byrd-Omojokun Trust-Region SQP method described in [17] and in [5], p. 549. When inequality constraints are imposed as well, it switches to the trust-region interior point method described in [16]. This interior point algorithm, in turn, solves inequality constraints by introducing slack variables and solving a sequence of equality-constrained barrier problems for progressively smaller values of the barrier parameter. The previously described equality constrained SQP method is used to solve the subproblems with increasing levels of accuracy as the iterate gets closer to a solution.
Finite-Difference Options
For Method trust-constr the gradient and the Hessian may be approximated using three finite-difference schemes: {'2-point', '3-point', 'cs'}. The scheme 'cs' is, potentially, the most accurate, but it requires the function to correctly handle complex inputs and to be differentiable in the complex plane. The scheme '3-point' is more accurate than '2-point' but requires twice as many operations.
Custom minimizers
It may be useful to pass a custom minimization method, for example when using a frontend to this method such as scipy.optimize.basinhopping or a different library. You can simply pass a callable as the method parameter.
The callable is called as method(fun, x0, args, **kwargs, **options) where kwargs corresponds to any other parameters passed to minimize (such as callback, hess, etc.), except the options dict, which has its contents also passed as method parameters pair by pair. Also, if jac has been passed as a bool type, jac and fun are mangled so that fun returns just the function values and jac is converted to a function returning the Jacobian. The method shall return an OptimizeResult object.
The provided method callable must be able to accept (and possibly ignore) arbitrary parameters; the set of parameters accepted by minimize may expand in future versions and then these parameters will be passed to the method. You can find an example in the scipy.optimize tutorial.
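The calling convention above can be sketched with a toy custom method; the name custmin and its naive coordinate-wise search are our own illustration of the interface, not a recommended optimizer:

```python
import numpy as np
from scipy.optimize import OptimizeResult, minimize, rosen

def custmin(fun, x0, args=(), maxiter=100, stepsize=0.1,
            callback=None, **options):
    # Naive coordinate-wise search: try a step up and down in each
    # coordinate and keep any improvement. **options absorbs the
    # remaining keyword arguments minimize forwards (jac, hess, ...).
    bestx = np.asarray(x0, dtype=float)
    besty = fun(bestx)
    nfev, niter, improved = 1, 0, True
    while improved and niter < maxiter:
        improved = False
        niter += 1
        for dim in range(bestx.size):
            for s in (bestx[dim] - stepsize, bestx[dim] + stepsize):
                testx = np.copy(bestx)
                testx[dim] = s
                testy = fun(testx)
                nfev += 1
                if testy < besty:
                    besty, bestx, improved = testy, testx, True
        if callback is not None:
            callback(bestx)
    # A custom method must return an OptimizeResult.
    return OptimizeResult(x=bestx, fun=besty, nit=niter,
                          nfev=nfev, success=True)

x0 = [1.35, 0.9, 0.95, 1.9, 3.8]
res = minimize(rosen, x0, method=custmin, options={'stepsize': 0.05})
```

Note how the 'stepsize' entry of the options dict arrives as a keyword argument of the callable, as described above.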
New in version 0.11.0.
References
[1] Nelder, J A, and R Mead. 1965. A Simplex Method for Function Minimization. The Computer Journal 7: 308-13.
[2] Wright, M H. 1996. Direct search methods: Once scorned, now respectable, in Numerical Analysis 1995: Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis (Eds. D F Griffiths and G A Watson). Addison Wesley Longman, Harlow, UK. 191-208.
[3] Powell, M J D. 1964. An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal 7: 155-162.
[4] Press, W, S A Teukolsky, W T Vetterling and B P Flannery. Numerical Recipes (any edition), Cambridge University Press.
[5] Nocedal, J, and S J Wright. 2006. Numerical Optimization. Springer New York.
[6] Byrd, R H and P Lu and J Nocedal. 1995. A Limited Memory Algorithm for Bound Constrained Optimization. SIAM Journal on Scientific and Statistical Computing 16 (5): 1190-1208.
[7] Zhu, C and R H Byrd and J Nocedal. 1997. L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization. ACM Transactions on Mathematical Software 23 (4): 550-560.
[8] Nash, S G. Newton-Type Minimization Via the Lanczos Method. 1984. SIAM Journal of Numerical Analysis 21: 770-778.
[9] Powell, M J D. A direct search optimization method that models the objective and constraint functions by linear interpolation. 1994. Advances in Optimization and Numerical Analysis, eds. S. Gomez and J-P Hennart, Kluwer Academic (Dordrecht), 51-67.
[10] Powell, M J D. Direct search algorithms for optimization calculations. 1998. Acta Numerica 7: 287-336.
[11] Powell, M J D. A view of algorithms for optimization without derivatives. 2007. Cambridge University Technical Report DAMTP 2007/NA03.
[12] Kraft, D. A software package for sequential quadratic programming. 1988. Tech. Rep. DFVLR-FB 88-28, DLR German Aerospace Center, Institute for Flight Mechanics, Koln, Germany.
[13] Conn, A. R., Gould, N. I., and Toint, P. L. Trust region methods. 2000. Siam. pp. 169-200.
[14] F. Lenders, C. Kirches, A. Potschka: "trlib: A vector-free implementation of the GLTR method for iterative solution of the trust region problem", https://arxiv.org/abs/1611.04718
[15] N. Gould, S. Lucidi, M. Roma, P. Toint: "Solving the Trust-Region Subproblem using the Lanczos Method", SIAM J. Optim., 9(2), 504-525, (1999).
[16] Byrd, Richard H., Mary E. Hribar, and Jorge Nocedal. 1999. An interior point algorithm for large-scale nonlinear programming. SIAM Journal on Optimization 9.4: 877-900.
[17] Lalee, Marucha, Jorge Nocedal, and Todd Plantega. 1998. On the implementation of an algorithm for large-scale equality constrained optimization. SIAM Journal on Optimization 8.3: 682-706.
Examples
Let us consider the problem of minimizing the Rosenbrock function. This function (and its respective derivatives) is implemented in rosen (resp. rosen_der, rosen_hess) in scipy.optimize.
>>> from scipy.optimize import minimize, rosen, rosen_der
A simple application of the Nelder-Mead method is:
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0, method='Nelder-Mead', tol=1e-6)
>>> res.x
array([ 1., 1., 1., 1., 1.])
Now, using the BFGS algorithm with the first derivative and a few options:
>>> res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
...                options={'gtol': 1e-6, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 26
Function evaluations: 31
Gradient evaluations: 31
>>> res.x
array([ 1., 1., 1., 1., 1.])
>>> print(res.message)
Optimization terminated successfully.
>>> res.hess_inv
array([[ 0.00749589, 0.01255155, 0.02396251, 0.04750988, 0.09495377], # may vary
[ 0.01255155, 0.02510441, 0.04794055, 0.09502834, 0.18996269],
[ 0.02396251, 0.04794055, 0.09631614, 0.19092151, 0.38165151],
[ 0.04750988, 0.09502834, 0.19092151, 0.38341252, 0.7664427 ],
[ 0.09495377, 0.18996269, 0.38165151, 0.7664427, 1.53713523]])
Next, consider a minimization problem with several constraints (namely Example 16.4 from [5]). The objective function is:
>>> fun = lambda x: (x[0]  1)**2 + (x[1]  2.5)**2
There are three constraints defined as:
>>> cons = ({'type': 'ineq', 'fun': lambda x: x[0]  2 * x[1] + 2},
... {'type': 'ineq', 'fun': lambda x: x[0]  2 * x[1] + 6},
... {'type': 'ineq', 'fun': lambda x: x[0] + 2 * x[1] + 2})
And variables must be positive, hence the following bounds:
>>> bnds = ((0, None), (0, None))
The optimization problem is solved using the SLSQP method as:
>>> res = minimize(fun, (2, 0), method='SLSQP', bounds=bnds,
... constraints=cons)
It should converge to the theoretical solution (1.4, 1.7).
dipy.core.optimize.sparse_nnls(y, X, momentum=1, step_size=0.01, non_neg=True, check_error_iter=10, max_error_checks=10, converge_on_sse=0.99)
Solve y=Xh for h, using gradient descent, with X a sparse matrix.
y : the data. Needs to be dense.
X : the regressors.
momentum : the persistence of the gradient.
step_size : the increment of the parameter update in each iteration.
non_neg : whether to enforce non-negativity of the solution.
check_error_iter : how many rounds to run between error evaluations for convergence checking.
max_error_checks : don't check errors more than this number of times if no improvement in r-squared is seen.
converge_on_sse : a percentage improvement in SSE that is required each time to say that things are still going well.
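The gradient-descent-plus-projection idea behind sparse_nnls can be sketched in plain numpy. This is a simplified stand-in, not dipy's implementation: it omits momentum, sparse-matrix support, and the convergence checks described above, and the helper name nnls_gd is our own:

```python
import numpy as np

def nnls_gd(y, X, step_size=0.01, n_iter=3000):
    # Plain gradient descent on 0.5 * ||X h - y||^2, projecting the
    # iterate onto the non-negative orthant after each step.
    h = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ h - y)
        h = h - step_size * grad
        h[h < 0] = 0.0  # enforce non-negativity
    return h

rng = np.random.default_rng(0)
X = rng.random((50, 3))
h_true = np.array([0.5, 0.0, 2.0])
y = X @ h_true
h_est = nnls_gd(y, X)  # recovers a non-negative h close to h_true
```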
dipy.core.optimize.spdot(A, B)
The same as np.dot(A, B), except it works even if A or B or both are sparse matrices.
Profiler
dipy.core.profile.Profiler(call=None, *args)
Bases: object
Profile python/cython files or functions.
If you are profiling cython code you need to add # cython: profile=True at the top of your .pyx file, and for the functions that you do not want to profile you can use this decorator in your cython files:
@cython.profile(False)
References
http://docs.cython.org/src/tutorial/profiling_tutorial.html http://docs.python.org/library/profile.html http://packages.python.org/line_profiler/
Examples
from dipy.core.profile import Profiler
import numpy as np
p = Profiler(np.sum, np.random.rand(1000000, 3))
fname = 'test.py'
p = Profiler(fname)
p.print_stats(10)
p.print_stats('det')
Methods

Print stats for profiling 
print_stats(self, N=10)
Print stats for profiling.
You can use it in all the different ways developed in pstats. For example, print_stats(10) will give you the 10 slowest calls, and print_stats('function_name') will give you the stats for all the calls with name 'function_name'.
dipy.core.profile.optional_package(name, trip_msg=None)
Return package-like thing and module setup for package name.
name : package name.
trip_msg : message to give when someone tries to use the returned package, but we could not import it, and have returned a TripWire object instead. Default message if None.
Returns: if we can import the package, return it; otherwise return a TripWire instance, an object raising an error when accessed. Also returned are a flag that is True if the import for the package was successful (False otherwise), and a callable, usually set as setup_module in the calling namespace, to allow skipping tests.
Examples
Typical use would be something like this at the top of a module using an optional package:
>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')
Of course in this case the package doesn’t exist, and so, in the module:
>>> have_pkg
False
and
>>> pkg.some_function()
Traceback (most recent call last):
...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError
If the module does exist, we get the module:
>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True
Or a submodule if that’s what we asked for
>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True
dipy.core.rng.LEcuyer(s1=100001, s2=200002)
Return a L'Ecuyer random number generator.
Generate uniformly distributed random numbers using the 32-bit generator from figure 3 of:
L'Ecuyer, P. Efficient and portable combined random number generators, C.A.C.M., vol. 31, 742-749 & 774?, June 1988.
The cycle length is claimed to be 2.30584E+18.
s1 : first seed value. Should not be null. (default 100001)
s2 : second seed value. Should not be null. (default 200002)
Returns a pseudo-random number uniformly distributed in [0, 1].
Examples
>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.LEcuyer() for i in range(N)]
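For reference, the combined generator can be sketched as below. The constants are those commonly quoted for figure 3 of L'Ecuyer (1988); unlike dipy's function, this sketch returns the updated seeds so draws can be chained explicitly:

```python
def lecuyer(s1=100001, s2=200002):
    # Two multiplicative congruential generators combined by
    # subtraction; constants as commonly quoted for L'Ecuyer (1988).
    k = s1 // 53668
    s1 = 40014 * (s1 - k * 53668) - k * 12211
    if s1 < 0:
        s1 += 2147483563
    k = s2 // 52774
    s2 = 40692 * (s2 - k * 52774) - k * 3791
    if s2 < 0:
        s2 += 2147483399
    z = s1 - s2
    if z < 1:
        z += 2147483562
    return z * 4.656613e-10, (s1, s2)

u, state = lecuyer()        # first draw, uniform in (0, 1)
u2, state = lecuyer(*state)  # chain the state for the next draw
```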
dipy.core.rng.WichmannHill1982(ix=100001, iy=200002, iz=300003)
Algorithm AS 183, Appl. Statist. (1982) vol. 31, no. 2.
Returns a pseudo-random number rectangularly distributed between 0 and 1. The cycle length is 6.95E+12 (see page 123 of Applied Statistics (1984) vol. 33), not as claimed in the original article.
ix, iy and iz should be set to integer values between 1 and 30000 before the first entry.
Integer arithmetic up to 5212632 is required.
ix : first seed value. Should not be null. (default 100001)
iy : second seed value. Should not be null. (default 200002)
iz : third seed value. Should not be null. (default 300003)
Returns a pseudo-random number uniformly distributed in [0, 1].
Examples
>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.WichmannHill1982() for i in range(N)]
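The AS 183 recurrence is short enough to sketch directly; this illustrative version returns the updated seeds rather than keeping module state as dipy does:

```python
def wichmann_hill_1982(ix=100001, iy=200002, iz=300003):
    # AS 183: three small multiplicative congruential generators,
    # combined modulo 1.
    ix = (171 * ix) % 30269
    iy = (172 * iy) % 30307
    iz = (170 * iz) % 30323
    u = (ix / 30269.0 + iy / 30307.0 + iz / 30323.0) % 1.0
    return u, (ix, iy, iz)

u, state = wichmann_hill_1982()  # u is uniform in [0, 1)
```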
dipy.core.rng.WichmannHill2006(ix=100001, iy=200002, iz=300003, it=400004)
Wichmann-Hill (2006) random number generator.
B.A. Wichmann, I.D. Hill, Generating good pseudo-random numbers, Computational Statistics & Data Analysis, Volume 51, Issue 3, 1 December 2006, Pages 1614-1622, ISSN 0167-9473, DOI: 10.1016/j.csda.2006.05.019. See (http://www.sciencedirect.com/science/article/B6V8V4K7F86W2/2/a3a33291b8264e4c882a8f21b6e43351) for advice on generating many sequences for use together, and on alternative algorithms and codes.
ix : first seed value. Should not be null. (default 100001)
iy : second seed value. Should not be null. (default 200002)
iz : third seed value. Should not be null. (default 300003)
it : fourth seed value. Should not be null. (default 400004)
Returns a pseudo-random number uniformly distributed in [0, 1].
Examples
>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.WichmannHill2006() for i in range(N)]
dipy.core.rng.architecture(executable='/Users/koudoro/miniconda3/envs/dipyenv37/bin/python', bits='', linkage='')
Queries the given executable (defaults to the Python interpreter binary) for various architecture information.
Returns a tuple (bits, linkage) which contains information about the bit architecture and the linkage format used for the executable. Both values are returned as strings.
Values that cannot be determined are returned as given by the parameter presets. If bits is given as ‘’, the sizeof(pointer) (or sizeof(long) on Python version < 1.5.2) is used as indicator for the supported pointer size.
The function relies on the system’s “file” command to do the actual work. This is available on most if not all Unix platforms. On some nonUnix platforms where the “file” command does not exist and the executable is set to the Python interpreter binary defaults from _default_architecture are used.
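This docstring matches that of Python's standard platform.architecture, so the behaviour can be demonstrated with the standard library directly:

```python
import platform

# Query the running interpreter for its bit architecture and linkage,
# e.g. ('64bit', '') on macOS or ('64bit', 'ELF') on Linux.
bits, linkage = platform.architecture()
```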
HemiSphere
dipy.core.sphere.HemiSphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)
Bases: dipy.core.sphere.Sphere
Points on the unit sphere.
A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.
The HemiSphere can be constructed using one of three conventions:
HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
x, y, z : vertices as x-y-z coordinates.
theta, phi : vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.
xyz : vertices as x-y-z coordinates.
faces : indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.
edges : edges between vertices. If unspecified, the edges are derived from the faces.
tol : angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.
See also
Sphere
Methods
find_closest : Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry.
from_sphere : Create instance from a Sphere.
mirror : Create a full Sphere from a HemiSphere.
subdivide : Create a more subdivided HemiSphere.
Attributes: edges, faces, vertices.
__init__(self, x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)
Create a HemiSphere from points.
Sphere
dipy.core.sphere.Sphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None)
Bases: object
Points on the unit sphere.
The sphere can be constructed using one of three conventions:
Sphere(x, y, z)
Sphere(xyz=xyz)
Sphere(theta=theta, phi=phi)
x, y, z : vertices as x-y-z coordinates.
theta, phi : vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.
xyz : vertices as x-y-z coordinates.
faces : indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.
edges : edges between vertices. If unspecified, the edges are derived from the faces.
Methods
find_closest : Find the index of the vertex in the Sphere closest to the input vector.
subdivide : Subdivides each face of the sphere into four new faces.
Attributes: edges, faces, vertices.
__init__(self, x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None)
Initialize self. See help(type(self)) for accurate signature.
find_closest(self, xyz)
Find the index of the vertex in the Sphere closest to the input vector.
xyz : a unit vector.
Returns the index into the Sphere.vertices array that gives the closest vertex (in angle).
subdivide(self, n=1)
Subdivides each face of the sphere into four new faces.
New vertices are created at a, b, and c. Then each face [x, y, z] is divided into faces [x, a, c], [y, a, b], [z, b, c], and [a, b, c].

        y
        /\
       /  \
     a/____\b
     /\    /\
    /  \  /  \
   /____\/____\
  x      c     z

n : the number of subdivisions to perform.
Returns the subdivided sphere.
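The midpoint subdivision described above can be sketched in numpy. This simplified stand-in duplicates midpoints shared between neighbouring faces, which the real Sphere.subdivide avoids; the helper name is our own:

```python
import numpy as np

def subdivide_once(vertices, faces):
    # One level of midpoint subdivision: a, b, c are edge midpoints,
    # pushed back onto the unit sphere, and each face becomes four.
    new_vertices = list(vertices)
    new_faces = []
    for x, y, z in faces:
        mids = []
        for p, q in ((vertices[x], vertices[y]),
                     (vertices[y], vertices[z]),
                     (vertices[z], vertices[x])):
            m = (p + q) / 2.0
            mids.append(m / np.linalg.norm(m))  # project onto sphere
        a = len(new_vertices)
        b, c = a + 1, a + 2
        new_vertices.extend(mids)
        new_faces.extend([[x, a, c], [y, b, a], [z, c, b], [a, b, c]])
    return np.asarray(new_vertices), np.asarray(new_faces)

# unit octahedron: 6 vertices, 8 faces
octa_v = np.array([[1., 0, 0], [0, 1, 0], [0, 0, 1],
                   [-1, 0, 0], [0, -1, 0], [0, 0, -1]])
octa_f = np.array([[0, 1, 2], [1, 3, 2], [3, 4, 2], [4, 0, 2],
                   [0, 5, 1], [1, 5, 3], [3, 5, 4], [4, 5, 0]])
v2, f2 = subdivide_once(octa_v, octa_f)  # 8 faces become 32
```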
dipy.core.sphere.auto_attr(func)
Decorator to create OneTimeProperty attributes.
func : the method that will be called the first time to compute a value. Afterwards, the method's name will be a standard attribute holding the value of this computation.
Examples
>>> class MagicProp(object):
... @auto_attr
... def a(self):
... return 99
...
>>> x = MagicProp()
>>> 'a' in x.__dict__
False
>>> x.a
99
>>> 'a' in x.__dict__
True
dipy.core.sphere.cart2sphere(x, y, z)
Return angles for Cartesian 3D coordinates x, y, and z.
See doc for sphere2cart for angle conventions and derivation of the formulae.
\(0\le\theta\mathrm{(theta)}\le\pi\) and \(-\pi\le\phi\mathrm{(phi)}\le\pi\)
x : x coordinate in Cartesian space
y : y coordinate in Cartesian space
z : z coordinate
Returns r (radius), theta (inclination or polar angle) and phi (azimuth angle).
dipy.core.sphere.disperse_charges(hemi, iters, const=0.2)
Models electrostatic repulsion on the unit sphere.
Places charges on a sphere and simulates the repulsive forces felt by each one. Allows the charges to move for some number of iterations and returns their final location as well as the total potential of the system at each step.
Points on a unit sphere.
Number of iterations to run.
Using a smaller const could provide a more accurate result, but will need more iterations to converge.
Distributed points on a unit sphere.
The electrostatic potential at each iteration. This can be useful to check if the repulsion converged to a minimum.
Notes
This function is meant to be used with diffusion imaging so antipodal symmetry is assumed. Therefore each charge must not only be unique, but if there is a charge at +x, there cannot be a charge at -x. These are treated as the same location and, because the distance between the two charges will be zero, the result will be unstable.
dipy.core.sphere.disperse_charges_alt(init_pointset, iters, tol=0.001)
Reimplementation of disperse_charges making use of scipy.optimize.fmin_slsqp.
Points on a unit sphere.
Number of iterations to run.
Tolerance for the optimization.
Distributed points on a unit sphere.
dipy.core.sphere.euler_characteristic_check(sphere, chi=2)
Checks the Euler characteristic of a sphere.
If \(f\) = number of faces, \(e\) = number of edges and \(v\) = number of vertices, the Euler formula says \(f-e+v = 2\) for a mesh on a sphere. More generally, it checks whether \(f - e + v == \chi\) where \(\chi\) is the Euler characteristic of the mesh.
Open chain (track) has \(\chi=1\)
Closed chain (loop) has \(\chi=0\)
Disk has \(\chi=1\)
Sphere has \(\chi=2\)
HemiSphere has \(\chi=1\)
A Sphere instance with vertices, edges and faces attributes.
The Euler characteristic of the mesh to be checked
True if the mesh has Euler characteristic \(\chi\)
Examples
>>> euler_characteristic_check(unit_octahedron)
True
>>> hemisphere = HemiSphere.from_sphere(unit_icosahedron)
>>> euler_characteristic_check(hemisphere, chi=1)
True
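The formula itself reduces to simple arithmetic on the face, edge and vertex counts; a minimal sketch, using the standard counts for two sphere-topology meshes:

```python
def euler_characteristic(n_faces, n_edges, n_vertices):
    # chi = f - e + v
    return n_faces - n_edges + n_vertices

chi_octahedron = euler_characteristic(8, 12, 6)  # octahedron: chi = 2
chi_cube = euler_characteristic(6, 12, 8)        # cube: also chi = 2
```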
dipy.core.sphere.remove_similar_vertices()
Remove vertices that are less than theta degrees from any other.
Returns vertices that are at least theta degrees from any other vertex. Vertex v and -v are considered the same, so if v and -v are both in vertices only one is kept. Also, if v and w are both in vertices, w must be separated by theta degrees from both v and -v to be unique.
N unit vectors.
The minimum separation between vertices in degrees.
If True, return mapping as well as vertices and maybe indices (see below).
If True, return indices as well as vertices and maybe mapping (see below).
Vertices sufficiently separated from one another.
mapping : for each element vertices[i] (\(i \in 0..N-1\)), the index \(j\) to a vertex in unique_vertices that is less than theta degrees from vertices[i]. Only returned if return_mapping is True.
indices : gives the reverse of mapping. For each element unique_vertices[j] (\(j \in 0..M-1\)), the index \(i\) to a vertex in vertices that is less than theta degrees from unique_vertices[j]. If there is more than one element of vertices that is less than theta degrees from unique_vertices[j], return the first (lowest index) matching value. Only returned if return_indices is True.
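The antipodal deduplication rule can be sketched in plain numpy; this is a simplified stand-in for the real Cython routine (no mapping/indices outputs), and the helper name is our own:

```python
import numpy as np

def remove_antipodal_duplicates(vertices, theta=5.0):
    # Keep a vertex only if it is at least `theta` degrees from every
    # vertex already kept, treating v and -v as the same direction:
    # angle(v, u) > theta  <=>  |v . u| < cos(theta) for unit vectors.
    cos_tol = np.cos(np.deg2rad(theta))
    unique = []
    for v in vertices:
        if all(abs(np.dot(v, u)) < cos_tol for u in unique):
            unique.append(v)
    return np.asarray(unique)

verts = np.array([[1., 0., 0.],
                  [-1., 0., 0.],  # antipode of the first vertex
                  [0., 1., 0.]])
kept = remove_antipodal_duplicates(verts)  # the antipode is dropped
```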
dipy.core.sphere.sphere2cart(r, theta, phi)
Spherical to Cartesian coordinates.
This is the standard physics convention where theta is the inclination (polar) angle, and phi is the azimuth angle.
Imagine a sphere with center (0,0,0). Orient it with the z axis running southnorth, the y axis running westeast and the x axis from posterior to anterior. theta (the inclination angle) is the angle to rotate from the zaxis (the zenith) around the yaxis, towards the x axis. Thus the rotation is counterclockwise from the point of view of positive y. phi (azimuth) gives the angle of rotation around the zaxis towards the y axis. The rotation is counterclockwise from the point of view of positive z.
Equivalently, given a point P on the sphere, with coordinates x, y, z, theta is the angle between P and the zaxis, and phi is the angle between the projection of P onto the XY plane, and the X axis.
Geographical nomenclature designates theta as ‘colatitude’, and phi as ‘longitude’
radius
inclination or polar angle
azimuth angle
x coordinate(s) in Cartesian space
y coordinate(s) in Cartesian space
z coordinate
Notes
See these pages for an excellent discussion of the many different conventions possible. Here we use the physics convention, used in the Wikipedia page.
Derivations of the formulae are simple. Consider a vector x, y, z of length r (norm of x, y, z). The inclination angle (theta) can be found from: cos(theta) == z / r > z == r * cos(theta). This gives the hypotenuse of the projection onto the XY plane, which we will call Q. Q == r*sin(theta). Now x / Q == cos(phi) > x == r * sin(theta) * cos(phi) and so on.
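The derivation above can be checked with a small numpy round trip; the helper functions below mirror the documented conventions but are our own sketch, not dipy's implementations:

```python
import numpy as np

def sphere2cart(r, theta, phi):
    # physics convention: theta = inclination from +z, phi = azimuth from +x
    return (r * np.sin(theta) * np.cos(phi),
            r * np.sin(theta) * np.sin(phi),
            r * np.cos(theta))

def cart2sphere(x, y, z):
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arccos(z / r)   # inclination in [0, pi]
    phi = np.arctan2(y, x)     # azimuth in (-pi, pi]
    return r, theta, phi

# round trip: spherical -> Cartesian -> spherical
x, y, z = sphere2cart(1.0, 0.7, -1.2)
r2, theta2, phi2 = cart2sphere(x, y, z)
```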
We have deliberately named this function sphere2cart rather than sph2cart to distinguish it from the Matlab function of that name, because the Matlab function uses an unusual convention for the angles that we did not want to replicate. The Matlab function is trivial to implement with the formulae given in the Matlab help.
dipy.core.sphere.unique_edges(faces, return_mapping=False)
Extract all unique edges from given triangular faces.
Vertex indices forming triangular faces.
If true, a mapping to the edges of each face is returned.
Unique edges.
For each face [x, y, z], a mapping to its edges [a, b, c].

        y
        /\
       /  \
     a/    \b
     /      \
    /________\
   x    c     z
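The edge extraction can be sketched with numpy set operations; an illustrative stand-in that skips the return_mapping output:

```python
import numpy as np

def unique_edges_sketch(faces):
    # Each face [x, y, z] contributes edges (x, y), (y, z), (z, x);
    # sorting each pair makes edges orientation-free before dedup.
    faces = np.asarray(faces)
    edges = np.vstack([faces[:, [0, 1]],
                       faces[:, [1, 2]],
                       faces[:, [2, 0]]])
    edges.sort(axis=1)
    return np.unique(edges, axis=0)

# two triangles sharing the edge (0, 2) -> 5 unique edges
edges = unique_edges_sketch([[0, 1, 2], [0, 2, 3]])
```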
dipy.core.sphere.unique_sets(sets, return_inverse=False)
Remove duplicate sets.
N sets of size k.
If True, also returns the indices of unique_sets that can be used to reconstruct sets (the original ordering of each set may not be preserved).
Unique sets.
The indices to reconstruct sets from unique_sets.
dipy.core.sphere.vector_norm(vec, axis=-1, keepdims=False)
Return vector Euclidean (L2) norm.
See unit vector and Euclidean norm
Vectors to norm.
Axis over which to norm. By default, norm over the last axis. If axis is None, vec is flattened then normed.
If True, the output will have the same number of dimensions as vec, with shape 1 on axis.
Euclidean norms of vectors.
Examples
>>> import numpy as np
>>> vec = [[8, 15, 0], [0, 36, 77]]
>>> vector_norm(vec)
array([ 17., 85.])
>>> vector_norm(vec, keepdims=True)
array([[ 17.],
[ 85.]])
>>> vector_norm(vec, axis=0)
array([ 8., 39., 77.])
permutations
dipy.core.sphere_stats.permutations
Bases: object
permutations(iterable[, r]) --> permutations object
Return successive r-length permutations of elements in the iterable.
permutations(range(3), 2) --> (0,1), (0,2), (1,0), (1,2), (2,0), (2,1)
dipy.core.sphere_stats.angular_similarity(S, T)
Computes the cosine distance of the best match between points of two sets of vectors S and T.
Examples
>>> import numpy as np
>>> from dipy.core.sphere_stats import angular_similarity
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> angular_similarity(S,T)
3.0
>>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,np.sqrt(2)/2.,np.sqrt(2)/2.],[0,0,1]])
>>> angular_similarity(S,T)
2.7071067811865475
>>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> T=np.array([[1,0,0]])
>>> angular_similarity(S,T)
1.0
>>> S=np.array([[0,1,0],[1,0,0]])
>>> T=np.array([[0,0,1]])
>>> angular_similarity(S,T)
0.0
>>> S=np.array([[0,1,0],[1,0,0]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
Now we use print to reduce the precision of the printed output (so the doctests don't detect unimportant differences):
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
>>> S=np.array([[0,1,0]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
>>> S=np.array([[0,1,0],[0,0,1]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
dipy.core.sphere_stats.compare_orientation_sets(S, T)
Computes the mean cosine distance of the best match between points of two sets of vectors S and T (angular similarity).
First set of vectors.
Second set of vectors.
Maximum mean cosine distance.
Examples
>>> from dipy.core.sphere_stats import compare_orientation_sets
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> compare_orientation_sets(S,T)
1.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,0,1]])
>>> compare_orientation_sets(S,T)
1.0
dipy.core.sphere_stats.eigenstats(points, alpha=0.05)
Principal direction and confidence ellipse.
Implements equations in section 6.3.1(ii) of Fisher, Lewis and Embleton, supplemented by equations in section 3.2.5.
array of points on the sphere of radius 1 in \(\mathbb{R}^3\)
1 minus the coverage for the confidence ellipsoid, e.g. 0.05 for 95% coverage.
centre of ellipsoid
lengths of semiaxes of ellipsoid
dipy.core.sphere_stats.random_uniform_on_sphere(n=1, coords='xyz')
Random unit vectors from a uniform distribution on the sphere.
Number of random vectors
'xyz' for Cartesian form, 'radians' for spherical form in radians, 'degrees' for spherical form in degrees.
Uniformly distributed vectors on the unit sphere.
Notes
The uniform distribution on the sphere, parameterized by spherical coordinates \((\theta, \phi)\), should verify \(\phi\sim U[0,2\pi]\), while \(z=\cos(\theta)\sim U[-1,1]\).
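The note above translates directly into a sampler; a hypothetical numpy sketch (our own helper name, not dipy's implementation):

```python
import numpy as np

def random_uniform_on_sphere_sketch(n, seed=None):
    # phi ~ U[0, 2*pi) and z = cos(theta) ~ U[-1, 1], per the note above.
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    z = rng.uniform(-1.0, 1.0, n)
    s = np.sqrt(1.0 - z * z)  # sin(theta)
    return np.column_stack([s * np.cos(phi), s * np.sin(phi), z])

X = random_uniform_on_sphere_sketch(1000, seed=0)  # (1000, 3) unit vectors
```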
Examples
>>> from dipy.core.sphere_stats import random_uniform_on_sphere
>>> X = random_uniform_on_sphere(4, 'radians')
>>> X.shape == (4, 2)
True
>>> X = random_uniform_on_sphere(4, 'xyz')
>>> X.shape == (4, 3)
True
HemiSphere
dipy.core.subdivide_octahedron.HemiSphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)
Bases: dipy.core.sphere.Sphere
Points on the unit sphere.
A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.
The HemiSphere can be constructed using one of three conventions:
HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
x, y, z : vertices as x-y-z coordinates.
theta, phi : vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.
xyz : vertices as x-y-z coordinates.
faces : indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.
edges : edges between vertices. If unspecified, the edges are derived from the faces.
tol : angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.
See also
Sphere
Methods
find_closest: Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry.
from_sphere: Create an instance from a Sphere.
mirror: Create a full Sphere from a HemiSphere.
subdivide: Create a more subdivided HemiSphere.
edges
faces
vertices
__init__
(self, x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05) Create a HemiSphere from points.
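The antipodal deduplication described above can be illustrated with a small numpy sketch: two unit vectors v and w are treated as the same point on the hemisphere when the absolute cosine of the angle between them is within the angular tolerance. `unique_hemisphere` is a hypothetical helper for illustration, not dipy's implementation.

```python
import numpy as np

def unique_hemisphere(xyz, tol_deg=1e-5):
    """Discard duplicate unit vectors, treating v and -v as the same
    point (antipodal symmetry).  Hypothetical illustration only."""
    kept = []
    cos_tol = np.cos(np.deg2rad(tol_deg))
    for v in np.asarray(xyz, dtype=float):
        v = v / np.linalg.norm(v)
        # |v . w| near 1 means v is (anti)parallel to an existing point
        if all(abs(np.dot(v, w)) < cos_tol for w in kept):
            kept.append(v)
    return np.array(kept)

pts = [[1, 0, 0], [-1, 0, 0], [0, 1, 0]]   # first two are antipodal
print(unique_hemisphere(pts).shape)        # (2, 3)
```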
dipy.core.subdivide_octahedron.
create_unit_hemisphere
(recursion_level=2) Creates a unit sphere by subdividing a unit octahedron and returns half of the sphere.
Level of subdivision; recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have \((4^{recursion\_level}+2)/2\) vertices.
Half of a unit sphere.
See also
create_unit_sphere
, Sphere
, HemiSphere
dipy.core.subdivide_octahedron.
create_unit_sphere
(recursion_level=2) Creates a unit sphere by subdividing a unit octahedron.
Starts with a unit octahedron and subdivides the faces, projecting the resulting points onto the surface of a unit sphere.
Level of subdivision; recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have \(4^{recursion\_level}+2\) vertices.
The unit sphere.
See also
create_unit_hemisphere
, Sphere
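The subdivision step and the vertex-count formula above can be sketched in pure numpy: each triangular face is split into four by its edge midpoints, and the midpoints are projected back onto the unit sphere. This is a sketch assuming edge-midpoint subdivision, not dipy's implementation, but it reproduces the \(4^{recursion\_level}+2\) vertex count.

```python
import numpy as np

def subdivided_sphere(recursion_level=2):
    """Return the vertices of a unit sphere built by subdividing a
    unit octahedron.  Illustration only, not dipy's implementation."""
    def norm(p):
        p = np.asarray(p, dtype=float)
        return tuple(p / np.linalg.norm(p))

    # unit octahedron: 6 vertices, 8 triangular faces
    v = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = [(v[a], v[b], v[c])
             for a in (0, 1) for b in (2, 3) for c in (4, 5)]

    for _ in range(recursion_level - 1):
        new_faces = []
        for a, b, c in faces:
            # edge midpoints, projected back onto the unit sphere
            ab, bc, ca = (norm(np.add(a, b)), norm(np.add(b, c)),
                          norm(np.add(c, a)))
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        faces = new_faces

    return sorted({p for f in faces for p in f})

print(len(subdivided_sphere(1)))   # 6  == 4**1 + 2
print(len(subdivided_sphere(2)))   # 18 == 4**2 + 2
```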
dipy.core.wavelet.
afb3D
(x, af1, af2=None, af3=None) 3D Analysis Filter Bank.
N1 by N2 by N3 array, where (1) N1, N2, N3 are all even; (2) N1 >= 2*len(af1); (3) N2 >= 2*len(af2); (4) N3 >= 2*len(af3).
analysis filters for dimension i: afi[:, 1] is the lowpass filter, afi[:, 2] is the highpass filter
lowpass subband
highpass subbands, h[d] for d = 1..7
dipy.core.wavelet.
afb3D_A
(x, af, d) 3D Analysis Filter Bank (along one dimension only).
N1 by N2 by N3 array (Ni are even).
analysis filter for the columns: af[:, 1] is the lowpass filter, af[:, 2] is the highpass filter
dimension of filtering (d = 1, 2 or 3)
lowpass subbands
highpass subbands
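The one-dimensional analysis step can be sketched in numpy with orthonormal Haar filters (a hypothetical stand-in for the wavelet filters; this is not dipy's implementation, and it uses 0-based column indexing where the docstrings above count columns from 1): filter along one axis with the lowpass and highpass columns, then downsample by 2. Applying this along each of the three dimensions yields the \(2^3 = 8\) subbands (one lowpass plus the seven highpass subbands h[d]).

```python
import numpy as np

# Haar analysis filters: column 0 = lowpass, column 1 = highpass
af = np.array([[1.0,  1.0],
               [1.0, -1.0]]) / np.sqrt(2.0)

def afb1d(x, af, axis):
    """Analysis filter bank along one axis: circular convolution with
    the lowpass/highpass columns of af, then downsampling by 2.
    Minimal sketch with periodic boundary handling."""
    x = np.moveaxis(x, axis, 0)
    lo = np.zeros_like(x)
    hi = np.zeros_like(x)
    for k in range(af.shape[0]):
        shifted = np.roll(x, -k, axis=0)
        lo += af[k, 0] * shifted
        hi += af[k, 1] * shifted
    lo, hi = lo[::2], hi[::2]          # keep every second sample
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 8))
lo, hi = afb1d(x, af, axis=0)
print(lo.shape, hi.shape)              # (4, 8, 8) (4, 8, 8)
```

Because the Haar pair is orthonormal, the energies of the two subbands sum exactly to the energy of the input.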