nn

bench([label, verbose, extra_argv])

Run benchmarks for module using nose.

test([label, verbose, extra_argv, doctests, …])

Run tests for module using nose.

Module: nn.model

LooseVersion([vstring])

Version numbering for anarchists and software realists.

MultipleLayerPercepton([input_shape, …])

Methods

SingleLayerPerceptron([input_shape, …])

Methods

optional_package(name[, trip_msg])

Return package-like thing and module setup for package name

bench

dipy.nn.bench(label='fast', verbose=1, extra_argv=None)

Run benchmarks for module using nose.

Parameters
label : {‘fast’, ‘full’, ‘’, attribute identifier}, optional

Identifies the benchmarks to run. This can be a string to pass to the nosetests executable with the ‘-A’ option, or one of several special values. Special values are:

  • ‘fast’ - the default - which corresponds to the nosetests -A option of ‘not slow’.

  • ‘full’ - fast (as above) and slow benchmarks as in the ‘no -A’ option to nosetests - this is the same as ‘’.

  • None or ‘’ - run all tests.

  • attribute_identifier - string passed directly to nosetests as ‘-A’.

verbose : int, optional

Verbosity value for benchmark outputs, in the range 1-10. Default is 1.

extra_argv : list, optional

List with any extra arguments to pass to nosetests.

Returns
success : bool

Returns True if running the benchmarks works, False if an error occurred.

Notes

Benchmarks are like tests, but have names starting with “bench” instead of “test”, and can be found under the “benchmarks” sub-directory of the module.

Each NumPy module exposes bench in its namespace to run all benchmarks for it.

Examples

>>> success = np.lib.bench() 
Running benchmarks for numpy.lib
...
using 562341 items:
unique:
0.11
unique1d:
0.11
ratio: 1.0
nUnique: 56230 == 56230
...
OK
>>> success 
True

test

dipy.nn.test(label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, raise_warnings=None, timer=False)

Run tests for module using nose.

Parameters
label : {‘fast’, ‘full’, ‘’, attribute identifier}, optional

Identifies the tests to run. This can be a string to pass to the nosetests executable with the ‘-A’ option, or one of several special values. Special values are:

  • ‘fast’ - the default - which corresponds to the nosetests -A option of ‘not slow’.

  • ‘full’ - fast (as above) and slow tests as in the ‘no -A’ option to nosetests - this is the same as ‘’.

  • None or ‘’ - run all tests.

  • attribute_identifier - string passed directly to nosetests as ‘-A’.

verbose : int, optional

Verbosity value for test outputs, in the range 1-10. Default is 1.

extra_argv : list, optional

List with any extra arguments to pass to nosetests.

doctests : bool, optional

If True, run doctests in module. Default is False.

coverage : bool, optional

If True, report coverage of NumPy code. Default is False. (This requires the coverage module).

raise_warnings : None, str or sequence of warnings, optional

This specifies which warnings to configure as ‘raise’ instead of being shown once during the test execution. Valid strings are:

  • “develop” : equals (Warning,)

  • “release” : equals (), do not raise on any warnings.

timer : bool or int, optional

Timing of individual tests with nose-timer (which needs to be installed). If True, time tests and report on all of them. If an integer (say N), report timing results for N slowest tests.

Returns
result : object

Returns the result of running the tests as a nose.result.TextTestResult object.

Notes

Each NumPy module exposes test in its namespace to run all tests for it. For example, to run all tests for numpy.lib:

>>> np.lib.test() 

Examples

>>> result = np.lib.test() 
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s

OK

>>> result.errors 
[]
>>> result.knownfail 
[]

LooseVersion

class dipy.nn.model.LooseVersion(vstring=None)

Bases: distutils.version.Version

Version numbering for anarchists and software realists. Implements the standard interface for version number classes described in distutils.version. A version number consists of a series of numbers, separated by either periods or strings of letters. When comparing version numbers, the numeric components are compared numerically and the alphabetic components lexically. The following are all valid version numbers, in no particular order:

1.5.1 1.5.2b2 161 3.10a 8.02 3.4j 1996.07.12 3.2.pl0 3.1.1.6 2g6 11g 0.960923 2.2beta29 1.13++ 5.5.kw 2.0b1pl0

In fact, there is no such thing as an invalid version number under this scheme; the rules for comparison are simple and predictable, but may not always give the results you want (for some definition of “want”).

Methods

parse

__init__(self, vstring=None)

Initialize self. See help(type(self)) for accurate signature.

component_re = re.compile('(\\d+ | [a-z]+ | \\.)', re.VERBOSE)
parse(self, vstring)
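As a minimal sketch of what parse does, assuming only the component_re pattern shown above: the version string is split into numeric and alphabetic components, the ‘.’ separators are dropped, and purely numeric components are converted to int so that they compare numerically. (parse_version here is a hypothetical stand-in for the method, not part of dipy.)

```python
import re

# The same pattern LooseVersion uses to split version strings.
component_re = re.compile(r'(\d+ | [a-z]+ | \.)', re.VERBOSE)

def parse_version(vstring):
    # Split into components, dropping empty strings and the '.' separators,
    # then convert purely numeric components to int.
    components = [c for c in component_re.split(vstring) if c and c != '.']
    return [int(c) if c.isdigit() else c for c in components]

print(parse_version('1.5.2b2'))   # [1, 5, 2, 'b', 2]
print(parse_version('3.4j'))      # [3, 4, 'j']
```

Comparing the resulting lists element by element gives the "numeric parts numerically, alphabetic parts lexically" ordering described above.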

MultipleLayerPercepton

class dipy.nn.model.MultipleLayerPercepton(input_shape=(28, 28), num_hidden=[128], act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', loss='sparse_categorical_crossentropy', optimizer='adam')

Bases: object

Methods

evaluate(self, x_test, y_test[, verbose])

Evaluate the model on test dataset.

fit(self, x_train, y_train[, epochs])

Train the model on train dataset.

predict(self, x_test)

Predict the output from input samples.

summary(self)

Get the summary of the model.

__init__(self, input_shape=(28, 28), num_hidden=[128], act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', loss='sparse_categorical_crossentropy', optimizer='adam')

Multiple Layer Perceptron with Dropout.

Parameters
input_shape : tuple

Shape of the data to be trained on.

num_hidden : list of int

Number of nodes in each hidden layer.

act_hidden : string

Activation function used in the hidden layers.

dropout : float

Dropout ratio.

num_out : int

Number of nodes in the output layer. Default is 10.

act_out : string

Activation function used in the output layer.

optimizer : string

Optimizer to use. Default is ‘adam’.

loss : string

Loss function to optimize. Default is ‘sparse_categorical_crossentropy’.
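As a hypothetical usage sketch: the class itself needs TensorFlow, so the model calls are shown as comments; the NumPy arrays below only illustrate the input format the model expects for MNIST-like data with the default input_shape=(28, 28).

```python
import numpy as np

# MNIST-like toy data matching input_shape=(28, 28):
x_train = np.random.rand(64, 28, 28)            # shape (BatchSize, 28, 28)
y_train = np.random.randint(0, 10, size=(64,))  # integer labels, shape (BatchSize,)

# With TensorFlow installed, usage would look like (hypothetical sketch):
# model = MultipleLayerPercepton(input_shape=(28, 28), num_hidden=[128, 64])
# hist = model.fit(x_train, y_train, epochs=5)
# loss, accuracy = model.evaluate(x_test, y_test)
```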

evaluate(self, x_test, y_test, verbose=2)

Evaluate the model on test dataset.

The evaluate method will evaluate the model on a test dataset.

Parameters
x_test : ndarray

Test dataset.

y_test : ndarray, shape (BatchSize,)

Labels of the test dataset.

verbose : int, optional

Verbosity mode: 0, 1, or 2 controls how much progress output is shown. Default is 2.

Returns
evaluate : list

List of the loss value and accuracy value on the test dataset.

fit(self, x_train, y_train, epochs=5)

Train the model on train dataset.

The fit method will train the model for a fixed number of epochs (iterations) on a dataset.

Parameters
x_train : ndarray

Training dataset.

y_train : ndarray, shape (BatchSize,)

Labels of the training dataset.

epochs : int, optional

Number of epochs. Default is 5.

Returns
hist : object

A History object. Its History.history attribute is a record of training loss values and metric values at successive epochs.

predict(self, x_test)

Predict the output from input samples.

The predict method generates output predictions for the input samples.

Parameters
x_test : ndarray

Test dataset or input samples.

Returns
predict : ndarray, shape (TestSize, OutputSize)

NumPy array(s) of predictions.

summary(self)

Get the summary of the model.

The summary is textual and includes the layers, their order in the model, and the output shape of each layer.

Returns
summary : NoneType

The summary is printed; the method itself returns None.

SingleLayerPerceptron

class dipy.nn.model.SingleLayerPerceptron(input_shape=(28, 28), num_hidden=128, act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', optimizer='adam', loss='sparse_categorical_crossentropy')

Bases: object

Methods

evaluate(self, x_test, y_test[, verbose])

Evaluate the model on test dataset.

fit(self, x_train, y_train[, epochs])

Train the model on train dataset.

predict(self, x_test)

Predict the output from input samples.

summary(self)

Get the summary of the model.

__init__(self, input_shape=(28, 28), num_hidden=128, act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', optimizer='adam', loss='sparse_categorical_crossentropy')

Single Layer Perceptron with Dropout.

Parameters
input_shape : tuple

Shape of the data to be trained on.

num_hidden : int

Number of nodes in the hidden layer.

act_hidden : string

Activation function used in the hidden layer.

dropout : float

Dropout ratio.

num_out : int

Number of nodes in the output layer. Default is 10.

act_out : string

Activation function used in the output layer.

optimizer : string

Optimizer to use. Default is ‘adam’.

loss : string

Loss function to optimize. Default is ‘sparse_categorical_crossentropy’.
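A hypothetical sketch of the shapes flowing through predict, assuming the defaults input_shape=(28, 28) and num_out=10. Only NumPy is needed here; the actual model call, which requires TensorFlow, is shown as a comment.

```python
import numpy as np

x_test = np.random.rand(16, 28, 28)   # shape (TestSize, 28, 28)

# model = SingleLayerPerceptron()     # needs TensorFlow installed
# preds = model.predict(x_test)       # ndarray, shape (TestSize, OutputSize)

# With num_out=10, the predicted output shape would be:
expected_shape = (x_test.shape[0], 10)
print(expected_shape)                 # (16, 10)
```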

evaluate(self, x_test, y_test, verbose=2)

Evaluate the model on test dataset.

The evaluate method will evaluate the model on a test dataset.

Parameters
x_test : ndarray

Test dataset.

y_test : ndarray, shape (BatchSize,)

Labels of the test dataset.

verbose : int, optional

Verbosity mode: 0, 1, or 2 controls how much progress output is shown. Default is 2.

Returns
evaluate : list

List of the loss value and accuracy value on the test dataset.

fit(self, x_train, y_train, epochs=5)

Train the model on train dataset.

The fit method will train the model for a fixed number of epochs (iterations) on a dataset.

Parameters
x_train : ndarray

Training dataset.

y_train : ndarray, shape (BatchSize,)

Labels of the training dataset.

epochs : int, optional

Number of epochs. Default is 5.

Returns
hist : object

A History object. Its History.history attribute is a record of training loss values and metric values at successive epochs.

predict(self, x_test)

Predict the output from input samples.

The predict method generates output predictions for the input samples.

Parameters
x_test : ndarray

Test dataset or input samples.

Returns
predict : ndarray, shape (TestSize, OutputSize)

NumPy array(s) of predictions.

summary(self)

Get the summary of the model.

The summary is textual and includes the layers, their order in the model, and the output shape of each layer.

Returns
summary : NoneType

The summary is printed; the method itself returns None.

optional_package

dipy.nn.model.optional_package(name, trip_msg=None)

Return package-like thing and module setup for package name

Parameters
name : str

Package name.

trip_msg : None or str

Message to give when someone tries to use the returned package, but we could not import it, and have returned a TripWire object instead. Default message if None.

Returns
pkg_like : module or TripWire instance

If we can import the package, return it. Otherwise return an object raising an error when accessed.

have_pkg : bool

True if import for package was successful, False otherwise.

module_setup : function

Callable usually set as setup_module in calling namespace, to allow skipping tests.

Examples

Typical use would be something like this at the top of a module using an optional package:

>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')

Of course in this case the package doesn’t exist, and so, in the module:

>>> have_pkg
False

and

>>> pkg.some_function() 
Traceback (most recent call last):
    ...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError

If the module does exist - we get the module

>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True

Or a submodule if that’s what we asked for

>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True