nn

bench : Run benchmarks for module using nose.
test : Run tests for module using nose.

nn.model

LooseVersion : Version numbering for anarchists and software realists.
MultipleLayerPercepton : Multiple Layer Perceptron with Dropout.
SingleLayerPerceptron : Single Layer Perceptron with Dropout.
optional_package : Return package-like thing and module setup for package name.
dipy.nn.bench(label='fast', verbose=1, extra_argv=None)

Run benchmarks for module using nose.

Parameters
label : {'fast', 'full', '', attribute identifier}, optional
    Identifies the benchmarks to run. This can be a string to pass to the nosetests executable with the ‘-A’ option, or one of several special values. Special values are:
    * ‘fast’ - the default - which corresponds to the nosetests -A option of ‘not slow’.
    * ‘full’ - fast (as above) and slow benchmarks as in the ‘no -A’ option to nosetests - this is the same as ‘’.
    * None or ‘’ - run all tests.
    * attribute_identifier - string passed directly to nosetests as ‘-A’.
verbose : int, optional
    Verbosity value for benchmark outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
    List with any extra arguments to pass to nosetests.

Returns
success : bool
    Returns True if running the benchmarks works, False if an error occurred.
Notes
Benchmarks are like tests, but have names starting with “bench” instead of “test”, and can be found under the “benchmarks” sub-directory of the module.
Each NumPy module exposes bench in its namespace to run all benchmarks for it.
Examples
>>> success = np.lib.bench()
Running benchmarks for numpy.lib
...
using 562341 items:
unique:
0.11
unique1d:
0.11
ratio: 1.0
nUnique: 56230 == 56230
...
OK
>>> success
True
dipy.nn.test(label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, raise_warnings=None, timer=False)

Run tests for module using nose.

Parameters
label : {'fast', 'full', '', attribute identifier}, optional
    Identifies the tests to run. This can be a string to pass to the nosetests executable with the ‘-A’ option, or one of several special values. Special values are:
    * ‘fast’ - the default - which corresponds to the nosetests -A option of ‘not slow’.
    * ‘full’ - fast (as above) and slow tests as in the ‘no -A’ option to nosetests - this is the same as ‘’.
    * None or ‘’ - run all tests.
    * attribute_identifier - string passed directly to nosetests as ‘-A’.
verbose : int, optional
    Verbosity value for test outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
    List with any extra arguments to pass to nosetests.
doctests : bool, optional
    If True, run doctests in module. Default is False.
coverage : bool, optional
    If True, report coverage of NumPy code. Default is False. (This requires the coverage module.)
raise_warnings : None, str or sequence of warnings, optional
    This specifies which warnings to configure as ‘raise’ instead of being shown once during the test execution. Valid strings are:
    * “develop” : equals (Warning,)
    * “release” : equals (), do not raise on any warnings.
timer : bool or int, optional
    Timing of individual tests with nose-timer (which needs to be installed). If True, time tests and report on all of them. If an integer (say N), report timing results for the N slowest tests.

Returns
result : object
    Returns the result of running the tests as a nose.result.TextTestResult object.
Notes
Each NumPy module exposes test in its namespace to run all tests for it. For example, to run all tests for numpy.lib:
>>> np.lib.test()
Examples
>>> result = np.lib.test()
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s
OK
>>> result.errors
[]
>>> result.knownfail
[]
LooseVersion

dipy.nn.model.LooseVersion(vstring=None)

Bases: distutils.version.Version
Version numbering for anarchists and software realists. Implements the standard interface for version number classes as described above. A version number consists of a series of numbers, separated by either periods or strings of letters. When comparing version numbers, the numeric components will be compared numerically, and the alphabetic components lexically. The following are all valid version numbers, in no particular order:
1.5.1 1.5.2b2 161 3.10a 8.02 3.4j 1996.07.12 3.2.pl0 3.1.1.6 2g6 11g 0.960923 2.2beta29 1.13++ 5.5.kw 2.0b1pl0
In fact, there is no such thing as an invalid version number under this scheme; the rules for comparison are simple and predictable, but may not always give the results you want (for some definition of “want”).
Methods
parse
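The comparison rules described above can be sketched in pure Python: split the version string into numeric and alphabetic components, then compare the resulting lists element by element (numbers numerically, letters lexically). This is a minimal illustration of the scheme, not the distutils implementation; the function name parse_version is invented for the example.

```python
import re

def parse_version(vstring):
    """Split a version string into numeric and alphabetic components.

    Periods act only as separators, so '1.5.2b2' -> [1, 5, 2, 'b', 2].
    """
    components = []
    for part in re.findall(r'\d+|[a-zA-Z]+', vstring):
        components.append(int(part) if part.isdigit() else part)
    return components

print(parse_version('1.5.2b2'))                            # [1, 5, 2, 'b', 2]
print(parse_version('1.5.1') < parse_version('1.5.2b2'))   # True
```

As the docstring warns, the results may not always be what you want: in Python 3, comparing a numeric component against an alphabetic one (e.g. '3.10a' versus '3.10.1') raises TypeError, since ints and strs are not ordered against each other.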
MultipleLayerPercepton

dipy.nn.model.MultipleLayerPercepton(input_shape=(28, 28), num_hidden=[128], act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', loss='sparse_categorical_crossentropy', optimizer='adam')

Bases: object

Methods

evaluate : Evaluate the model on test dataset.
fit : Train the model on train dataset.
predict : Predict the output from input samples.
summary : Get the summary of the model.
__init__(input_shape=(28, 28), num_hidden=[128], act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', loss='sparse_categorical_crossentropy', optimizer='adam')

Multiple Layer Perceptron with Dropout.

Parameters
input_shape : tuple
    Shape of data to be trained.
num_hidden : list
    List of number of nodes in hidden layers.
act_hidden : string
    Activation function used in hidden layers.
dropout : float
    Dropout ratio.
num_out : int
    Number of nodes in output layer.
act_out : string
    Activation function used in output layer.
optimizer : string
    Select optimizer. Default adam.
loss : string
    Select loss function for measuring accuracy. Default sparse_categorical_crossentropy.
evaluate(x_test, y_test, verbose=2)

Evaluate the model on a test dataset.

Parameters
x_test : ndarray
    The test dataset.
y_test : ndarray
    The labels of the test dataset.
verbose : int
    Setting verbose to 0, 1 or 2 controls how much progress output is shown during evaluation.

Returns
List of loss value and accuracy value on the test dataset.
fit(x_train, y_train, epochs=5)

Train the model on a training dataset.

The fit method will train the model for a fixed number of epochs (iterations) on a dataset.

Parameters
x_train : ndarray
    The training dataset.
y_train : ndarray
    The labels of the training dataset.
epochs : int
    The number of epochs.

Returns
A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs.
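To make the architecture concrete, here is a hedged NumPy sketch of the forward pass such a perceptron performs: flatten the input, apply one dense + relu (+ dropout at train time) layer per entry in num_hidden, then a dense + softmax output layer. This illustrates the layer stack described by the constructor parameters, not dipy's actual Keras-backed implementation; the function name, weight layout, and initialisation are invented for the example.

```python
import numpy as np

def mlp_forward(x, weights, dropout=0.2, train=False, rng=None):
    """Forward pass: flatten -> (dense, relu, dropout)* -> dense, softmax.

    weights is a list of (W, b) pairs; the last pair is the output layer.
    """
    h = x.reshape(x.shape[0], -1)              # flatten, e.g. (n, 28, 28) -> (n, 784)
    for W, b in weights[:-1]:
        h = np.maximum(h @ W + b, 0.0)         # dense + relu hidden layer
        if train:                              # inverted dropout, train time only
            mask = (rng.random(h.shape) >= dropout) / (1.0 - dropout)
            h = h * mask
    W, b = weights[-1]
    logits = h @ W + b                         # dense output layer
    z = logits - logits.max(axis=1, keepdims=True)   # stabilised softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
# one hidden layer of 128 nodes and 10 output classes, matching the defaults
weights = [(rng.standard_normal((784, 128)) * 0.01, np.zeros(128)),
           (rng.standard_normal((128, 10)) * 0.01, np.zeros(10))]
probs = mlp_forward(rng.standard_normal((4, 28, 28)), weights)
print(probs.shape)         # (4, 10): one probability row per input sample
```

With act_out='softmax' each output row is a probability distribution over the num_out classes, which is what the sparse_categorical_crossentropy loss expects.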
SingleLayerPerceptron

dipy.nn.model.SingleLayerPerceptron(input_shape=(28, 28), num_hidden=128, act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', optimizer='adam', loss='sparse_categorical_crossentropy')

Bases: object

Methods

evaluate : Evaluate the model on test dataset.
fit : Train the model on train dataset.
predict : Predict the output from input samples.
summary : Get the summary of the model.
__init__(input_shape=(28, 28), num_hidden=128, act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', optimizer='adam', loss='sparse_categorical_crossentropy')

Single Layer Perceptron with Dropout.

Parameters
input_shape : tuple
    Shape of data to be trained.
num_hidden : int
    Number of nodes in hidden layer.
act_hidden : string
    Activation function used in hidden layer.
dropout : float
    Dropout ratio.
num_out : int
    Number of nodes in output layer.
act_out : string
    Activation function used in output layer.
optimizer : string
    Select optimizer. Default adam.
loss : string
    Select loss function for measuring accuracy. Default sparse_categorical_crossentropy.
evaluate(x_test, y_test, verbose=2)

Evaluate the model on a test dataset.

Parameters
x_test : ndarray
    The test dataset.
y_test : ndarray
    The labels of the test dataset.
verbose : int
    Setting verbose to 0, 1 or 2 controls how much progress output is shown during evaluation.

Returns
List of loss value and accuracy value on the test dataset.
fit(x_train, y_train, epochs=5)

Train the model on a training dataset.

The fit method will train the model for a fixed number of epochs (iterations) on a dataset.

Parameters
x_train : ndarray
    The training dataset.
y_train : ndarray
    The labels of the training dataset.
epochs : int
    The number of epochs.

Returns
A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs.
optional_package

dipy.nn.model.optional_package(name, trip_msg=None)

Return package-like thing and module setup for package name.

Parameters
name : str
    Package name.
trip_msg : None or str
    Message to give when someone tries to use the returned package, but we could not import it, and have returned a TripWire object instead. Default message if None.

Returns
pkg_like : module or TripWire instance
    If we can import the package, return it. Otherwise return an object raising an error when accessed.
have_pkg : bool
    True if import for package was successful, False otherwise.
module_setup : function
    Callable usually set as setup_module in calling namespace, to allow skipping tests.
Examples
Typical use would be something like this at the top of a module using an optional package:
>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')
Of course in this case the package doesn’t exist, and so, in the module:
>>> have_pkg
False
and
>>> pkg.some_function()
Traceback (most recent call last):
...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError
If the module does exist - we get the module
>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True
Or a submodule if that’s what we asked for
>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True
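The behaviour shown in the examples can be sketched in pure Python: attempt the import, and on failure return a placeholder object that raises on any attribute access. This is a simplified, hedged re-implementation of the optional_package pattern; the TripWire and TripWireError names are modelled on dipy.utils.optpkg but the code here is illustrative only.

```python
import importlib

class TripWireError(AttributeError):
    """Raised when a missing optional package is actually used."""

class TripWire:
    """Placeholder that raises TripWireError on any attribute access."""
    def __init__(self, msg):
        self._msg = msg  # stored normally; __getattr__ only fires on misses
    def __getattr__(self, name):
        raise TripWireError(self._msg)

def optional_package(name, trip_msg=None):
    """Return (package_or_tripwire, have_package, setup_module)."""
    try:
        pkg = importlib.import_module(name)
    except ImportError:
        if trip_msg is None:
            trip_msg = ('We need package %s for these functions, but '
                        '``import %s`` raised an ImportError' % (name, name))
        def setup_module():
            pass  # a real implementation would mark dependent tests as skipped
        return TripWire(trip_msg), False, setup_module
    return pkg, True, lambda: None

pkg, have_pkg, _ = optional_package('not_a_package')
print(have_pkg)                    # False; pkg raises if you touch it
os_mod, have_os, _ = optional_package('os')
print(hasattr(os_mod, 'path'))     # True; the real module came back
```

Because TripWireError subclasses AttributeError, hasattr() on the placeholder simply returns False, so feature-detection code keeps working even when the optional dependency is missing.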