nn

Run benchmarks for module using nose. 

Run tests for module using nose. 
nn.histo_resdnn
Class and helper functions for fitting the Histological ResDNN model.

Layer that adds a list of inputs. 

Just your regular densely-connected NN layer.

Points on the unit sphere. 

This class is intended for the ResDNN Histology Network model. 

Model groups layers into an object with training and inference features. 



Input() is used to instantiate a Keras tensor. 

Decorator replaces custom skip test markup in doctests. 

Get indices where the b-value is bval.

Provide full paths to example or test datasets. 

Provide triangulated spheres.

Return package-like thing and module setup for package name.

Change the logger of the HistoResDNN to one of the following: DEBUG, INFO, WARNING, CRITICAL, ERROR.

Spherical function to spherical harmonics (SH). 

Spherical harmonics (SH) to spherical function (SF). 

Returns the degree and order of the spherical harmonics.

This function gives the unique rounded b-values of the data.
nn.model

Methods 

Methods 



Return package-like thing and module setup for package name.
Run benchmarks for module using nose.
Identifies the benchmarks to run. This can be a string to pass to the nosetests executable with the '-A' option, or one of several special values. Special values are:
'fast' - the default - which corresponds to the nosetests -A option of 'not slow'.
'full' - fast (as above) and slow benchmarks as in the 'no -A' option to nosetests - this is the same as ''.
None or '' - run all tests.
attribute_identifier - string passed directly to nosetests as '-A'.
Verbosity value for benchmark outputs, in the range 1-10. Default is 1.
List with any extra arguments to pass to nosetests.
Returns True if running the benchmarks works, False if an error occurred.
Notes
Benchmarks are like tests, but have names starting with “bench” instead of “test”, and can be found under the “benchmarks” subdirectory of the module.
Each NumPy module exposes bench in its namespace to run all benchmarks for it.
Examples
>>> success = np.lib.bench()
Running benchmarks for numpy.lib
...
using 562341 items:
unique:
0.11
unique1d:
0.11
ratio: 1.0
nUnique: 56230 == 56230
...
OK
>>> success
True
Run tests for module using nose.
Identifies the tests to run. This can be a string to pass to the nosetests executable with the '-A' option, or one of several special values. Special values are:
'fast' - the default - which corresponds to the nosetests -A option of 'not slow'.
'full' - fast (as above) and slow tests as in the 'no -A' option to nosetests - this is the same as ''.
None or '' - run all tests.
attribute_identifier - string passed directly to nosetests as '-A'.
Verbosity value for test outputs, in the range 1-10. Default is 1.
List with any extra arguments to pass to nosetests.
If True, run doctests in module. Default is False.
If True, report coverage of NumPy code. Default is False. (This requires the coverage module).
This specifies which warnings to configure as 'raise' instead of being shown once during the test execution. Valid strings are:
"develop" - equals (Warning,)
"release" - equals (), do not raise on any warnings.
Timing of individual tests with nose-timer (which needs to be installed). If True, time tests and report on all of them. If an integer (say N), report timing results for N slowest tests.
Returns the result of running the tests as a nose.result.TextTestResult object.
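For intuition, the raise_warnings="develop" setting above amounts to installing an "error" filter for all Warning categories for the duration of the run. A minimal sketch with the standard warnings module (the helper name is hypothetical, not part of the nose tester API):

```python
import warnings

# Sketch of what raise_warnings="develop" amounts to: configure every
# Warning category to raise instead of being shown once.
def run_with_raised_warnings(fn, categories=(Warning,)):
    with warnings.catch_warnings():
        for cat in categories:
            warnings.simplefilter("error", cat)
        return fn()

def noisy():
    warnings.warn("old API", DeprecationWarning)
    return 42

try:
    run_with_raised_warnings(noisy)  # "develop"-style: warnings raise
    outcome = "no warning raised"
except DeprecationWarning:
    outcome = "warning raised"
print(outcome)  # warning raised
```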
Notes
Each NumPy module exposes test in its namespace to run all tests for it. For example, to run all tests for numpy.lib:
>>> np.lib.test()
Examples
>>> result = np.lib.test()
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s
OK
>>> result.errors
[]
>>> result.knownfail
[]
Add
Bases: keras.layers.merge._Merge
Layer that adds a list of inputs.
It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
Examples:
>>> input_shape = (2, 3, 4)
>>> x1 = tf.random.normal(input_shape)
>>> x2 = tf.random.normal(input_shape)
>>> y = tf.keras.layers.Add()([x1, x2])
>>> print(y.shape)
(2, 3, 4)
Used in a functional model:
>>> input1 = tf.keras.layers.Input(shape=(16,))
>>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1)
>>> input2 = tf.keras.layers.Input(shape=(32,))
>>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2)
>>> # equivalent to `added = tf.keras.layers.add([x1, x2])`
>>> added = tf.keras.layers.Add()([x1, x2])
>>> out = tf.keras.layers.Dense(4)(added)
>>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)
activity_regularizer
Optional regularizer function for the output of this layer.
compute_dtype
The dtype of the layer’s computations.
dtype
The dtype of the layer weights.
dtype_policy
The dtype policy associated with this layer.
dynamic
Whether the layer is dynamic (eager-only); set in the constructor.
inbound_nodes
Deprecated, do NOT use! Only for compatibility with external Keras.
input
Retrieves the input tensor(s) of a layer.
input_mask
Retrieves the input mask tensor(s) of a layer.
input_shape
Retrieves the input shape(s) of a layer.
input_spec
InputSpec instance(s) describing the input format for this layer.
losses
List of losses added using the add_loss() API.
metrics
List of metrics added using the add_metric() API.
name
Name of the layer (string), set in the constructor.
name_scope
Returns a tf.name_scope instance for this class.
non_trainable_variables
Sequence of non-trainable variables owned by this module and its submodules.
non_trainable_weights
List of all nontrainable weights tracked by this layer.
outbound_nodes
Deprecated, do NOT use! Only for compatibility with external Keras.
output
Retrieves the output tensor(s) of a layer.
output_mask
Retrieves the output mask tensor(s) of a layer.
output_shape
Retrieves the output shape(s) of a layer.
submodules
Sequence of all submodules.
supports_masking
Whether this layer supports computing a mask using compute_mask.
trainable_variables
Sequence of trainable variables owned by this module and its submodules.
trainable_weights
List of all trainable weights tracked by this layer.
variable_dtype
Alias of Layer.dtype, the dtype of the weights.
variables
Returns the list of all layer variables/weights.
weights
Returns the list of all layer variables/weights.
Methods

Wraps call, applying pre- and post-processing steps.

Add loss tensor(s), potentially dependent on layer inputs. 

Adds metric tensor to the layer. 

Add update op(s), potentially dependent on layer inputs. 

Deprecated, do NOT use! Alias for add_weight. 

Adds a new variable to the layer. 

Deprecated, do NOT use! 

This is where the layer's logic lives. 

Computes an output mask tensor. 

Compute the output tensor signature of the layer based on the inputs. 

Count the total number of scalars composing the weights. 

Finalizes the layer's state after updating layer weights.

Creates a layer from its config. 

Returns the config of the layer. 

Retrieves the input tensor(s) of a layer at a given node. 

Retrieves the input mask tensor(s) of a layer at a given node. 

Retrieves the input shape(s) of a layer at a given node. 

Deprecated, do NOT use! 

Retrieves the output tensor(s) of a layer at a given node. 

Retrieves the output mask tensor(s) of a layer at a given node. 

Retrieves the output shape(s) of a layer at a given node. 

Deprecated, do NOT use! 

Returns the current weights of the layer, as NumPy arrays. 

Sets the weights of the layer, from NumPy arrays. 

Decorator to automatically enter the module name scope. 
build 

compute_output_shape 
Initializes a Merge layer.
**kwargs: standard layer keyword arguments.
Dense
Bases: keras.engine.base_layer.Layer
Just your regular densely-connected NN layer.
Dense implements the operation: output = activation(dot(input, kernel) + bias) where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True). These are all attributes of Dense.
Note: If the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 0 of the kernel (using tf.tensordot). For example, if input has dimensions (batch_size, d0, d1), then we create a kernel with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have shape (batch_size, d0, units).
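The shape rule in the note above can be checked with plain NumPy, using np.tensordot the same way the note describes (the concrete sizes here are arbitrary examples):

```python
import numpy as np

# For input (batch_size, d0, d1), Dense's kernel has shape (d1, units)
# and contracts the last input axis with axis 0 of the kernel.
batch_size, d0, d1, units = 2, 5, 7, 3
inputs = np.random.randn(batch_size, d0, d1)
kernel = np.random.randn(d1, units)
outputs = np.tensordot(inputs, kernel, axes=[[2], [0]])
print(outputs.shape)  # (2, 5, 3) == (batch_size, d0, units)
```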
Besides, layer attributes cannot be modified after the layer has been called once (except the trainable attribute). When a popular kwarg input_shape is passed, then keras will create an input layer to insert before the current layer. This can be treated equivalent to explicitly defining an InputLayer.
Example:
>>> # Create a `Sequential` model and add a Dense layer as the first layer.
>>> model = tf.keras.models.Sequential()
>>> model.add(tf.keras.Input(shape=(16,)))
>>> model.add(tf.keras.layers.Dense(32, activation='relu'))
>>> # Now the model will take as input arrays of shape (None, 16)
>>> # and output arrays of shape (None, 32).
>>> # Note that after the first layer, you don't need to specify
>>> # the size of the input anymore:
>>> model.add(tf.keras.layers.Dense(32))
>>> model.output_shape
(None, 32)
units: Positive integer, dimensionality of the output space.
activation: Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the kernel weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel weights matrix.
bias_constraint: Constraint function applied to the bias vector.
Input shape: N-D tensor with shape (batch_size, …, input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).
Output shape: N-D tensor with shape (batch_size, …, units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units).
activity_regularizer
Optional regularizer function for the output of this layer.
compute_dtype
The dtype of the layer’s computations.
dtype
The dtype of the layer weights.
dtype_policy
The dtype policy associated with this layer.
dynamic
Whether the layer is dynamic (eager-only); set in the constructor.
inbound_nodes
Deprecated, do NOT use! Only for compatibility with external Keras.
input
Retrieves the input tensor(s) of a layer.
input_mask
Retrieves the input mask tensor(s) of a layer.
input_shape
Retrieves the input shape(s) of a layer.
input_spec
InputSpec instance(s) describing the input format for this layer.
losses
List of losses added using the add_loss() API.
metrics
List of metrics added using the add_metric() API.
name
Name of the layer (string), set in the constructor.
name_scope
Returns a tf.name_scope instance for this class.
non_trainable_variables
Sequence of non-trainable variables owned by this module and its submodules.
non_trainable_weights
List of all nontrainable weights tracked by this layer.
outbound_nodes
Deprecated, do NOT use! Only for compatibility with external Keras.
output
Retrieves the output tensor(s) of a layer.
output_mask
Retrieves the output mask tensor(s) of a layer.
output_shape
Retrieves the output shape(s) of a layer.
submodules
Sequence of all submodules.
supports_masking
Whether this layer supports computing a mask using compute_mask.
trainable_variables
Sequence of trainable variables owned by this module and its submodules.
trainable_weights
List of all trainable weights tracked by this layer.
variable_dtype
Alias of Layer.dtype, the dtype of the weights.
variables
Returns the list of all layer variables/weights.
weights
Returns the list of all layer variables/weights.
Methods

Wraps call, applying pre- and post-processing steps.

Add loss tensor(s), potentially dependent on layer inputs. 

Adds metric tensor to the layer. 

Add update op(s), potentially dependent on layer inputs. 

Deprecated, do NOT use! Alias for add_weight. 

Adds a new variable to the layer. 

Deprecated, do NOT use! 

Creates the variables of the layer (optional, for subclass implementers). 

This is where the layer's logic lives. 

Computes an output mask tensor. 

Computes the output shape of the layer. 

Compute the output tensor signature of the layer based on the inputs. 

Count the total number of scalars composing the weights. 

Finalizes the layer's state after updating layer weights.

Creates a layer from its config. 
Returns the config of the layer. 


Retrieves the input tensor(s) of a layer at a given node. 

Retrieves the input mask tensor(s) of a layer at a given node. 

Retrieves the input shape(s) of a layer at a given node. 

Deprecated, do NOT use! 

Retrieves the output tensor(s) of a layer at a given node. 

Retrieves the output mask tensor(s) of a layer at a given node. 

Retrieves the output shape(s) of a layer at a given node. 

Deprecated, do NOT use! 

Returns the current weights of the layer, as NumPy arrays. 

Sets the weights of the layer, from NumPy arrays. 

Decorator to automatically enter the module name scope. 
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in between layer instantiation and layer call. It is invoked automatically before the first execution of call().
This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).
input_shape: Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
This is where the layer’s logic lives.
The call() method may not create state (except in its first invocation, wrapping the creation of variables or other resources in tf.init_scope()). It is recommended to create state in __init__(), or the build() method that is called automatically before call() executes the first time.
The first positional inputs argument is subject to special rules:
- inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.
- NumPy array or Python scalar values in inputs get cast as tensors.
- Keras mask metadata is only collected from inputs.
- Layers are built (build(input_shape) method) using shape info from inputs only.
- input_spec compatibility is only checked against inputs.
- Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.
- The SavedModel input specification is generated using inputs only.
- Integration with various ecosystem packages like TFMOT, TFLite, TF.js, etc is only supported for inputs and not for tensors in positional and keyword arguments.

*args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.
**kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
- training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
- mask: Boolean input mask. If the layer's call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
A tensor or list/tuple of tensors.
Computes the output shape of the layer.
This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.
input_shape: Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.
Returns an output shape tuple.
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.
Python dictionary.
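As an illustration of the contract described above (this is a hypothetical minimal class, not Keras code), a config dictionary alone should be enough to re-instantiate the object, without any trained weights:

```python
# Hypothetical minimal object showing the get_config()/from_config()
# round-trip contract: config in, equivalent (untrained) object out.
class ToyLayer:
    def __init__(self, units, activation="linear"):
        self.units = units
        self.activation = activation

    def get_config(self):
        # Return a fresh dict so callers can safely modify it.
        return {"units": self.units, "activation": self.activation}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

layer = ToyLayer(32, activation="relu")
clone = ToyLayer.from_config(layer.get_config())
```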
HemiSphere
Bases: dipy.core.sphere.Sphere
Points on the unit sphere.
A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.
The HemiSphere can be constructed using one of three conventions:
HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
Vertices as xyz coordinates.
Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.
Vertices as xyz coordinates.
Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.
Edges between vertices. If unspecified, the edges are derived from the faces.
Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.
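A rough NumPy sketch of the antipodal de-duplication idea (not dipy's actual algorithm): flip each vector to a canonical sign so that v and -v compare equal, then drop duplicates:

```python
import numpy as np

# Flip each vector so its first non-zero coordinate is positive; after
# this, a point and its antipode have identical coordinates.
def canonical_sign(vectors):
    vectors = np.asarray(vectors, dtype=float)
    first_nonzero = (vectors != 0).argmax(axis=1)
    signs = np.sign(vectors[np.arange(len(vectors)), first_nonzero])
    return vectors * signs[:, None]

points = np.array([[0.0, 0.0, 1.0],
                   [0.0, 0.0, -1.0],   # antipode of the first point
                   [1.0, 0.0, 0.0]])
unique = np.unique(canonical_sign(points), axis=0)
print(len(unique))  # 2: the antipodal pair collapses to one point
```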
See also
Sphere
Methods

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry 

Create instance from a Sphere 

Create a full Sphere from a HemiSphere 

Create a more subdivided HemiSphere 
edges 

faces 

vertices 
Create a HemiSphere from points
HistoResDNN
Bases: object
This class is intended for the ResDNN Histology Network model.
Methods
Load the model pre-training weights to use for the fitting. 


Load the custom pre-training weights to use for the fitting. 

Wrapper function to facilitate prediction of a larger dataset. 
The model was retrained for usage with a different basis function (‘tournier07’) like the proposed model in [1, 2].
To obtain the pre-trained model, use:

>>> resdnn_model = HistoResDNN()
>>> fetch_model_weights_path = get_fnames('histo_resdnn_weights')
>>> resdnn_model.load_model_weights(fetch_model_weights_path)
This model is designed to take as input raw DWI signal on a sphere (ODF) represented as SH of order 8 in the tournier basis and predict fODF of order 8 in the tournier basis. Effectively, this model is mimicking a CSD fit.
Maximum SH order in the SH fit. For a given sh_order, there will be (sh_order + 1) * (sh_order + 2) / 2 SH coefficients for a symmetric basis. Default: 8.
The SH basis to use: 'tournier07' (default) or 'descoteaux07'.
Whether to show information about the processing. Default: False
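The coefficient count quoted above is easy to verify; for the default sh_order of 8, a symmetric basis has 45 coefficients:

```python
# Coefficient count for a symmetric SH basis, as quoted in the
# parameter description: (sh_order + 1) * (sh_order + 2) / 2.
def n_sh_coeffs(sh_order):
    return (sh_order + 1) * (sh_order + 2) // 2

print(n_sh_coeffs(8))  # 45 coefficients for the default sh_order=8
```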
References
[1] Nath, V., Schilling, K. G., Parvathaneni, P., Hansen, C. B., Hainline, A. E., Huo, Y., … & Stepniewska, I. (2019). Deep learning reveals untapped information for local white-matter fiber reconstruction in diffusion-weighted MRI. Magnetic Resonance Imaging, 62, 220-227.
[2] Nath, V., Schilling, K. G., Hansen, C. B., Parvathaneni, P., Hainline, A. E., Bermudez, C., … & Stępniewska, I. (2019). Deep learning captures more accurate diffusion fiber orientations distributions than constrained spherical deconvolution. arXiv preprint arXiv:1911.07927.
Load the model pre-training weights to use for the fitting. Will not work if the declared SH_ORDER does not match the weights' expected input.
Load the custom pre-training weights to use for the fitting. Will not work if the declared SH_ORDER does not match the weights' expected input.
get_fnames(‘histo_resdnn_weights’).
Path to the file containing the weights (hdf5, saved by tensorflow)
Wrapper function to facilitate prediction of a larger dataset. The function will mask, normalize, split, predict and 'reassemble' the data as a volume.
DWI signal in a 4D array
The acquisition scheme matching the data (must contain at least one b0)
Binary mask of the brain to avoid unnecessary computation and unreliable prediction outside the brain. Default: Compute prediction only for nonzero voxels (with at least one nonzero DWI value).
Predicted fODF (as SH). The volume has matching shape to the input data, but with (sh_order + 1) * (sh_order + 2) / 2 as the last dimension.
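A hypothetical NumPy sketch of the mask/split/predict/reassemble pattern this wrapper describes (predict_fn, predict_volume and n_coeffs are illustrative names standing in for the per-voxel network; this is not dipy's actual code):

```python
import numpy as np

# Mask the volume, run the per-voxel predictor on the flattened
# voxels, then scatter the predictions back into a 4D output volume.
def predict_volume(data, predict_fn, mask=None, n_coeffs=45):
    if mask is None:
        # Default: only voxels with at least one non-zero DWI value.
        mask = np.any(data, axis=-1)
    out = np.zeros(data.shape[:3] + (n_coeffs,))
    voxels = data[mask]              # (n_voxels, n_dwi)
    out[mask] = predict_fn(voxels)   # (n_voxels, n_coeffs)
    return out

data = np.zeros((4, 4, 4, 6))
data[1, 2, 3] = 1.0                  # a single voxel with signal
fodf = predict_volume(data, lambda v: np.ones((len(v), 45)))
print(fodf.shape)  # (4, 4, 4, 45)
```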
Model
Bases: keras.engine.base_layer.Layer, keras.utils.version_utils.ModelVersionSelector
Model groups layers into an object with training and inference features.
inputs: The input(s) of the model: a keras.Input object or a combination of keras.Input objects in a dict, list or tuple.
outputs: The output(s) of the model. See Functional API example below.
name: String, the name of the model.
There are two ways to instantiate a Model:
1 - With the "Functional API", where you start from Input, you chain layer calls to specify the model's forward pass, and finally you create your model from inputs and outputs:
```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```
Note: Only dicts, lists, and tuples of input tensors are supported. Nested inputs are not supported (e.g. lists of list or dicts of dict).
A new Functional API model can also be created by using the intermediate tensors. This enables you to quickly extract subcomponents of the model.
Example:
```python
inputs = keras.Input(shape=(None, None, 3))
processed = keras.layers.RandomCrop(width=32, height=32)(inputs)
conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)
pooling = keras.layers.GlobalAveragePooling2D()(conv)
feature = keras.layers.Dense(10)(pooling)

full_model = keras.Model(inputs, feature)
backbone = keras.Model(processed, conv)
activations = keras.Model(conv, feature)
```
Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originate from keras.Input objects. Under the hood, the layers and weights will be shared across these models, so that the user can train the full_model, and use backbone or activations to do feature extraction. The inputs and outputs of the model can be nested structures of tensors as well, and the created models are standard Functional API models that support all the existing APIs.
2 - By subclassing the Model class: in that case, you should define your layers in __init__() and you should implement the model's forward pass in call().
```python
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)

    def call(self, inputs):
        x = self.dense1(inputs)
        return self.dense2(x)
```
If you subclass Model, you can optionally have a training argument (boolean) in call(), which you can use to specify a different behavior in training and inference:
```python
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, inputs, training=False):
        x = self.dense1(inputs)
        if training:
            x = self.dropout(x, training=training)
        return self.dense2(x)
```
Once the model is created, you can configure the model with losses and metrics with model.compile(), train the model with model.fit(), or use the model to do prediction with model.predict().
activity_regularizer
Optional regularizer function for the output of this layer.
compute_dtype
The dtype of the layer’s computations.
distribute_strategy
The tf.distribute.Strategy this model was created under.
dtype
The dtype of the layer weights.
dtype_policy
The dtype policy associated with this layer.
dynamic
Whether the layer is dynamic (eager-only); set in the constructor.
inbound_nodes
Deprecated, do NOT use! Only for compatibility with external Keras.
input
Retrieves the input tensor(s) of a layer.
input_mask
Retrieves the input mask tensor(s) of a layer.
input_shape
Retrieves the input shape(s) of a layer.
input_spec
InputSpec instance(s) describing the input format for this layer.
losses
List of losses added using the add_loss() API.
metrics
Returns the model’s metrics added using compile(), add_metric() APIs.
metrics_names
Returns the model’s display labels for all outputs.
name
Name of the layer (string), set in the constructor.
name_scope
Returns a tf.name_scope instance for this class.
non_trainable_variables
Sequence of non-trainable variables owned by this module and its submodules.
non_trainable_weights
List of all nontrainable weights tracked by this layer.
outbound_nodes
Deprecated, do NOT use! Only for compatibility with external Keras.
output
Retrieves the output tensor(s) of a layer.
output_mask
Retrieves the output mask tensor(s) of a layer.
output_shape
Retrieves the output shape(s) of a layer.
run_eagerly
Settable attribute indicating whether the model should run eagerly.
state_updates
Deprecated, do NOT use!
submodules
Sequence of all submodules.
supports_masking
Whether this layer supports computing a mask using compute_mask.
trainable_variables
Sequence of trainable variables owned by this module and its submodules.
trainable_weights
List of all trainable weights tracked by this layer.
variable_dtype
Alias of Layer.dtype, the dtype of the weights.
variables
Returns the list of all layer variables/weights.
weights
Returns the list of all layer variables/weights.
Methods

Wraps call, applying pre- and post-processing steps.

Add loss tensor(s), potentially dependent on layer inputs. 

Adds metric tensor to the layer. 

Add update op(s), potentially dependent on layer inputs. 

Deprecated, do NOT use! Alias for add_weight. 

Adds a new variable to the layer. 

Deprecated, do NOT use! 

Builds the model based on input shapes received. 

Calls the model on new inputs and returns the outputs as tensors. 

Configures the model for training. 

Compute the total loss, validate it, and return it. 

Computes an output mask tensor. 

Update metric states and collect all metrics to be returned. 

Computes the output shape of the layer. 

Compute the output tensor signature of the layer based on the inputs. 

Count the total number of scalars composing the weights. 

Returns the loss value & metrics values for the model in test mode. 

Evaluates the model on a data generator. 

Finalizes the layer's state after updating layer weights.

Trains the model for a fixed number of epochs (iterations on a dataset). 

Fits the model on data yielded batch-by-batch by a Python generator.

Creates a layer from its config. 
Returns the config of the layer. 


Retrieves the input tensor(s) of a layer at a given node. 

Retrieves the input mask tensor(s) of a layer at a given node. 

Retrieves the input shape(s) of a layer at a given node. 

Retrieves a layer based on either its name (unique) or index. 

Deprecated, do NOT use! 

Retrieves the output tensor(s) of a layer at a given node. 

Retrieves the output mask tensor(s) of a layer at a given node. 

Retrieves the output shape(s) of a layer at a given node. 

Deprecated, do NOT use! 
Retrieves the weights of the model. 


Loads all layer weights, either from a TensorFlow or an HDF5 weight file. 

Creates a function that executes one step of inference. 

Creates a function that executes one step of evaluation. 

Creates a function that executes one step of training. 

Generates output predictions for the input samples. 

Generates predictions for the input samples from a data generator. 
Returns predictions for a single batch of samples. 


The logic for one inference step. 
Resets the state of all the metrics in the model. 


Saves the model to Tensorflow SavedModel or a single HDF5 file. 

Returns the tf.TensorSpec of call inputs as a tuple (args, kwargs). 

Saves all layer weights. 

Sets the weights of the layer, from NumPy arrays. 

Prints a string summary of the network. 

Test the model on a single batch of samples. 

The logic for one evaluation step. 

Returns a JSON string containing the network configuration. 

Returns a yaml string containing the network configuration. 

Runs a single gradient update on a single batch of data. 

The logic for one training step. 

Decorator to automatically enter the module name scope. 
reset_states 
Builds the model based on input shapes received.
This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.
This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).
input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.
Raises ValueError:
- In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).
- If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).
- If not all layers were properly built.
- If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.
Calls the model on new inputs and returns the outputs as tensors.
In this case call() just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method.
inputs: Input tensor, or dict/list/tuple of input tensors.
training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask: A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, check the guide [here](https://www.tensorflow.org/guide/keras/masking_and_padding).
A tensor if there is a single output, or a list of tensors if there is more than one output.
Configures the model for training.
Example:
```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```
tf.keras.optimizers.
a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN1) i.e. persample or pertimestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.
and testing. Each of this can be a string (name of a builtin function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multioutput model, you could also pass a dictionary, such as metrics={‘output_a’: ‘accuracy’, ‘output_b’: [‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well.
(Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients.
 If a list, it is expected to have a 1:1 mapping to the model’s
outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.
weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.
run_eagerly: Bool. Defaults to False. If True, this Model's logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy.
steps_per_execution: Int. Defaults to 1. The number of batches to run during each tf.function call. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution).
jit_compile: If True, compile the model training step with XLA. [XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. This option cannot be enabled with run_eagerly=True. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.
**kwargs: Arguments supported for backwards compatibility only.
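A minimal compile() sketch pulling the arguments above together; the layer sizes, optimizer choice, and learning rate are illustrative assumptions, not prescribed values:

```python
import tensorflow as tf

# Build a tiny binary classifier and compile it with an optimizer
# instance, a Loss instance, and a Metric instance, as described above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[tf.keras.metrics.BinaryAccuracy()],
)
```

String shorthands (e.g. optimizer="adam", loss="mse") are equivalent to passing the corresponding instances with default arguments.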
Compute the total loss, validate it, and return it.
Subclasses can optionally override this method to provide custom loss computation logic.
Example:

```python
class MyModel(tf.keras.Model):

  def __init__(self, *args, **kwargs):
    super(MyModel, self).__init__(*args, **kwargs)
    self.loss_tracker = tf.keras.metrics.Mean(name='loss')

  def compute_loss(self, x, y, y_pred, sample_weight):
    loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
    loss += tf.add_n(self.losses)
    self.loss_tracker.update_state(loss)
    return loss

  def reset_metrics(self):
    self.loss_tracker.reset_states()

  @property
  def metrics(self):
    return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```
x: Input data. y: Target data. y_pred: Predictions returned by the model (output of model(x)) sample_weight: Sample weights for weighting the loss function.
The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).
Update metric states and collect all metrics to be returned.
Subclasses can optionally override this method to provide custom metric updating and collection logic.
Example:

```python
class MyModel(tf.keras.Sequential):

  def compute_metrics(self, x, y, y_pred, sample_weight):
    # This super call updates self.compiled_metrics and returns results
    # for all metrics listed in self.metrics.
    metric_results = super(MyModel, self).compute_metrics(
        x, y, y_pred, sample_weight)

    # Note that self.custom_metric is not listed in self.metrics.
    self.custom_metric.update_state(x, y, y_pred, sample_weight)
    metric_results['custom_metric_name'] = self.custom_metric.result()
    return metric_results
```
x: Input data. y: Target data. y_pred: Predictions returned by the model (output of model.call(x)) sample_weight: Sample weights for weighting the loss function.
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
Returns the loss value & metrics values for the model in test mode.
Computation is done in batches (see the batch_size arg.)
A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).
A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).
A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).
batch_size: Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose: 0 or 1. Verbosity mode. 0 = silent, 1 = progress bar. sample_weight: Optional Numpy array of weights for the test samples,
used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples
 (1:1 mapping between weights and samples), or in the case of
temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.
before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, evaluate will run until the dataset is exhausted. This argument is not supported with array inputs.
callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during evaluation. See [callbacks](/api_docs/python/tf/keras/callbacks).
max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.
use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes.
return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.
**kwargs: Unused at this time.
See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.
Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
RuntimeError: If model.evaluate is wrapped in a tf.function.
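A short evaluate() sketch under illustrative assumptions (random data, arbitrary layer sizes), showing both the list and the return_dict result shapes:

```python
import numpy as np
import tensorflow as tf

# Tiny regression model; evaluate() computes the loss and the compiled
# metrics over the data in batches of `batch_size`.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse", metrics=["mae"])

x = np.random.random((8, 4)).astype("float32")
y = np.random.random((8, 1)).astype("float32")

# With metrics compiled, the default return value is [loss, mae].
loss, mae = model.evaluate(x, y, batch_size=4, verbose=0)

# With return_dict=True, results are keyed by metric name instead.
results = model.evaluate(x, y, batch_size=4, verbose=0, return_dict=True)
```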
Evaluates the model on a data generator.
Model.evaluate now supports generators, so there is no longer any need to use this endpoint.
Trains the model for a fixed number of epochs (iterations on a dataset).
A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).
A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).
A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.
A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.
it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).
Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.
0 = silent, 1 = progress bar, 2 = one line per epoch. 'auto' defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g., in a production environment).
List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on the verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.
Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.
validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:
A tuple (x_val, y_val) of Numpy arrays or tensors.
A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.
A tf.data.Dataset.
A Python generator or keras.utils.Sequence returning
(inputs, targets) or (inputs, targets, sample_weights).
validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.
before each epoch) or str (for 'batch'). This argument is ignored when x is a generator or an object of tf.data.Dataset. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.
class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an underrepresented class.
the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead pass sample weights as the third element of x.
Epoch at which to start training (useful for resuming a previous training run).
Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and steps_per_epoch is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:
steps_per_epoch=None is not supported.
is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If validation_steps is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If validation_steps is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.
Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
validation_freq: Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.
max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.
use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes.
A common pattern is to pass a tf.data.Dataset, generator, or
tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as x. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.
A notable unsupported data type is the namedtuple. The reason is that
it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:
namedtuple(“example_tuple”, [“y”, “x”])
it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:
namedtuple(“other_tuple”, [“x”, “y”, “z”])
where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)
A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
RuntimeError: 1. If the model was never compiled, or 2. If model.fit is wrapped in tf.function.
ValueError: In case of mismatch between the provided input data and what the model expects, or when the input data is empty.
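A minimal fit() sketch with a validation split; the data, sizes, and epoch count are arbitrary illustrative values:

```python
import numpy as np
import tensorflow as tf

# Tiny regression model trained on random data.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

x = np.random.random((20, 4)).astype("float32")
y = np.random.random((20, 1)).astype("float32")

# The last 25% of the samples are held out for validation.
history = model.fit(x, y, batch_size=5, epochs=2,
                    validation_split=0.25, verbose=0)
# history.history maps metric names to per-epoch values
# (keys 'loss' and 'val_loss' here).
```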
Fits the model on data yielded batch-by-batch by a Python generator.
Model.fit now supports generators, so there is no longer any need to use this endpoint.
Creates a layer from its config.
This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).
output of get_config.
A layer instance.
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.
Python dictionary.
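A small get_config()/from_config() round trip, sketched on a single layer (the layer choice is illustrative); the clone shares the configuration but gets freshly initialized weights:

```python
import tensorflow as tf

# Serialize a layer's configuration to a plain Python dict, then
# re-instantiate an equivalent (untrained) layer from it.
layer = tf.keras.layers.Dense(8, activation="relu")
config = layer.get_config()
clone = tf.keras.layers.Dense.from_config(config)
```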
Retrieves a layer based on either its name (unique) or index.
If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottomup).
name: String, name of layer. index: Integer, index of layer.
A layer instance.
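A quick get_layer() sketch; the layer names are illustrative:

```python
import tensorflow as tf

# Layers can be retrieved by unique name or by traversal index
# (index 0 is the input layer here).
inputs = tf.keras.Input(shape=(3,))
hidden = tf.keras.layers.Dense(4, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(2, name="out")(hidden)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

first_dense = model.get_layer(name="hidden")  # lookup by name
last_layer = model.get_layer(index=2)         # lookup by index
```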
Loads all layer weights, either from a TensorFlow or an HDF5 weight file.
If by_name is False weights are loaded based on the network's topology. This means the architecture should be the same as when the weights were saved. Note that layers that don't have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don't have weights.
If by_name is True, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.
Only topological loading (by_name=False) is supported when loading weights from the TensorFlow format. Note that topological loading differs slightly between TensorFlow and HDF5 formats for user-defined classes inheriting from tf.keras.Model: HDF5 loads based on a flattened list of weights, while the TensorFlow format loads based on the object-local names of attributes to which layers are assigned in the Model's constructor.
filepath: String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights). This can also be a path to a SavedModel saved from model.save.
by_name: Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in TensorFlow format.
skip_mismatch: Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weight (only valid when by_name=True).
options: Optional tf.train.CheckpointOptions object that specifies options for loading weights.
When loading a weight file in TensorFlow format, returns the same status object as tf.train.Checkpoint.restore. When graph building, restore ops are run automatically as soon as the network is built (on first call for user-defined classes inheriting from Model, immediately if it is already built).
When loading weights in HDF5 format, returns None.
ImportError: If h5py is not available and the weight file is in HDF5 format.
ValueError: If skip_mismatch is set to True when by_name is False.
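A save_weights()/load_weights() round trip in HDF5 format, sketched with an arbitrary tiny architecture (the `.weights.h5` filename and helper function are illustrative assumptions):

```python
import os
import tempfile
import numpy as np
import tensorflow as tf

def make_model():
    # The same architecture both times, so topological loading applies.
    return tf.keras.Sequential([tf.keras.Input(shape=(3,)),
                                tf.keras.layers.Dense(2, name="dense")])

model = make_model()
# An '.h5' suffix selects the HDF5 weight format.
path = os.path.join(tempfile.mkdtemp(), "m.weights.h5")
model.save_weights(path)

# Load into a freshly built model with identical topology.
restored = make_model()
restored.load_weights(path)
```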
Creates a function that executes one step of inference.
This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.
Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.
This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.
force: Whether to regenerate the predict function and skip the cached function if available.
Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.
Creates a function that executes one step of evaluation.
This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.
Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.
This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.
force: Whether to regenerate the test function and skip the cached function if available.
Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.
Creates a function that executes one step of training.
This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.
Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.
This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.
force: Whether to regenerate the train function and skip the cached function if available.
Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {'loss': 0.2, 'accuracy': 0.7}.
Returns the model's metrics added using the compile() and add_metric() APIs.
Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.
Examples:
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
... inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
... tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
Returns the model's display labels for all outputs.
Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.
Examples:
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
... inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
List of all non-trainable weights tracked by this layer.
Non-trainable weights are not updated during training. They are expected to be updated manually in call().
A list of non-trainable variables.
Generates output predictions for the input samples.
Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.
For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.
Also, note the fact that test loss is not affected by regularization layers like noise and dropout.
Note: See [this FAQ entry](https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().
A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
A tf.data dataset.
A generator or keras.utils.Sequence instance.
A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose: Verbosity mode, 0 or 1. steps: Total number of steps (batches of samples)
before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.
List of callbacks to apply during prediction. See [callbacks](/api_docs/python/tf/keras/callbacks).
max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.
use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes.
See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.
Numpy array(s) of predictions.
RuntimeError: If model.predict is wrapped in a tf.function. ValueError: In case of mismatch between the provided
input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
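A sketch contrasting batched predict() with a direct model call for a single small batch, as the notes above recommend (shapes and sizes are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(3,)),
                             tf.keras.layers.Dense(2)])
x = np.random.random((10, 3)).astype("float32")

# Batched prediction over many samples; returns a Numpy array.
preds = model.predict(x, batch_size=4, verbose=0)

# For a single small batch, calling the model directly is faster and
# returns a tensor rather than a Numpy array.
single = model(x[:1], training=False)
```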
Generates predictions for the input samples from a data generator.
Model.predict now supports generators, so there is no longer any need to use this endpoint.
Returns predictions for a single batch of samples.
x: Input data. It could be:
A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
Numpy array(s) of predictions.
RuntimeError: If model.predict_on_batch is wrapped in a tf.function.
The logic for one inference step.
This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.
This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.
data: A nested structure of `Tensor`s.
The result of one inference step, typically the output of calling the Model on data.
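A hedged sketch of overriding predict_step (the class name, layer sizes, and the softmax post-processing are illustrative assumptions, not part of the API):

```python
import numpy as np
import tensorflow as tf

# Override predict_step to post-process the raw model outputs at
# inference time -- here, converting logits to probabilities.
class SoftmaxOnPredict(tf.keras.Sequential):
    def predict_step(self, data):
        logits = super().predict_step(data)  # forward pass on one batch
        return tf.nn.softmax(logits)

model = SoftmaxOnPredict([tf.keras.Input(shape=(4,)),
                          tf.keras.layers.Dense(3)])
probs = model.predict(np.random.random((5, 4)).astype("float32"), verbose=0)
# Each row of `probs` now sums to 1.
```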
Resets the state of all the metrics in the model.
Examples:
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
Settable attribute indicating whether the model should run eagerly.
Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.
By default, we will attempt to compile your model to a static graph to deliver the best execution performance.
Boolean, whether the model should run eagerly.
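Eager execution can be requested at compile time; a minimal sketch (the model itself is an arbitrary illustration):

```python
import tensorflow as tf

# With run_eagerly=True, layer calls execute step by step as plain
# Python instead of inside a compiled graph -- slower, but debuggable
# with ordinary breakpoints and print statements.
model = tf.keras.Sequential([tf.keras.Input(shape=(2,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse", run_eagerly=True)
```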
Saves the model to TensorFlow SavedModel or a single HDF5 file.
Please see tf.keras.models.save_model or the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.
filepath: String, PathLike, path to SavedModel or H5 file to save the model.
overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
include_optimizer: If True, save the optimizer's state together. save_format: Either 'tf' or 'h5', indicating whether to save the
model to TensorFlow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
signatures: Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the signatures argument in tf.saved_model.save for details.
options: (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.
save_traces: (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.
Example:
```python
from keras.models import load_model

model.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
```
Returns the tf.TensorSpec of call inputs as a tuple (args, kwargs).
This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:
```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
  outputs = model(*args, **kwargs)
  # Apply postprocessing steps, or add additional outputs.
  ...
  return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this example, is
# an empty dict since functional models do not use keyword arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
  'serving_default': serve.get_concrete_function(*arg_specs, **kwarg_specs)
})
```
dynamic_batch: Whether to set the batch sizes of all the returned tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([...], batch_size=X), the batch size will always be preserved). Defaults to True.
If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.
Saves all layer weights.
Either saves in HDF5 or in TensorFlow format based on the save_format argument.
When saving in HDF5 format, the weight file has: layer_names (attribute), a list of strings (ordered names of model layers); for every layer, a group named layer.name; for every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer); and for every weight in the layer, a dataset storing the weight value, named after the weight tensor.
When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.
While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.
The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints](https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.
When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.
target location, or provide the user with a manual prompt.
‘.keras’ will default to HDF5 if save_format is None. Otherwise None defaults to ‘tf’.
options for saving weights.
format.
Deprecated, do NOT use!
Returns the updates from all layers that are stateful.
This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.
A list of update ops.
Prints a string summary of the network.
(e.g. set this to adapt the display to different terminal window sizes).
in each line. If not provided, defaults to [.33, .55, .67, 1.].
It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.
If not provided, defaults to False.
If not provided, defaults to False.
ValueError: if summary() is called before the model is built.
Test the model on a single batch of samples.
model has multiple inputs).
multiple inputs).
the model has named inputs.
array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or vice versa).
weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.
batch. If False, the metrics will be statefully accumulated across batches.
with each key being the name of the metric. If False, they are returned as a list.
Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
RuntimeError: If model.test_on_batch is wrapped in a tf.function.
The logic for one evaluation step.
This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.
This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.
data: A nested structure of `Tensor`s.
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned.
Returns a JSON string containing the network configuration.
To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).
to be passed to json.dumps().
A JSON string.
Returns a yaml string containing the network configuration.
Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.
To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).
custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.
to be passed to yaml.dump().
A YAML string.
RuntimeError: announces that the method poses a security risk
Runs a single gradient update on a single batch of data.
(in case the model has multiple inputs).
(in case the model has multiple inputs).
if the model has named inputs.
array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or vice versa).
weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.
weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an underrepresented class.
batch. If False, the metrics will be statefully accumulated across batches.
with each key being the name of the metric. If False, they are returned as a list.
Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
RuntimeError: If model.train_on_batch is wrapped in a tf.function.
The logic for one training step.
This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
data: A nested structure of `Tensor`s.
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
Version
Bases: packaging.version._BaseVersion
Input() is used to instantiate a Keras tensor.
A Keras tensor is a symbolic tensor-like object, which we augment with certain attributes that allow us to build a Keras model just by knowing the inputs and outputs of the model.
For instance, if a, b and c are Keras tensors, it becomes possible to do: model = Model(input=[a, b], output=c)
For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
batch_size: optional static batch size (integer).
name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
(float32, float64, int32…)
sparse. Only one of ‘ragged’ and ‘sparse’ can be True. Note that, if sparse is False, sparse tensors can still be passed into the input  they will be densified with a default value of 0.
If set, the layer will use the tf.TypeSpec of this tensor rather than creating a new placeholder tensor.
ragged. Only one of ‘ragged’ and ‘sparse’ can be True. In this case, values of ‘None’ in the ‘shape’ argument represent ragged dimensions. For more information about RaggedTensors, see [this guide](https://www.tensorflow.org/guide/ragged_tensors).
When provided, all other args except name must be None.
batch_input_shape.
A tensor.
Example:
```python
# this is a logistic regression in Keras
x = Input(shape=(32,))
y = Dense(16, activation='softmax')(x)
model = Model(x, y)
```
Note that even if eager execution is enabled, Input produces a symbolic tensor-like object (i.e. a placeholder). This symbolic tensor-like object can be used with lower-level TensorFlow ops that take tensors as inputs, as such:
```python
x = Input(shape=(32,))
y = tf.square(x)  # This op will be treated like a layer
model = Model(x, y)
```
(This behavior does not work for higher-order TensorFlow APIs such as control flow and being directly watched by a tf.GradientTape).
However, the resulting model will not track any variables that were used as inputs to TensorFlow ops. All variable usages must happen within Keras layers to make sure they will be tracked by the model’s weights.
The Keras Input can also create a placeholder from an arbitrary tf.TypeSpec, e.g:
```python
x = Input(type_spec=tf.RaggedTensorSpec(shape=[None, None],
                                        dtype=tf.float32,
                                        ragged_rank=1))
y = x.values
model = Model(x, y)
```
When passing an arbitrary tf.TypeSpec, it must represent the signature of an entire batch instead of just one example.
ValueError: If both sparse and ragged are provided.
ValueError: If both shape and (batch_input_shape or batch_shape) are provided.
ValueError: If shape, tensor and type_spec are None.
ValueError: If arguments besides type_spec are non-None while type_spec is passed.
ValueError: if any unrecognized parameters are provided.
Decorator replaces custom skip test markup in doctests.
Say a function has a docstring:
>>> something # skip if not HAVE_AMODULE
>>> something + else
>>> something # skip if HAVE_BMODULE
This decorator will evaluate the expression after skip if. If this evaluates to True, then the comment is replaced by # doctest: +SKIP. If False, then the comment is just removed. The expression is evaluated in the globals scope of func.
For example, if the module global HAVE_AMODULE is False, and module global HAVE_BMODULE is False, the returned function will have docstring:
>>> something # doctest: +SKIP
>>> something + else
>>> something
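The markup rewriting described above can be sketched in pure Python. This is an illustrative approximation, not DIPY/nibabel's exact implementation; the regex and the HAVE_AMODULE global are assumptions for the example:

```python
import re

HAVE_AMODULE = False  # stand-in module global for the example

def doctest_skip_parser(func):
    """Rewrite '# skip if EXPR' markup in func's docstring (sketch)."""
    lines = []
    for line in (func.__doc__ or "").splitlines():
        match = re.search(r"#\s*skip if\s+(.+)$", line)
        if match:
            expr = match.group(1).strip()
            code = line[: match.start()].rstrip()
            # evaluate the expression in the function's global scope
            if eval(expr, func.__globals__):
                line = code + "  # doctest: +SKIP"
            else:
                line = code
        lines.append(line)
    func.__doc__ = "\n".join(lines)
    return func

@doctest_skip_parser
def example():
    """
    >>> something  # skip if not HAVE_AMODULE
    >>> something_else  # skip if HAVE_AMODULE
    """
```

With HAVE_AMODULE set to False, the first doctest line gains a `# doctest: +SKIP` comment and the second has its skip markup removed, matching the behaviour described above.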
Get indices where the b-value is bval.
Array containing the b-values.
b-value to extract indices for.
The tolerated gap between the b-values to extract and the actual b-values.
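The tolerance-based selection described above amounts to a one-line NumPy filter. The sketch below is illustrative; the function name and the default tolerance of 20 are assumptions, not guaranteed to match DIPY's helper exactly:

```python
import numpy as np

def get_bval_indices(bvals, bval, tol=20):
    """Indices of bvals within tol of the requested b-value (sketch)."""
    bvals = np.asarray(bvals)
    return np.where(np.abs(bvals - bval) <= tol)[0]

bvals = np.array([0, 5, 995, 1000, 1005, 2000])
idx = get_bval_indices(bvals, 1000)  # selects the b=1000 shell
```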
Provide full paths to example or test datasets.
the filename/s of which dataset to return, one of:
‘small_64D’ small region of interest nifti,bvecs,bvals 64 directions
‘small_101D’ small region of interest nifti, bvecs, bvals 101 directions
‘aniso_vox’ volume with anisotropic voxel size as Nifti
‘fornix’ 300 tracks in Trackvis format (from Pittsburgh Brain Competition)
‘gqi_vectors’ the scanner wave vectors needed for a GQI acquisition of 101 directions, tested on a Siemens 3T Trio
‘small_25’ small ROI (10x8x2) DTI data (b value 2000, 25 directions)
‘test_piesno’ slice of N=8, K=14 diffusion data
‘reg_c’ small 2D image used for validating registration
‘reg_o’ small 2D image used for validating registration
‘cb_2’ two vectorized cingulum bundles
filenames for dataset
Examples
>>> import numpy as np
>>> from dipy.io.image import load_nifti
>>> from dipy.data import get_fnames
>>> fimg, fbvals, fbvecs = get_fnames('small_101D')
>>> bvals=np.loadtxt(fbvals)
>>> bvecs=np.loadtxt(fbvecs).T
>>> data, affine = load_nifti(fimg)
>>> data.shape == (6, 10, 10, 102)
True
>>> bvals.shape == (102,)
True
>>> bvecs.shape == (102, 3)
True
Provide triangulated spheres.
which sphere, one of:
* 'symmetric362'
* 'symmetric642'
* 'symmetric724'
* 'repulsion724'
* 'repulsion100'
* 'repulsion200'
Examples
>>> import numpy as np
>>> from dipy.data import get_sphere
>>> sphere = get_sphere('symmetric362')
>>> verts, faces = sphere.vertices, sphere.faces
>>> verts.shape == (362, 3)
True
>>> faces.shape == (720, 3)
True
>>> verts, faces = get_sphere('not a sphere name')
Traceback (most recent call last):
...
DataError: No sphere called "not a sphere name"
Return package-like thing and module setup for package name.
package name
Message to give when someone tries to use the returned package, but we could not import it, and have returned a TripWire object instead. Default message if None.
pkg : package or TripWire instance
If we can import the package, return it. Otherwise return an object raising an error when accessed.
True if import of the package was successful, False otherwise.
Callable, usually set as setup_module in the calling namespace, to allow skipping tests.
Examples
Typical use would be something like this at the top of a module using an optional package:
>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')
Of course in this case the package doesn’t exist, and so, in the module:
>>> have_pkg
False
and
>>> pkg.some_function()
Traceback (most recent call last):
...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError
If the module does exist, we get the module:
>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True
Or a submodule, if that's what we asked for:
>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True
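The doctest behaviour shown above can be approximated with a small stdlib-only sketch. DIPY's real implementation raises a dedicated TripWireError with a richer message, so treat the class below as an illustrative stand-in:

```python
import importlib

class TripWire:
    """Raises an error on any attribute access (illustrative stand-in)."""
    def __init__(self, msg):
        self.__dict__["_msg"] = msg

    def __getattr__(self, name):
        raise AttributeError(self._msg)

def optional_package(name, trip_msg=None):
    """Return (package-or-TripWire, have_package, setup_module callable)."""
    try:
        pkg = importlib.import_module(name)
    except ImportError:
        msg = trip_msg or (f"We need package {name} for these functions, "
                           f"but ``import {name}`` raised an ImportError")
        return TripWire(msg), False, lambda: None
    return pkg, True, lambda: None
```

As in the doctests, `optional_package('os')` returns the real module with a True flag, while a missing package yields a TripWire that errors only when actually used.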
Spherical function to spherical harmonics (SH).
Values of a function on the given sphere.
The points on which the sf is defined.
Maximum SH order in the SH fit. For sh_order, there will be (sh_order + 1) * (sh_order + 2) / 2 SH coefficients for a symmetric basis and (sh_order + 1) * (sh_order + 1) coefficients for a full SH basis.
None for the default DIPY basis, tournier07 for the Tournier 2007 [2] [3] basis, descoteaux07 for the Descoteaux 2007 [1] basis (None defaults to descoteaux07).
True for using a SH basis containing even and odd order SH functions. False for using a SH basis consisting only of even order SH functions.
True to use a legacy basis definition for backward compatibility with previous tournier07 and descoteaux07 implementations.
Lambda-regularization in the SH fit.
SH coefficients representing the input function.
References
[1] Descoteaux, M., Angelino, E., Fitzgibbons, S. and Deriche, R. Regularized, Fast, and Robust Analytical Q-ball Imaging. Magn. Reson. Med. 2007;58:497-510.
[2] Tournier, J.D., Calamante, F. and Connelly, A. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution. NeuroImage. 2007;35(4):1459-1472.
[3] Tournier, J-D., Smith, R., Raffelt, D., Tabbara, R., Dhollander, T., Pietsch, M., et al. MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation. NeuroImage. 2019 Nov 15;202:116137.
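The coefficient counts quoted above follow directly from counting basis functions, and can be checked with a short helper (the function names here are illustrative):

```python
def n_coeffs_symmetric(sh_order):
    # even-order-only (symmetric) real SH basis
    return (sh_order + 1) * (sh_order + 2) // 2

def n_coeffs_full(sh_order):
    # full basis containing both even and odd order terms
    return (sh_order + 1) ** 2
```

For sh_order=8 these give 45 and 81 coefficients respectively.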
Spherical harmonics (SH) to spherical function (SF).
SH coefficients representing a spherical function.
The points on which to sample the spherical function.
Maximum SH order in the SH fit. For sh_order, there will be (sh_order + 1) * (sh_order + 2) / 2 SH coefficients for a symmetric basis and (sh_order + 1) * (sh_order + 1) coefficients for a full SH basis.
None for the default DIPY basis, tournier07 for the Tournier 2007 [2] [3] basis, descoteaux07 for the Descoteaux 2007 [1] basis (None defaults to descoteaux07).
True to use a SH basis containing even and odd order SH functions. Else, use a SH basis consisting only of even order SH functions.
True to use a legacy basis definition for backward compatibility with previous tournier07 and descoteaux07 implementations.
Spherical function values on the sphere.
References
[1] Descoteaux, M., Angelino, E., Fitzgibbons, S. and Deriche, R. Regularized, Fast, and Robust Analytical Q-ball Imaging. Magn. Reson. Med. 2007;58:497-510.
[2] Tournier, J.D., Calamante, F. and Connelly, A. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution. NeuroImage. 2007;35(4):1459-1472.
[3] Tournier, J-D., Smith, R., Raffelt, D., Tabbara, R., Dhollander, T., Pietsch, M., et al. MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation. NeuroImage. 2019 Nov 15;202:116137.
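Since sh_to_sf is a linear synthesis (design matrix times coefficients) and sf_to_sh a least-squares fit, the round trip can be illustrated with a generic real SH basis built from SciPy's associated Legendre functions. This construction is illustrative only; DIPY's descoteaux07 and tournier07 bases use their own sign and ordering conventions:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def real_sh_matrix(sh_order, theta, phi):
    """Design matrix of real, even-order spherical harmonics (generic)."""
    cols = []
    x = np.cos(theta)
    for n in range(0, sh_order + 1, 2):          # even orders only
        for m in range(-n, n + 1):
            am = abs(m)
            norm = np.sqrt((2 * n + 1) / (4 * np.pi)
                           * factorial(n - am) / factorial(n + am))
            p = lpmv(am, n, x)                   # associated Legendre P_n^m
            if m < 0:
                cols.append(np.sqrt(2) * norm * p * np.sin(am * phi))
            elif m == 0:
                cols.append(norm * p)
            else:
                cols.append(np.sqrt(2) * norm * p * np.cos(am * phi))
    return np.stack(cols, axis=-1)

rng = np.random.default_rng(42)
theta = np.arccos(rng.uniform(-1, 1, 200))       # polar angles
phi = rng.uniform(0, 2 * np.pi, 200)             # azimuths

B = real_sh_matrix(4, theta, phi)                # 200 points x 15 coefficients
coef = rng.standard_normal(B.shape[1])

sf = B @ coef                                    # sh_to_sf: synthesis
coef_fit, *_ = np.linalg.lstsq(B, sf, rcond=None)  # sf_to_sh: LS fit
```

Because sf lies exactly in the column space of B, the least-squares fit recovers the original coefficients.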
Returns the degree (m) and order (n) of all the symmetric spherical harmonics of degree less than or equal to sh_order. The results, m_list and n_list, are kx1 arrays, where k depends on sh_order. They can be passed to real_sh_descoteaux_from_index() and real_sh_tournier_from_index().
even int > 0, max order to return
True for SH basis with even and odd order terms
degrees of even spherical harmonics
orders of even spherical harmonics
See also
shm.real_sh_descoteaux_from_index, shm.real_sh_tournier_from_index
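The (m, n) enumeration described above can be written out directly; for an even sh_order this yields the (sh_order + 1)(sh_order + 2)/2 index pairs of the symmetric basis. This is an illustrative reimplementation, not DIPY's vectorized one:

```python
import numpy as np

def sph_harm_ind_list(sh_order):
    """Degree (m) and order (n) index arrays for even SH up to sh_order."""
    if sh_order % 2 != 0:
        raise ValueError("sh_order must be an even integer >= 0")
    m_list, n_list = [], []
    for n in range(0, sh_order + 1, 2):   # even orders only
        for m in range(-n, n + 1):        # degrees -n .. n for each order
            m_list.append(m)
            n_list.append(n)
    return np.array(m_list), np.array(n_list)

m, n = sph_harm_ind_list(4)  # 15 index pairs for sh_order=4
```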
This function gives the unique rounded b-values of the data.
Array containing the b-values.
The order of magnitude by which b-values have to differ to be considered unique. B-values are also rounded to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag = log_{10}(max(bvals)) - 1\).
If True, the function also returns all individual rounded b-values. Default: False.
Array containing the rounded unique b-values.
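The rounding rule above can be sketched as follows. This is illustrative only; DIPY's actual implementation may differ in edge cases (e.g. handling of b=0 volumes):

```python
import numpy as np

def unique_bvals_magnitude(bvals, bmag=None):
    """Unique b-values after rounding to an order of magnitude (sketch)."""
    bvals = np.asarray(bvals, dtype=float)
    if bmag is None:
        # default: one order of magnitude below the maximal b-value
        bmag = int(np.log10(bvals.max())) - 1
    rounded = np.round(bvals / 10 ** bmag) * 10 ** bmag
    return np.unique(rounded)

# b-values scattered around the b=0, b=1000 and b=2000 shells
ubvals = unique_bvals_magnitude(np.array([0, 5, 995, 1000, 1005, 2000]))
```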
MultipleLayerPercepton
Bases: object
Methods
* Evaluate the model on test dataset.
* Train the model on train dataset.
* Predict the output from input samples.
* Get the summary of the model.
Multiple Layer Perceptron with Dropout.
Shape of data to be trained
List of number of nodes in hidden layers
Activation function used in hidden layer
Dropout ratio
Number of nodes in output layer
Activation function used in output layer
Select optimizer. Default adam.
Select loss function for measuring accuracy. Default sparse_categorical_crossentropy.
Evaluate the model on test dataset.
The evaluate method will evaluate the model on a test dataset.
x_test is the test dataset.
y_test contains the labels of the test dataset.
Verbosity mode: setting verbose to 0, 1 or 2 controls how progress is displayed for each epoch.
Returns a list with the loss value and accuracy value on the test dataset.
Train the model on train dataset.
The fit method will train the model for a fixed number of epochs (iterations) on a dataset.
x_train is the training dataset.
y_train contains the labels of the training dataset.
The number of epochs.
A History object. Its History.history attribute is a record of training loss values and metric values at successive epochs.
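The architecture described above (dense hidden layers with an activation and dropout, followed by a classification output) corresponds to a forward pass like the NumPy sketch below. The real class assembles the equivalent Keras model; the layer sizes here are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mlp_forward(x, weights, dropout=0.5, training=False):
    """Forward pass: hidden dense + ReLU + (inverted) dropout, softmax out."""
    *hidden, out = weights
    for W, b in hidden:
        x = relu(x @ W + b)
        if training:
            mask = rng.random(x.shape) >= dropout
            x = x * mask / (1.0 - dropout)   # inverted dropout scaling
    W, b = out
    return softmax(x @ W + b)

# input_shape=(4,), num_hidden=[8, 8], num_out=3 (arbitrary example sizes)
sizes = [4, 8, 8, 3]
weights = [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
           for a, b in zip(sizes[:-1], sizes[1:])]
probs = mlp_forward(rng.standard_normal((5, 4)), weights)
```

Each output row is a probability distribution over the classes, matching the softmax output layer described above.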
SingleLayerPerceptron
Bases: object
Methods
* Evaluate the model on test dataset.
* Train the model on train dataset.
* Predict the output from input samples.
* Get the summary of the model.
Single Layer Perceptron with Dropout.
Shape of data to be trained
Number of nodes in hidden layer
Activation function used in hidden layer
Dropout ratio
Number of nodes in output layer
Activation function used in output layer
Select optimizer. Default adam.
Select loss function for measuring accuracy. Default sparse_categorical_crossentropy.
Evaluate the model on test dataset.
The evaluate method will evaluate the model on a test dataset.
x_test is the test dataset.
y_test contains the labels of the test dataset.
Verbosity mode: setting verbose to 0, 1 or 2 controls how progress is displayed for each epoch.
Returns a list with the loss value and accuracy value on the test dataset.
Train the model on train dataset.
The fit method will train the model for a fixed number of epochs (iterations) on a dataset.
x_train is the training dataset.
y_train contains the labels of the training dataset.
The number of epochs.
A History object. Its History.history attribute is a record of training loss values and metric values at successive epochs.