Cells¶
Introduction¶
Cell objects are the atomic elements that compose a deep neural network.
They are the nodes of the computation graph. n2d2.cells.NeuralNetworkCell objects do not depend on a DeepNet, which allows dynamic management of the computation.
Cells are organized with the following logic:
- n2d2.cells.NeuralNetworkCell: atomic cell of a neural network;
- n2d2.cells.Block: stores a collection of n2d2.cells.NeuralNetworkCell; the storage order does not determine the computation graph;
- n2d2.cells.DeepNetCell: this cell allows you to use an N2D2.DeepNet; it can be used for ONNX and INI import or to run optimized learning;
- n2d2.cells.Iterable: similar to n2d2.cells.Block, but the storage order determines the computation graph;
- n2d2.cells.Sequence: a vertical structure to create a neural network;
- n2d2.cells.Layer: a horizontal structure to create a neural network.
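As a quick illustration, these containers can be combined directly; a minimal sketch (the Fc and Softmax constructors are documented below, the tensor shape is only illustrative):
import n2d2

fc1 = n2d2.cells.Fc(32, 16)
fc2 = n2d2.cells.Fc(16, 10)
model = n2d2.cells.Sequence([fc1, fc2, n2d2.cells.Softmax(with_loss=True)])

x = n2d2.Tensor([1, 32])   # illustrative input dimensions
y = model(x)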
Block¶
- class n2d2.cells.Block(cells: List[Cell], name: str | None = None)¶
The Block class is the most general type of cell container, from which all other containers are derived. It saves its cells internally with a dictionary. The Block class has no implicit structure for propagation, the __call__ method therefore has to be defined explicitly.
- __init__(cells: List[Cell], name: str | None = None)¶
- get_cell(item: str)¶
Returns the low level view of a cell.
- get_cells()¶
Returns dictionary with all cells inside the current Block.
- is_integral()¶
Check if the parameters of every cell have an integral precision.
- set_back_propagate(value)¶
Set the back_propagate boolean of trainable cells.
- Parameters:
value (bool) – If True, trainable cells will enable back propagation.
- set_solver(solver: Solver)¶
Set a solver for every optimizable parameter in this Block. Optimizable parameters are weights, biases and quantizers.
- Parameters:
solver (n2d2.solver.Solver, optional) – Solver to use for every optimizable parameter, default=n2d2.solver.SGD
- to_deepnet_cell(provider: DataProvider, target: Target = None)¶
Convert a n2d2.cells.Block to a n2d2.cells.DeepNetCell.
- Parameters:
provider (n2d2.provider.DataProvider) – Data provider used by the neural network
target (n2d2.target.Target) – Target object
- Returns:
The corresponding n2d2.cells.DeepNetCell
- Return type:
n2d2.cells.DeepNetCell
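Because Block stores its cells without any implicit propagation order, a subclass has to define __call__ itself. A minimal sketch of that pattern (the cell choice and attribute names are illustrative assumptions):
import n2d2

class TwoStep(n2d2.cells.Block):
    def __init__(self):
        fc = n2d2.cells.Fc(16, 8)
        softmax = n2d2.cells.Softmax(with_loss=True)
        super().__init__([fc, softmax], name="two_step")
        self._fc, self._softmax = fc, softmax

    def __call__(self, x):
        # Propagation order is written out explicitly.
        return self._softmax(self._fc(x))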
Sequence¶
- class n2d2.cells.Sequence(cells: List[Cell], name: str | None = None)¶
This implementation of the Iterable class describes a sequential (vertical) ordering of cells.
- __init__(cells: List[Cell], name: str | None = None)¶
- append(cell: Cell)¶
Append a cell at the end of the sequence.
- get_cell(item: str)¶
Returns the low level view of a cell.
- get_cells()¶
Returns dictionary with all cells inside the current Block.
- is_integral()¶
Check if the parameters of every cell have an integral precision.
- set_back_propagate(value)¶
Set the back_propagate boolean of trainable cells.
- Parameters:
value (bool) – If True, trainable cells will enable back propagation.
- set_solver(solver: Solver)¶
Set a solver for every optimizable parameter in this Block. Optimizable parameters are weights, biases and quantizers.
- Parameters:
solver (n2d2.solver.Solver, optional) – Solver to use for every optimizable parameter, default=n2d2.solver.SGD
- to_deepnet_cell(provider: DataProvider, target: Target = None)¶
Convert a n2d2.cells.Block to a n2d2.cells.DeepNetCell.
- Parameters:
provider (n2d2.provider.DataProvider) – Data provider used by the neural network
target (n2d2.target.Target) – Target object
- Returns:
The corresponding n2d2.cells.DeepNetCell
- Return type:
n2d2.cells.DeepNetCell
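A short usage sketch (the cell names are illustrative assumptions):
import n2d2

seq = n2d2.cells.Sequence([n2d2.cells.Fc(8, 4, name="fc1")])
seq.append(n2d2.cells.Fc(4, 2, name="fc2"))   # extends the propagation order
fc2 = seq.get_cell("fc2")                     # low level view of a stored cell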
Layer¶
- class n2d2.cells.Layer(cells: list, mapping: list = None, name: str = None)¶
This implementation of the Iterable class describes a layered (horizontal) ordering of cells. An optional mapping can be given to define connectivity with the preceding input cell.
- __init__(cells: list, mapping: list = None, name: str = None)¶
- append(cell: Cell)¶
Append a cell at the end of the sequence.
- get_cell(item: str)¶
Returns the low level view of a cell.
- get_cells()¶
Returns dictionary with all cells inside the current Block.
- is_integral()¶
Check if the parameters of every cell have an integral precision.
- set_back_propagate(value)¶
Set the back_propagate boolean of trainable cells.
- Parameters:
value (bool) – If True, trainable cells will enable back propagation.
- set_solver(solver: Solver)¶
Set a solver for every optimizable parameter in this Block. Optimizable parameters are weights, biases and quantizers.
- Parameters:
solver (n2d2.solver.Solver, optional) – Solver to use for every optimizable parameter, default=n2d2.solver.SGD
- to_deepnet_cell(provider: DataProvider, target: Target = None)¶
Convert a n2d2.cells.Block to a n2d2.cells.DeepNetCell.
- Parameters:
provider (n2d2.provider.DataProvider) – Data provider used by the neural network
target (n2d2.target.Target) – Target object
- Returns:
The corresponding n2d2.cells.DeepNetCell
- Return type:
n2d2.cells.DeepNetCell
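A minimal construction sketch (the two parallel branches are an illustrative assumption; how the Layer output is consumed follows the container conventions above):
import n2d2

branch_a = n2d2.cells.Fc(16, 8, name="branch_a")
branch_b = n2d2.cells.Fc(16, 8, name="branch_b")
parallel = n2d2.cells.Layer([branch_a, branch_b], name="parallel_fcs")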
DeepNetCell¶
The n2d2.cells.DeepNetCell constructor requires an N2D2.DeepNet. In practice, you will not use the constructor directly.
There are three methods to generate a n2d2.cells.DeepNetCell: n2d2.cells.DeepNetCell.load_from_ONNX(), n2d2.cells.DeepNetCell.load_from_INI() and n2d2.cells.Sequence.to_deepnet_cell().
The DeepNetCell can be used to train the neural network efficiently thanks to n2d2.cells.DeepNetCell.fit().
- class n2d2.cells.DeepNetCell(N2D2_object)¶
n2d2 wrapper for an N2D2 deepnet object. Allows chaining an N2D2 deepnet (for example loaded from an ONNX or INI file) into the dynamic computation graph of the n2d2 API. During each use of the __call__ method, the N2D2 deepnet is converted to an n2d2 representation and the N2D2 deepnet is concatenated to the deepnet of the incoming tensor object. The object is manipulated with the bound methods of the N2D2 DeepNet object, and its computation graph is also exclusively defined by the DeepNet object that is passed to it during construction. It therefore only inherits from Block, and not from the Iterable class and its children, which are reserved for the Python API's implicit way of constructing graphs.
- __init__(N2D2_object)¶
As a user, you should not use this method directly. If you want to create a DeepNetCell object, please use n2d2.cells.DeepNetCell.load_from_ONNX(), n2d2.cells.DeepNetCell.load_from_INI() or n2d2.cells.Sequence.to_deepnet_cell().
- Parameters:
N2D2_object (N2D2.DeepNet) – The N2D2 DeepNet object
- export_free_parameters(dir_name: str, verbose: bool = True)¶
Export deepnet parameters.
- fit(learn_epoch: int, log_epoch: int = 1000, avg_window: int = 10000, bench: bool = False, ban_multi_device: bool = False, valid_metric: str = 'Sensitivity', stop_valid: int = 0, log_kernels: bool = False)¶
Train the n2d2.cells.DeepNetCell object.
- Parameters:
learn_epoch (int) – The number of training epochs
log_epoch (int, optional) – The number of epochs between logs, default=1000
avg_window (int, optional) – The average window to compute success rate during learning, default=10000
bench (bool, optional) – If True, activate benchmarking of the learning speed, default=False
valid_metric (str, optional) – Validation metric to use, can be Sensitivity, Specificity, Precision, NegativePredictiveValue, MissRate, FallOut, FalseDiscoveryRate, FalseOmissionRate, Accuracy, F1Score, Informedness or Markedness, default="Sensitivity"
stop_valid (int, optional) – The maximum number of successive validations with a lower score, default=0
log_kernels (bool, optional) – If True, log kernels after learning, default=False
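For instance, on a DeepNetCell obtained as in the example further below, the documented keyword arguments can be combined like this (a sketch, not the only valid call):
# model is a n2d2.cells.DeepNetCell (e.g. obtained with load_from_ONNX)
model.fit(learn_epoch=20,
          log_epoch=500,
          valid_metric="Accuracy",
          stop_valid=5,       # stop after 5 successive non-improving validations
          log_kernels=True)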
- get_cell(item: str)¶
Returns the low level view of a cell.
- get_cells()¶
Returns dictionary with all cells inside the current Block.
- get_deepnet()¶
Get the n2d2.deepnet.DeepNet used for computation.
- get_embedded_deepnet()¶
Get the n2d2.deepnet.DeepNet used to define this cell.
- get_input_cells()¶
Returns the cells located at the entry of the network.
- get_output_cells()¶
Returns the cells located at the end of the network.
- Returns:
Return a list of cells located at the end of the network
- Return type:
list
- import_free_parameters(dir_name: str, ignore_not_exists: bool = False) None ¶
Import deepnet parameters.
- is_integral()¶
Check if the parameters of every cell have an integral precision.
- learn()¶
Set the network to learn mode.
- classmethod load_from_INI(path: str)¶
Load a deepnet from an INI file.
- Parameters:
path (str) – Path to the ini file.
- classmethod load_from_ONNX(provider: DataProvider, model_path: str, ini_file: str = None, ignore_cells: list = None, ignore_input_size: bool = False)¶
Load a deepnet from an ONNX file given a provider object.
- Parameters:
provider (n2d2.provider.DataProvider) – Provider object to base the deepnet upon
model_path (str) – Path to the onnx model.
ini_file (str, optional) – Path to an optional .ini file with additional ONNX import instructions
ignore_cells (list, optional) – List of cell names to ignore, default=None
ignore_input_size (bool, optional) – If True, the input size specified in the ONNX model is ignored and the n2d2.provider.Provider size is used, default=False
- log_confusion_matrix(file_name: str, partition: str = 'Test') None ¶
Log the confusion matrix of the previous inference done on a data partition.
- Parameters:
file_name (str) – File name of the confusion matrix, it will be saved in <TargetName>.Target/ConfusionMatrix_<file_name>_score.png.
partition (str, optional) – The partition can be Learn, Validation, Test or Unpartitioned, default="Test"
- log_stats(path: str) None ¶
Export statistics of the graph.
- Parameters:
path (str) – Path to the directory where you want to save the data.
- log_success(path: str, partition: str = 'Test') None ¶
Save a graph of the loss and the validation score as a function of the step number.
- Parameters:
path (str) – Path to the directory where you want to save the data.
partition (str, optional) – The partition can be Learn, Validation, Test or Unpartitioned, default="Test"
- remove(name: str, reconnect: bool = True) None ¶
Remove a cell from the encapsulated deepnet.
- Parameters:
name (str) – Name of the cell that shall be removed.
reconnect (bool, optional) – If True, reconnects the parents with the child of the removed cell, default=True
- run_test(log: int = 1000, report: int = 100, nb_test: int = -1, test_index: int = -1, test_id: int = -1, qat_sat: bool = False, log_kernels: bool = False, wt_round_mode: str = 'NONE', b_round_mode: str = 'NONE', c_round_mode: str = 'NONE', act_scaling_mode: str = 'FLOAT_MULT', log_JSON: bool = False, log_outputs: int = 0)¶
Test the n2d2.cells.DeepNetCell object. This method will also log the confusion matrix and the success curve.
- Parameters:
log (int, optional) – The number of steps between logs, default=1000
report (int, optional) – Number of steps between reportings, default=100
nb_test (int, optional) – Number of stimuli to use for the test, default=-1
test_index (int, optional) – Test a single specific stimulus index in the Test set, default=-1
test_id (int, optional) – Test a single specific stimulus ID (takes precedence over test_index), default=-1
qat_sat (bool, optional) – Fuse a QAT trained model with the SAT method, default=False
log_kernels (bool, optional) – Log kernels after learning, default=False
wt_round_mode (str, optional) – Weights clipping mode on export, can be NONE or RINTF, default="NONE"
b_round_mode (str, optional) – Biases clipping mode on export, can be NONE or RINTF, default="NONE"
c_round_mode (str, optional) – Clip clipping mode on export, can be NONE or RINTF, default="NONE"
act_scaling_mode (str, optional) – Activation scaling mode on export, can be NONE, FLOAT_MULT, FIXED_MULT16, SINGLE_SHIFT or DOUBLE_SHIFT, default="FLOAT_MULT"
log_JSON (bool, optional) – If True, log JSON annotations, default=False
log_outputs (int, optional) – Log layer outputs for the n-th stimulus (0 = no log), default=0
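A sketch of a typical call, reusing only the keyword arguments documented above:
# model is a n2d2.cells.DeepNetCell with a provider attached
model.run_test(nb_test=1000,
               act_scaling_mode="FLOAT_MULT",
               log_JSON=True)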
- set_back_propagate(value)¶
Set the back_propagate boolean of trainable cells.
- Parameters:
value (bool) – If True, trainable cells will enable back propagation.
- set_provider(provider: Provider) None ¶
Set a data provider for the DeepNetCell.
- Parameters:
provider (Provider) – Data provider to use.
- set_solver(solver: Solver)¶
Set a solver for every optimizable parameter in this Block. Optimizable parameters are weights, biases and quantizers.
- Parameters:
solver (n2d2.solver.Solver, optional) – Solver to use for every optimizable parameter, default=n2d2.solver.SGD
- summary(verbose: bool = False)¶
This method synthesizes the current deepnet's layers in a table.
- Parameters:
verbose (bool) – display implicit layers like BN
- test()¶
Set the network to test mode.
- to_deepnet_cell(provider: DataProvider, target: Target = None)¶
Convert a n2d2.cells.Block to a n2d2.cells.DeepNetCell.
- Parameters:
provider (n2d2.provider.DataProvider) – Data provider used by the neural network
target (n2d2.target.Target) – Target object
- Returns:
The corresponding n2d2.cells.DeepNetCell
- Return type:
n2d2.cells.DeepNetCell
- update()¶
Update learnable parameters.
Example¶
You can create a DeepNetCell with n2d2.cells.DeepNetCell.load_from_ONNX():
database = n2d2.database.MNIST(data_path=DATA_PATH, validation=0.1)
provider = n2d2.provider.DataProvider(database, [28, 28, 1], batch_size=BATCH_SIZE)
model = n2d2.cells.DeepNetCell.load_from_ONNX(provider, ONNX_PATH)
model.fit(learn_epoch=NB_EPOCHS)
model.run_test()
Using the n2d2.cells.DeepNetCell.fit() method reduces the learning time, as it parallelizes the loading of the data batches and the propagation.
If you want to use the dynamic computation graph provided by the API, you can use the n2d2.cells.DeepNetCell as a simple cell.
database = n2d2.database.MNIST(data_path=DATA_PATH, validation=0.1)
provider = n2d2.provider.DataProvider(database, [28, 28, 1], batch_size=BATCH_SIZE)
model = n2d2.cells.DeepNetCell.load_from_ONNX(provider, ONNX_PATH)
sequence = n2d2.cells.Sequence([model, n2d2.cells.Softmax(with_loss=True)])
input_tensor = n2d2.Tensor(DIMS)
output_tensor = sequence(input_tensor)
Cells¶
NeuralNetworkCell¶
- class n2d2.cells.NeuralNetworkCell(**config_parameters)¶
Abstract class for layer implementation.
- N2D2()¶
Return the N2D2 object.
- abstract __init__(**config_parameters)¶
- Parameters:
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
Conv¶
- class n2d2.cells.Conv(nb_inputs: int, nb_outputs: int, kernel_dims: list, nb_input_cells: int = 1, **config_parameters)¶
Convolutional layer.
- N2D2()¶
Return the N2D2 object.
- __init__(nb_inputs: int, nb_outputs: int, kernel_dims: list, nb_input_cells: int = 1, **config_parameters)¶
- Parameters:
nb_inputs (int) – Number of input channels.
nb_outputs (int) – Number of output channels.
kernel_dims (list) – Kernel dimensions with the format [Height, Width].
nb_input_cells (int, optional) – Number of cells that are an input of this cell, default=1
sub_sample_dims (list, optional) – Dimensions of the subsampling factor of the output feature maps, default=[1, 1]
stride_dims (list, optional) – Dimensions of the stride of the kernel, default=[1, 1]
padding_dims (list, optional) – Dimensions of the padding, default=[0, 0]
dilation_dims (list, optional) – Dimensions of the dilation of the kernels, default=[1, 1]
mapping (Tensor) – Mapping
filler (n2d2.filler.Filler, optional) – Set the weights and bias filler, this parameter overrides the parameters weights_filler and bias_filler, default=n2d2.filler.NormalFiller
weights_filler (n2d2.filler.Filler, optional) – Weights initial values filler, default=n2d2.filler.Normal
bias_filler (n2d2.filler.Filler, optional) – Biases initial values filler, default=n2d2.filler.Normal
solver (n2d2.solver.Solver, optional) – Set the weights and bias solver, this parameter overrides the parameters weights_solver and bias_solver, default=n2d2.solver.SGD
weights_solver (n2d2.solver.Solver, optional) – Solver for weights
bias_solver (n2d2.solver.Solver, optional) – Solver for biases
no_bias (bool, optional) – If True, don't use bias, default=False
weights_export_flip (bool, optional) – If True, import/export flipped kernels, default=False
back_propagate (bool, optional) – If True, enable backpropagation, default=True
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
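For example, a convolution with a stride of 2 and explicit padding, using only the parameters documented above (a sketch):
conv = n2d2.cells.Conv(3, 16, [3, 3],
                       stride_dims=[2, 2],
                       padding_dims=[1, 1],
                       no_bias=True,
                       solver=n2d2.solver.SGD(learning_rate=0.05))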
- get_bias(output_index)¶
- Parameters:
output_index (int) –
- Returns:
list of biases
- Return type:
list
- get_biases()¶
- Returns:
list of biases
- Return type:
list
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- get_weight(output_index, channel_index)¶
- Parameters:
output_index (int) –
channel_index (int) –
- get_weights()¶
- Returns:
list of weights
- Return type:
list
- has_quantizer()¶
- Returns:
True if the cell uses a quantizer
- Return type:
bool
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- refill_bias()¶
Re-fill the bias using the associated bias filler
- refill_weights()¶
Re-fill the weights using the associated weights filler
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
- set_bias(output_index, value)¶
- Parameters:
output_index (int) –
value (
Tensor
) –
- set_bias_filler(filler, refill=False)¶
Set a filler for the bias.
- Parameters:
filler (
n2d2.filler.Filler
) – Filler object
- set_filler(filler, refill=False)¶
Set a filler for the weights and bias.
- Parameters:
filler (
n2d2.filler.Filler
) – Filler object
- set_solver(solver)¶
Set the weights and bias solver with the same solver.
- Parameters:
solver (
n2d2.solver.Solver
) – Solver object
- set_solver_parameter(key, value)¶
Set the parameter key with the value value for the weight and bias solver attributes.
- Parameters:
key (str) – Parameter name
value (Any) – The value of the parameter
- set_weight(output_index, channel_index, value)¶
- Parameters:
output_index –
channel_index –
value (
Tensor
) –
- set_weights_filler(filler, refill=False)¶
Set a filler for the weights.
- Parameters:
filler (
n2d2.filler.Filler
) – Filler object
Deconv¶
- class n2d2.cells.Deconv(nb_inputs, nb_outputs, kernel_dims, nb_input_cells=1, **config_parameters)¶
Deconvolution layer.
- N2D2()¶
Return the N2D2 object.
- __init__(nb_inputs, nb_outputs, kernel_dims, nb_input_cells=1, **config_parameters)¶
- Parameters:
nb_inputs (int) – Number of inputs of the cell.
nb_outputs (int) – Number of output channels.
kernel_dims (list) – Kernel dimensions.
nb_input_cells (int, optional) – Number of cells that are an input of this cell, default=1
stride_dims (list, optional) – Dimensions of the stride of the kernel, default=[1, 1]
padding_dims (list, optional) – Dimensions of the padding, default=[0, 0]
dilation_dims (list, optional) – Dimensions of the dilation of the kernels, default=[1, 1]
filler (n2d2.filler.Filler, optional) – Set the weights and bias filler, this parameter overrides the parameters weights_filler and bias_filler, default=n2d2.filler.NormalFiller
weights_filler (n2d2.filler.Filler, optional) – Weights initial values filler, default=n2d2.filler.NormalFiller
bias_filler (n2d2.filler.Filler, optional) – Biases initial values filler, default=n2d2.filler.NormalFiller
solver (n2d2.solver.Solver, optional) – Set the weights and bias solver, this parameter overrides the parameters weights_solver and bias_solver, default=n2d2.solver.SGD
weights_solver (n2d2.solver.Solver, optional) – Solver for weights, default=n2d2.solver.SGD
bias_solver (n2d2.solver.Solver, optional) – Solver for biases, default=n2d2.solver.SGD
no_bias (bool, optional) – If True, don't use bias, default=False
back_propagate (bool, optional) – If True, enable backpropagation, default=True
weights_export_flip (bool, optional) – If True, import/export flipped kernels, default=False
mapping (Tensor, optional) – Mapping
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_bias(output_index)¶
- Parameters:
output_index (int) –
- get_biases()¶
- Returns:
list of biases
- Return type:
list
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- get_weight(output_index, channel_index)¶
- Parameters:
output_index (int) –
channel_index (int) –
- get_weights()¶
- Returns:
list of weights
- Return type:
list
- has_quantizer()¶
- Returns:
True if the cell uses a quantizer
- Return type:
bool
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- refill_bias()¶
Re-fill the bias using the associated bias filler
- refill_weights()¶
Re-fill the weights using the associated weights filler
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
- set_bias(output_index, value)¶
- Parameters:
output_index (int) –
value (
Tensor
) –
- set_filler(filler, refill=False)¶
Set a filler for the weights and bias.
- Parameters:
filler (
n2d2.filler.Filler
) – Filler object
- set_solver(solver)¶
Set the weights and bias solver with the same solver.
- Parameters:
solver (
n2d2.solver.Solver
) – Solver object
- set_weight(output_index, channel_index, value)¶
- Parameters:
output_index –
channel_index –
value (
Tensor
) –
Fc¶
- class n2d2.cells.Fc(nb_inputs, nb_outputs, nb_input_cells=1, **config_parameters)¶
Fully connected layer.
- N2D2()¶
Return the N2D2 object.
- __init__(nb_inputs, nb_outputs, nb_input_cells=1, **config_parameters)¶
- Parameters:
nb_inputs (int) – Number of inputs of the cell.
nb_outputs (int) – Number of outputs of the cell.
nb_input_cells (int, optional) – Number of cells that are an input of this cell, default=1
solver (Solver, optional) – Set the weights and bias solver, this parameter overrides the parameters weights_solver and bias_solver, default=SGD
weights_solver (Solver, optional) – Solver for weights, default=SGD
bias_solver (Solver, optional) – Solver for biases, default=Normal
filler (Filler, optional) – Set the weights and bias filler, this parameter overrides the parameters weights_filler and bias_filler, default=NormalFiller
weights_filler (Filler, optional) – Weights initial values filler, default=Normal
bias_filler (Filler, optional) – Biases initial values filler, default=Normal
mapping (Tensor, optional) – Mapping, default=None
no_bias (bool, optional) – If True, don't use bias, default=False
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
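For instance, mirroring the solver usage example later in this section (a sketch):
fc = n2d2.cells.Fc(128, 10,
                   solver=n2d2.solver.SGD(learning_rate=0.1),
                   name="classifier")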
- get_bias(output_index: int)¶
- Parameters:
output_index (int) –
- get_biases()¶
- Returns:
list of biases
- Return type:
list
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- get_weight(output_index, channel_index)¶
- Parameters:
output_index (int) –
channel_index (int) –
- get_weights()¶
- Returns:
list of weights
- Return type:
list
- has_bias()¶
- Returns:
True if the cell uses a bias
- Return type:
bool
- has_quantizer()¶
- Returns:
True if the cell uses a quantizer
- Return type:
bool
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- refill_bias()¶
Re-fill the bias using the associated bias filler
- refill_weights()¶
Re-fill the weights using the associated weights filler
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
- set_bias(output_index: int, value: Tensor)¶
- Parameters:
output_index (int) –
value (
n2d2.Tensor
) –
- set_bias_filler(filler, refill=False)¶
Set a filler for the bias.
- Parameters:
filler (
Filler
) – Filler object
- set_filler(filler, refill=False)¶
Set a filler for the weights and bias.
- Parameters:
filler (
Filler
) – Filler object
- set_solver(solver: Solver)¶
Set the weights and bias solver with the same solver.
- Parameters:
solver (
Solver
) – Solver object
- set_solver_parameter(key, value)¶
Set the parameter key with the value value for the weight and bias solver attributes.
- Parameters:
key (str) – Parameter name
value (Any) – The value of the parameter
- set_weight(output_index, channel_index, value)¶
- Parameters:
output_index –
channel_index –
value (
Tensor
) –
- set_weights_filler(filler, refill=False)¶
Set a filler for the weights.
- Parameters:
filler (
Filler
) – Filler object
Dropout¶
- class n2d2.cells.Dropout(**config_parameters)¶
Dropout layer [SHK+12].
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
dropout (float, optional) – The probability with which the value from the input would be dropped, default=0.5
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
ElemWise¶
- class n2d2.cells.ElemWise(**config_parameters)¶
Element-wise operation layer.
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
operation (str, optional) – Type of operation (Sum, AbsSum, EuclideanSum, Prod, or Max), default="Sum"
mode (str, optional) – (PerLayer, PerInput, PerChannel), default="PerLayer"
weights (list, optional) – Weights for the Sum, AbsSum, and EuclideanSum operations, in the same order as the inputs, default=[1.0]
shifts (list, optional) – Shifts for the Sum and EuclideanSum operations, in the same order as the inputs, default=[0.0]
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
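A construction sketch using only the documented keyword arguments (how the multiple inputs are supplied at call time follows the usual cell-call conventions):
add = n2d2.cells.ElemWise(operation="Sum", weights=[1.0, 1.0], name="residual_add")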
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
Padding¶
- class n2d2.cells.Padding(top_pad, bot_pad, left_pad, right_pad, **config_parameters)¶
The Padding layer allows inserting asymmetric padding along each layer axis.
- N2D2()¶
Return the N2D2 object.
- __init__(top_pad, bot_pad, left_pad, right_pad, **config_parameters)¶
- Parameters:
top_pad (int) – Size of the top padding (positive or negative)
bot_pad (int) – Size of the bottom padding (positive or negative)
left_pad (int) – Size of the left padding (positive or negative)
right_pad (int) – Size of the right padding (positive or negative)
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
Softmax¶
- class n2d2.cells.Softmax(**config_parameters)¶
Softmax layer.
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
with_loss (bool, optional) – If True, the Softmax is followed by a multinomial logistic layer, default=False
group_size (int, optional) – The Softmax is applied on groups of outputs. The group size must be a divisor of the nb_outputs parameter, default=0
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
BatchNorm2d¶
- class n2d2.cells.BatchNorm2d(nb_inputs, nb_input_cells=1, **config_parameters)¶
Batch Normalization layer [IS15].
- N2D2()¶
Return the N2D2 object.
- __init__(nb_inputs, nb_input_cells=1, **config_parameters)¶
- Parameters:
nb_inputs (int) – Number of input neurons
nb_input_cells (int, optional) – Number of cells that are an input of this cell, default=1
solver (n2d2.solver.Solver, optional) – Set the scale and bias solver, this parameter overrides the parameters scale_solver and bias_solver, default=n2d2.solver.SGD
scale_solver (n2d2.solver.Solver, optional) – Scale solver parameters, default=n2d2.solver.SGD
bias_solver (n2d2.solver.Solver, optional) – Bias solver parameters, default=n2d2.solver.SGD
epsilon (float, optional) – Epsilon value used in the batch normalization formula. If 0.0, automatically choose the minimum possible value, default=0.0
moving_average_momentum (float, optional) – Moving average rate, used for the moving average of batch-wise means and standard deviations during training. The closer to 1.0, the more it will depend on the last batch.
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
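For example, normalizing 16 input channels with a dedicated solver (a sketch using only documented parameters):
bn = n2d2.cells.BatchNorm2d(16, solver=n2d2.solver.SGD(learning_rate=0.01))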
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
- set_solver(solver)¶
Set the scale and bias solver with the same solver.
- Parameters:
solver (
n2d2.solver.Solver
) – Solver object
- set_solver_parameter(key, value)¶
Set the parameter key with the value value for the scale and bias solver attributes.
- Parameters:
key (str) – Parameter name
value (Any) – The value of the parameter
Pool¶
- class n2d2.cells.Pool(pool_dims, **config_parameters)¶
Pooling layer.
- N2D2()¶
Return the N2D2 object.
- __init__(pool_dims, **config_parameters)¶
- Parameters:
pool_dims (list) – Pooling area dimensions
pooling (str, optional) – Type of pooling (Max or Average), default="Max"
stride_dims (list, optional) – Dimensions of the stride of the kernel, default=[1, 1]
padding_dims (list, optional) – Dimensions of the padding, default=[0, 0]
mapping (Tensor, optional) – Mapping
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
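For example, a 2x2 max pooling with a stride of 2 (a sketch):
pool = n2d2.cells.Pool([2, 2], pooling="Max", stride_dims=[2, 2])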
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
Activation¶
- class n2d2.cells.Activation(activation, **config_parameters)¶
Activation layer which can apply any activation to a stimuli.
- N2D2()¶
Return the N2D2 object.
- __init__(activation, **config_parameters)¶
- Parameters:
activation (n2d2.activation.ActivationFunction) – Activation function to apply
name (str, optional) – Cell name, default=CellType_id
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
Reshape¶
- class n2d2.cells.Reshape(dims, **config_parameters)¶
Reshape layer.
- N2D2()¶
Return the N2D2 object.
- __init__(dims, **config_parameters)¶
- Parameters:
dims (list) – Dimensions of the new shape of the layer
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
Resize¶
- class n2d2.cells.Resize(outputs_width, outputs_height, resize_mode, **config_parameters)¶
Resize layer.
- N2D2()¶
Return the N2D2 object.
- __init__(outputs_width, outputs_height, resize_mode, **config_parameters)¶
- Parameters:
outputs_width (int) – Width of the output
outputs_height (int) – Height of the output
resize_mode (str) – Resize interpolation mode. Can be Bilinear or BilinearTF (TensorFlow implementation)
align_corners (bool, optional) – Corner alignment mode if BilinearTF is used as interpolation mode, default=True
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
Scaling¶
- class n2d2.cells.Scaling(scaling, **config_parameters)¶
Scaling layer.
- N2D2()¶
Return the N2D2 object.
- __init__(scaling, **config_parameters)¶
- Parameters:
scaling (n2d2.scaling.Scaling) – Scaling object
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
Transformation¶
- class n2d2.cells.Transformation(perm, **config_parameters)¶
- N2D2()¶
Return the N2D2 object.
- __init__(perm, **config_parameters)¶
- Parameters:
transformation (n2d2.transform.Transformation) – Transformation to apply
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
Transpose¶
- class n2d2.cells.Transpose(perm: list, **config_parameters)¶
Transpose layer.
- N2D2()¶
Return the N2D2 object.
- __init__(perm: list, **config_parameters)¶
- Parameters:
perm (list) – Permutation
name (str, optional) – Cell name, default=CellType_id
activation (n2d2.activation.ActivationFunction, optional) – Activation function, default=None
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_diffinputs()¶
- Returns:
The gradient given to the cell.
- Return type:
Tensor
- get_diffoutputs(index: int = 0) Tensor ¶
- Parameters:
index (int, optional) – Index of the input of the cell to consider, default=0
- Returns:
The gradient computed by the cell.
- Return type:
Tensor
- get_inputs()¶
- Returns:
The input tensor of the cell.
- Return type:
Tensor
- get_outputs()¶
- Returns:
The output tensor of the cell.
- Return type:
Tensor
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- static is_exportable_to(export_name: str) bool ¶
- Parameters:
export_name (str) – Name of the export
- Returns:
True
if the cell is exportable to theexport_name
export.- Return type:
bool
- set_activation(activation: ActivationFunction)¶
Set an activation function to the N2D2 object and update config parameter of the n2d2 object.
- Parameters:
activation (
n2d2.activation.ActivationFunction
) – The activation function to set.
Saving parameters¶
You can save the parameters (weights, biases …) of your network with the method export_free_parameters. To load those parameters you can use the method import_free_parameters.
With n2d2 you can choose whether you want to save the parameters of a part of your network or of the whole graph.
(Table: Object | Save parameters | Load parameters.)
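For a DeepNetCell, the documented export_free_parameters / import_free_parameters methods can be used like this (the directory name is illustrative):
# model is a n2d2.cells.DeepNetCell
model.export_free_parameters("./weights")   # save weights and biases
model.import_free_parameters("./weights")   # reload them later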
Configuration section¶
If you want to add the same parameters to multiple cells, you can use a n2d2.ConfigSection.
- class n2d2.ConfigSection(dict=None, /, **kwargs)¶
n2d2.ConfigSection objects are used like dictionaries and are passed to class constructors as kwargs.
Usage example¶
conv_config = n2d2.ConfigSection(no_bias=True)
n2d2.cells.Conv(3, 32, [4, 4], **conv_config)
This creates a n2d2.cells.Conv with the parameter no_bias=True.
This functionality allows you to write more concise code when multiple cells share the same parameters.
Warning
If you want to pass an object as a parameter to multiple n2d2 objects, you need to create a wrapping function that creates the object. Example:
def conv_def():
return n2d2.ConfigSection(weights_solver=n2d2.solver.SGD())
n2d2.cells.Conv(3, 32, [4, 4], **conv_def())
Mapping¶
You can change the mapping of the input for some cells (check whether they have a mapping parameter available).
You can create a mapping manually with a n2d2.Tensor object:
mapping=n2d2.Tensor([15, 24], datatype="bool")
mapping.set_values([
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1],
[0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1]])
Or use the Mapping object:
mapping=n2d2.mapping.Mapping(nb_channels_per_group=2).create_mapping(15, 24)
This creates the following mapping:
1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
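Either mapping can then be passed to a cell that accepts a mapping parameter, for example a Conv with matching channel counts (a sketch):
conv = n2d2.cells.Conv(15, 24, [3, 3], mapping=mapping)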
Solver¶
You can associate a n2d2.solver.Solver object to a cell at construction time and at runtime. This solver object will optimize the parameters of your cell using a specific algorithm.
Usage example¶
In this short example we will see how to associate a solver to a model and to a cell object at construction and at runtime.
Set solver at construction time¶
Let's create a couple of n2d2.cells.Fc cells and add them to a n2d2.cells.Sequence. At construction time we will set the solver of one of them to a n2d2.solver.SGD with a learning_rate=0.1.
import n2d2
cell1 = n2d2.cells.Fc(2,2, solver=n2d2.solver.SGD(learning_rate=0.1))
cell2 = n2d2.cells.Fc(2,2)
model = n2d2.cells.Sequence([cell1, cell2])
print(model)
Output:
'Sequence_0' Sequence(
(0): 'Fc_0' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.1, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), bias_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.1, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
(1): 'Fc_1' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.01, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), bias_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.01, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
)
Set a solver for a specific parameter¶
We can set a new solver for the bias of the second fully connected cell. This solver will be different from the one used for the weights.
Note
Here we access the cell via its instantiated object, but we could have used its name: model["Fc_1"].bias_solver = n2d2.solver.Adam().
cell2.bias_solver=n2d2.solver.Adam()
print(model)
Output:
'Sequence_0' Sequence(
(0): 'Fc_0' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.1, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), bias_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.1, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
(1): 'Fc_1' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=SGD(clamping=, decay=0.0, iteration_size=1, learning_rate=0.01, learning_rate_decay=0.1, learning_rate_policy=None, learning_rate_step_size=1, max_iterations=0, min_decay=0.0, momentum=0.0, polyak_momentum=True, power=0.0, warm_up_duration=0, warm_up_lr_frac=0.25), bias_solver=Adam(beta1=0.9, beta2=0.999, clamping=, epsilon=1e-08, learning_rate=0.001), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
)
Set a solver for a model¶
We can set a solver for the whole n2d2.cells.Sequence with the method n2d2.cells.Sequence.set_solver().
model.set_solver(n2d2.solver.Adam(learning_rate=0.1))
print(model)
Output :
'Sequence_0' Sequence(
(0): 'Fc_0' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=Adam(beta1=0.9, beta2=0.999, clamping=, epsilon=1e-08, learning_rate=0.1), bias_solver=Adam(beta1=0.9, beta2=0.999, clamping=, epsilon=1e-08, learning_rate=0.1), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
(1): 'Fc_1' Fc(Frame<float>)(nb_inputs=2, nb_outputs=2 | back_propagate=True, drop_connect=1.0, no_bias=False, normalize=False, outputs_remap=, weights_export_format=OC, activation=None, weights_solver=Adam(beta1=0.9, beta2=0.999, clamping=, epsilon=1e-08, learning_rate=0.1), bias_solver=Adam(beta1=0.9, beta2=0.999, clamping=, epsilon=1e-08, learning_rate=0.1), weights_filler=Normal(mean=0.0, std_dev=0.05), bias_filler=Normal(mean=0.0, std_dev=0.05), quantizer=None)
)
SGD¶
- class n2d2.solver.SGD(**config_parameters)¶
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
learning_rate (float, optional) – Learning rate, default=0.01
momentum (float, optional) – Momentum, default=0.0
decay (float, optional) – Decay, default=0.0
min_decay (float, optional) – Min decay, default=0.0
learning_rate_policy (str, optional) – Learning rate decay policy. Can be any of None, StepDecay, ExponentialDecay, InvTDecay, default='None'
learning_rate_step_size (int, optional) – Learning rate step size (in number of stimuli), default=1
learning_rate_decay (float, optional) – Learning rate decay, default=0.1
clamping (str, optional) – Weights clamping, format: min:max, or :max, or min:, or empty, default=""
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
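As a quick illustration of the parameters above, here is a minimal sketch that builds an SGD solver with momentum and a StepDecay schedule and applies it to the model from the earlier example via set_solver() (the hyper-parameter values are arbitrary):
# SGD with momentum and a StepDecay learning rate schedule (illustrative values)
sgd = n2d2.solver.SGD(
    learning_rate=0.1,
    momentum=0.9,
    learning_rate_policy="StepDecay",
    learning_rate_step_size=10000,  # in number of stimuli
    learning_rate_decay=0.1,
)
model.set_solver(sgd)  # applied to every optimizable parameter of the Sequence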
Adam¶
- class n2d2.solver.Adam(**config_parameters)¶
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
learning_rate (float, optional) – Learning rate, default=0.01
beta1 (float, optional) – Exponential decay rate of the moving average of the first moment, default=0.9
beta2 (float, optional) – Exponential decay rate of the moving average of the second moment, default=0.999
epsilon (float, optional) – Epsilon, default=1.0e-8
clamping (str, optional) – Weights clamping, format: min:max, or :max, or min:, or empty, default=""
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
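Similarly, a minimal Adam sketch, here assigned to the bias of a single cell as in the example at the top of this page (the values are illustrative):
# Adam solver with explicit (default-like) hyper-parameters
adam = n2d2.solver.Adam(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8)
cell2.bias_solver = adam  # cell2 is the second Fc cell created earlier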
Filler¶
At construction time, you can associate an n2d2.filler.Filler object with a cell. This object fills the weights and biases using a specific method.
Usage example¶
In this short example, we will see how to associate a filler with a cell object, how to get the weights and biases, and how to set a new filler and refill the parameters.
Setting a filler at construction time¶
We begin by importing n2d2 and creating an n2d2.cells.Fc object. We will associate an n2d2.filler.Constant filler with it.
Note
If you want to set a filler only for the weights (or biases), you can use the parameter weights_filler (or bias_filler) instead.
import n2d2
cell = n2d2.cells.Fc(2,2, filler=n2d2.filler.Constant(value=1.0))
If you print the weights, you will see that they are all set to one.
print("--- Weights ---")
for channel in cell.get_weights():
for value in channel:
print(value)
Output :
--- Weights ---
n2d2.Tensor([
1
], device=cpu, datatype=f)
n2d2.Tensor([
1
], device=cpu, datatype=f)
n2d2.Tensor([
1
], device=cpu, datatype=f)
n2d2.Tensor([
1
], device=cpu, datatype=f)
The same applies to the biases:
print("--- Biases ---")
for channel in cell.get_biases():
print(channel)
Output :
--- Biases ---
n2d2.Tensor([
1
], device=cpu, datatype=f)
n2d2.Tensor([
1
], device=cpu, datatype=f)
Changing the filler of an instantiated object¶
You can set a new filler for the biases by changing the bias_filler attribute (or weights_filler for the weights only, or filler for both).
However, changing the filler does not change the current parameter values; you need to call the method n2d2.cells.Fc.refill_bias() (see also n2d2.cells.Fc.refill_weights()).
Note
You can also use the methods n2d2.cells.Fc.set_filler(), n2d2.cells.Fc.set_weights_filler() and n2d2.cells.Fc.set_biases_filler(), which have a refill option.
cell.bias_filler=n2d2.filler.Normal()
cell.refill_bias()
You can then observe the new biases :
print("--- New Biases ---")
for channel in cell.get_biases():
print(channel)
Output :
--- New Biases ---
n2d2.Tensor([
1.32238
], device=cpu, datatype=f)
n2d2.Tensor([
-0.0233932
], device=cpu, datatype=f)
He¶
- class n2d2.filler.He(**config_parameters)¶
Fill with a normal distribution with normalized variance, taking into account the rectifier nonlinearity [HZRS15]. This filler is sometimes referred to as the MSRA filler or Kaiming initialization.
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
variance_norm (str, optional) – Normalization, can be FanIn, Average or FanOut, default='FanIn'
scaling (float, optional) – Scaling factor, default=1.0
mean_norm (float, optional) –
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
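For instance, a He/Kaiming initialization can be attached to a cell at construction time, exactly like the Constant filler in the usage example above (the layer sizes are arbitrary):
# Fill both weights and biases with a He (Kaiming) filler
he_filler = n2d2.filler.He(variance_norm="FanIn", scaling=1.0)
cell = n2d2.cells.Fc(128, 64, filler=he_filler)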
Normal¶
- class n2d2.filler.Normal(**config_parameters)¶
Fill with a normal distribution.
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
mean (float, optional) – Mean value of the distribution, default=0.0
std_dev (float, optional) – Standard deviation of the distribution, default=1.0
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
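The mean and standard deviation can also be set explicitly, for example when refilling the biases as in the usage example above (the values are illustrative):
cell.bias_filler = n2d2.filler.Normal(mean=0.0, std_dev=0.05)
cell.refill_bias()  # re-initializes the biases with the new filler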
Constant¶
- class n2d2.filler.Constant(**config_parameters)¶
Fill with a constant value.
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
value (float, optional) – Value for the filling, default=0.0
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
Xavier¶
- class n2d2.filler.Xavier(**config_parameters)¶
Fill with a uniform distribution with normalized variance [GB10].
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
variance_norm (str, optional) – Normalization, can be FanIn, Average or FanOut, default='FanIn'
distribution (str, optional) – Distribution, can be Uniform or Normal, default='Uniform'
scaling (float, optional) – Scaling factor, default=1.0
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
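As with the other fillers, a Xavier filler can be configured and passed to a cell at construction time (a short sketch with arbitrary sizes and values):
xavier = n2d2.filler.Xavier(variance_norm="FanIn", distribution="Uniform", scaling=1.0)
cell = n2d2.cells.Fc(64, 10, filler=xavier)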
Activations¶
You can associate an activation function with some cells.
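The activation is typically passed at construction time through the cell's activation parameter, as in this minimal sketch (the layer sizes are arbitrary):
import n2d2
fc = n2d2.cells.Fc(32, 16, activation=n2d2.activation.Rectifier())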
- class n2d2.activation.ActivationFunction(**config_parameters)¶
- N2D2()¶
Return the N2D2 object.
- abstract __init__(**config_parameters)¶
- Parameters:
quantizer (n2d2.quantizer.ActivationQuantizer, optional) – Quantizer
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
Linear¶
- class n2d2.activation.Linear(**config_parameters)¶
Linear activation function.
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
quantizer (n2d2.quantizer.ActivationQuantizer, optional) – Quantizer
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
Rectifier¶
- class n2d2.activation.Rectifier(**config_parameters)¶
Rectifier or ReLU activation function.
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
leak_slope (float, optional) – Leak slope for negative inputs, default=0.0
clipping (float, optional) – Clipping value for positive outputs, default=0.0
quantizer (n2d2.quantizer.ActivationQuantizer, optional) – Quantizer
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
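For example, a leaky ReLU with clipped positive outputs can be built as follows (the slope and clipping values are illustrative):
leaky_relu = n2d2.activation.Rectifier(leak_slope=0.1, clipping=6.0)
cell = n2d2.cells.Fc(32, 16, activation=leaky_relu)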
Tanh¶
- class n2d2.activation.Tanh(**config_parameters)¶
Tanh activation function.
Computes \(y = \tanh(\alpha \cdot x)\).
- N2D2()¶
Return the N2D2 object.
- __init__(**config_parameters)¶
- Parameters:
alpha (float, optional) – \(\alpha\) parameter, default=1.0
quantizer (n2d2.quantizer.ActivationQuantizer, optional) – Quantizer
datatype (str, optional) – Datatype used by the object, can only be float at the moment, default=n2d2.global_variables.default_datatype
model (str, optional) – Specify the kind of object to run, can be Frame or Frame_CUDA, default=n2d2.global_variables.default_model
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
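A short sketch with an explicit \(\alpha\) (the value is arbitrary):
tanh = n2d2.activation.Tanh(alpha=2.0)
cell = n2d2.cells.Fc(32, 16, activation=tanh)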
Target¶
Placed after the last cell of the network, this object computes the loss.
To understand what the Target does, please refer to this part of the documentation : Target INI.
- class n2d2.target.Score(provider, **config_parameters)¶
- N2D2()¶
Return the N2D2 object.
- __init__(provider, **config_parameters)¶
- Parameters:
provider (n2d2.provider.Provider) – Provider containing the input and output data.
name (str, optional) – Target name, default=Target_id
target_value (float, optional) – Target value for the target output neuron(s) (for classification), default=1.0
default_value (float, optional) – Default value for the non-target output neuron(s) (for classification), default=0.0
top_n (int, optional) – The top-N estimated targets per output neuron to save, default=1
labels_mapping (str, optional) – Path to the file containing the labels to target mapping, default=""
create_missing_labels (bool, optional) – If True, labels present in the labels mapping file but not present in the database are created (with 0 associated stimuli), default=False
- clear_score()¶
Clear the cached scores.
- clear_success()¶
Clear the cached success.
- get_average_score(metric)¶
- Parameters:
metric (str) – Can be any of: Sensitivity, Specificity, Precision, NegativePredictiveValue, MissRate, FallOut, FalseDiscoveryRate, FalseOmissionRate, Accuracy, F1Score, Informedness, Markedness, IU.
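For example, once an inference pass has been run on a partition, the average accuracy can be retrieved like this (a sketch; target is assumed to be the n2d2.target.Score instance of the usage example below):
accuracy = target.get_average_score(metric="Accuracy")
print(accuracy)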
- get_average_top_n_success(window=0)¶
This only works if TopN > 1, otherwise it returns 0!
- get_loss()¶
Return full loss vector of all batches
- get_parameter(key)¶
- Parameters:
key (str) – Parameter name
- log_confusion_matrix(file_name)¶
Log the confusion matrix of the previous inference done on a data partition selected by the provider (see n2d2.provider.get_partition()).
- Parameters:
file_name (str) – File name of the confusion matrix, it will be saved in <self.name>.Target/ConfusionMatrix_<file_name>_score.png.
- log_stats(path)¶
Export statistics of the graph.
- Parameters:
path (str) – Path to the directory where you want to save the data.
- log_success(path)¶
Save a graph of the loss and the validation score as a function of the step number.
- loss()¶
Return loss of last batch
Usage example¶
How to use a Target to train your model :
# Propagation & BackPropagation example
output = model(stimuli)
loss = target(output)
loss.back_propagate()
loss.update()
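For completeness, a hedged sketch of how the target used above might be created; the database and n2d2.provider.DataProvider setup are assumed to exist already and depend on your data pipeline:
# provider: an n2d2.provider.DataProvider already configured for your dataset
target = n2d2.target.Score(provider)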
Log performance analysis of your training :
### After validation ###
# save computational stats of the network
target.log_stats("name")
# save a confusion matrix
target.log_confusion_matrix("name")
# save a graph of the loss and the validation score as a function of the number of steps
target.log_success("name")