Tensor¶
Introduction¶
n2d2.Tensor
is a wrapper around the Tensor
object available in N2D2 (see Tensor).
The class n2d2.Tensor
contains a reference to the element that produced it and can be seen as an edge of the computation graph.
Tensor¶
- class n2d2.Tensor(dims: list | tuple, value: Any = None, cuda: bool = False, datatype: str = 'float', cell: Any = None, dim_format: str = 'Numpy')¶
- N2D2()¶
- Returns:
The N2D2 tensor object
- Return type:
- __init__(dims: list | tuple, value: Any = None, cuda: bool = False, datatype: str = 'float', cell: Any = None, dim_format: str = 'Numpy')¶
- Parameters:
dims (list) – Dimensions of the n2d2.Tensor object (the convention used depends on the dim_format argument; by default it is the same as Numpy).
value (must be coherent with datatype) – A value to fill the n2d2.Tensor object.
datatype (str, optional) – Type of the data stored in the tensor, default=”float”
cell (n2d2.cells.NeuralNetworkCell, optional) – A reference to the object that created this tensor, default=None
dim_format (str, optional) – Format used when declaring the dimensions of the tensor. The N2D2 convention is the reverse of the Numpy one (e.g. a [2, 3] numpy array is equivalent to a [3, 2] N2D2 Tensor), default=”Numpy”
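The dim_format convention can be illustrated with plain NumPy (this sketch uses only numpy, not n2d2): the N2D2 dimension order is simply the reverse of the NumPy shape.

```python
import numpy as np

# N2D2 stores dimensions in the reverse order of NumPy:
# a NumPy array of shape (2, 3) corresponds to a [3, 2] N2D2 tensor.
np_array = np.zeros((2, 3))
n2d2_dims = list(reversed(np_array.shape))
print(n2d2_dims)  # [3, 2]
```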
- back_propagate() → None¶
Compute the backpropagation on the deepnet.
- copy()¶
Copy in memory the Tensor object.
- cpu()¶
Convert the tensor to a CPU tensor.
- cuda()¶
Convert the tensor to a CUDA tensor.
- data_type()¶
Return the data type of the object stored by the tensor.
- detach()¶
Detach the cells from the tensor, thereby removing all information about the computation graph/deepnet object.
- dims()¶
Return the dimensions using the N2D2 convention.
- draw_associated_graph(path: str) → None¶
Plot the graph in a figure located at path.
- Parameters:
path (str) – Path where to save the plotted graph.
- dtoh()¶
Synchronize Device to Host. CUDA tensors are stored and computed on the GPU (Device); you cannot read the GPU directly. A copy of the tensor exists on the CPU (Host).
- classmethod from_N2D2(N2D2_Tensor)¶
Convert an N2D2 tensor into a Tensor.
- Parameters:
N2D2_Tensor (N2D2.BaseTensor or N2D2.CudaBaseTensor) – An N2D2 Tensor to convert to an n2d2 Tensor.
- Returns:
Converted tensor
- Return type:
- classmethod from_numpy(np_array)¶
Convert a numpy array into a tensor.
- Parameters:
np_array (numpy.array) – A numpy array to convert to a tensor.
- Returns:
Converted tensor
- Return type:
- get_deepnet()¶
Method called by the cells; if the tensor is not part of a graph, it will be linked to an n2d2.provider.Provider object.
- Returns:
The associated deepnet
- Return type:
n2d2.deepnet.DeepNet
- htod()¶
Synchronize Host to Device. CUDA tensors are stored and computed on the GPU (Device); you cannot read the GPU directly. A copy of the tensor exists on the CPU (Host).
- nb_dims()¶
Return the number of dimensions.
- reshape(new_dims: list)¶
Reshape the Tensor to the specified dims (defined by the Numpy convention).
- Parameters:
new_dims (list) – New dimensions
- resize(new_dims: list)¶
Resize the Tensor to the specified dims (defined by the Numpy convention).
- Parameters:
new_dims (list) – New dimensions
- set_values(values)¶
Fill the tensor with a list of values.
tensor = n2d2.Tensor([1, 1, 2, 2])
tensor.set_values([[[[1, 2], [3, 4]]]])
- Parameters:
values (list) – A nested list that represents the tensor.
- shape()¶
Return the dimensions using the Python convention.
- to_numpy(copy: bool = False)¶
Create a numpy array equivalent to the tensor.
- Parameters:
copy (bool, optional) – If False, memory is shared between the n2d2.Tensor and the numpy.array; otherwise the data are copied in memory, default=False
- update() → None¶
Update weights and biases of the cells.
Manipulating tensors¶
For setting and getting values, we will use the following tensor as an example:
tensor = n2d2.Tensor([2, 3])
0 0 0
0 0 0
You can set and get values using:
Coordinates¶
tensor[1,0] = 1 # Using coordinates
value = tensor[1,0]
If you print the tensor you will see:
0 0 0
1 0 0
Index¶
You can use an index to get or set elements of a tensor. The index corresponds to the flattened representation of your tensor.
tensor[0] = 2
value = tensor[0]
If you print the tensor you will see:
2 0 0
0 0 0
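This index-based access mirrors flat indexing on a row-major numpy array of the same shape; the following numpy-only sketch (not part of the n2d2 API) shows the equivalent operation:

```python
import numpy as np

# Setting element 0 of the flattened (row-major) view
# touches the first element of the first row.
a = np.zeros((2, 3))
a.flat[0] = 2
print(a)
# [[2. 0. 0.]
#  [0. 0. 0.]]
```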
Slice¶
Note
Slices are supported only for assignment!
tensor[1:3] = 3
If you print the tensor you will see:
0 3 3
0 0 0
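Slice assignment likewise acts on the flattened representation; here is a numpy-only sketch of the same operation (again, not part of the n2d2 API):

```python
import numpy as np

# Assigning to the flat slice [1:3] sets the 2nd and 3rd elements
# of the row-major flattened view.
a = np.zeros((2, 3))
a.flat[1:3] = 3
print(a)
# [[0. 3. 3.]
#  [0. 0. 0.]]
```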
Set values method¶
If you want to set multiple values easily, you can use the method n2d2.Tensor.set_values():
tensor.set_values([[1,2,3], [4,5,6]])
If you print the tensor you will see:
1 2 3
4 5 6
Numpy¶
To Numpy¶
You can create a numpy.array
from an n2d2.Tensor
with the class method n2d2.Tensor.to_numpy():
tensor = n2d2.Tensor([2, 3])
np_array = tensor.to_numpy()
This will create the following tensor:
0 0 0
0 0 0
By default the numpy.array does not make a memory copy: it shares its memory with the n2d2.Tensor, so you can manipulate the tensor using the numpy library.
np_array[0] = 1
print(tensor)
1 1 1
0 0 0
Note
If you want to create a memory copy, you should set the parameter copy=True.
np_array = tensor.to_numpy(copy=True)
From Numpy¶
You can create an n2d2.Tensor
from a numpy.array
with the class method n2d2.Tensor.from_numpy():
np_array = numpy.array([[1,2,3], [4,5,6]])
tensor = n2d2.Tensor.from_numpy(np_array)
This will create the following tensor:
1 2 3
4 5 6
Note
You cannot create a n2d2.Tensor
from a numpy.array
without a memory copy, because a Tensor requires a contiguous memory space, which is not guaranteed for a numpy array.
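The contiguity constraint can be observed with numpy alone: a strided view of an array is not contiguous, and obtaining a contiguous buffer forces a copy (a numpy-only illustration, not part of the n2d2 API):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
view = a[:, ::2]  # strided view (every other column): not contiguous
print(view.flags['C_CONTIGUOUS'])  # False
contig = np.ascontiguousarray(view)  # forces a memory copy
print(contig.flags['C_CONTIGUOUS'])  # True
```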
CUDA Tensor¶
You can store your tensor on the CPU or on the GPU (using CUDA
). By default, n2d2 creates a CPU tensor.
If you want to create a CUDA
Tensor you can do so by setting the parameter cuda
to True in the constructor:
tensor = n2d2.Tensor([2,3], cuda=True)
You can switch from CPU to GPU at any time:
tensor.cpu() # Converting to a CPU tensor
tensor.cuda() # Converting to a CUDA tensor
When working with a CUDA
tensor, you have to understand that it is stored in two different places:
the host and the device. The device is the GPU. The host corresponds to your interface with the tensor that exists on the GPU. You cannot access the device directly, since the GPU does not have input/output functions.
This is why you have two methods to synchronize these two versions (n2d2.Tensor.htod()
and n2d2.Tensor.dtoh()
).
Synchronizing the device and the host can add significant overhead; it is recommended to compute everything on the device and to synchronize the host at the end.
Synchronization example¶
Let’s consider the following CUDA
Tensor:
t = n2d2.Tensor([2, 2], cuda=True)
0 0
0 0
We set the following values :
t.set_values([[1, 2], [3, 4]])
1 2
3 4
Then we synchronize the device with the host. This means that we send the values to the GPU.
t.htod()
1 2
3 4
As you can see, nothing changes when printing the tensor; we have updated the GPU with the new values. Now let's change the values stored in the tensor:
t.set_values([[2, 3], [4, 5]])
2 3
4 5
When printing the tensor we see the new values we just set. Now let's synchronize the host with the device!
t.dtoh()
1 2
3 4
As you can see when printing the tensor, we are back to the old values: the device copy, which never received the second set of values, has overwritten the host copy.