A few words about PyTorch Datasets
This short preamble will briefly go through the basic notions of Dataset offered natively by PyTorch. A solid grasp of these notions is needed to understand:
How PyTorch data loading works in general
How AvalancheDatasets differs from PyTorch Datasets
In PyTorch, a Dataset is a class exposing two methods:
__len__(), which returns the number of instances in the dataset (as an int).
__getitem__(idx), which returns the data point at index idx.
In other words, a Dataset instance is just an object for which, similarly to a list, one can simply (as shown in the example below):
Obtain its length using the Python len(dataset) function.
Obtain a single data point using the x, y = dataset[idx] syntax.
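As a minimal sketch, here is a toy in-memory dataset (the SquaresDataset name is made up purely for illustration) implementing both methods:

```python
import torch
from torch.utils.data import Dataset


class SquaresDataset(Dataset):
    """A toy dataset returning (x, x**2) pairs."""

    def __init__(self, n_instances):
        self.data = torch.arange(n_instances)

    def __len__(self):
        # The number of instances in the dataset
        return len(self.data)

    def __getitem__(self, idx):
        # The data point at index idx
        x = self.data[idx]
        return x, x ** 2


dataset = SquaresDataset(10)
print(len(dataset))  # 10
x, y = dataset[3]
print(x.item(), y.item())  # 3 9
```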
The content of the dataset can either be loaded in memory when the dataset is instantiated (as the torchvision MNIST dataset does) or, for big datasets like ImageNet, kept on disk, with the dataset keeping the list of files in an internal field. In the latter case, data is loaded from storage on the fly when __getitem__(idx) is called. The way these things are managed is specific to each dataset implementation.
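A rough sketch of the lazy-loading approach could look like this (the FolderImageDataset class and the folder layout are hypothetical):

```python
import os

from PIL import Image
from torch.utils.data import Dataset


class FolderImageDataset(Dataset):
    """Keeps only file paths in memory; images are read on demand."""

    def __init__(self, root_dir):
        # Only the list of files is kept in an internal field
        self.paths = [
            os.path.join(root_dir, fname)
            for fname in sorted(os.listdir(root_dir))
        ]

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # The image is loaded from storage only when requested
        return Image.open(self.paths[idx])
```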
The PyTorch library offers 4 Dataset implementations (a short example follows the list):
Dataset: an interface defining the __len__ and __getitem__ methods.
TensorDataset: instantiated by passing X and Y tensors. Each row of the X and Y tensors is interpreted as a data point. The __getitem__(idx) method will simply return the idx-th row of the X and Y tensors.
ConcatDataset: instantiated by passing a list of datasets. The resulting dataset is a concatenation of those datasets.
Subset: instantiated by passing a dataset and a list of indices. The resulting dataset will only contain the data points described by that list of indices.
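The snippet below shows the three concrete implementations in action (a minimal sketch using random tensors):

```python
import torch
from torch.utils.data import ConcatDataset, Subset, TensorDataset

x = torch.rand(8, 3)
y = torch.randint(0, 2, (8,))

tensor_dataset = TensorDataset(x, y)        # one data point per row
subset = Subset(tensor_dataset, [0, 2, 4])  # keeps only indices 0, 2, 4
concat = ConcatDataset([tensor_dataset, subset])

print(len(tensor_dataset), len(subset), len(concat))  # 8 3 11
```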
As explained in the mini How-Tos, Avalanche offers a customized version of each of these 4 datasets.
Most datasets from the torchvision library (as well as datasets found "in the wild") allow for a transformation function to be passed to the dataset constructor. Support for transformations is not mandatory for a dataset, but it is quite common. The transformation is used to process the X value of a data point before returning it. This is used to normalize values, apply augmentations, and so on.
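For instance, one could pass a normalization transform to the torchvision MNIST dataset (the 'mnist_data' download path is arbitrary):

```python
from torchvision import transforms
from torchvision.datasets import MNIST

# The transform processes each X value (a PIL image) before it is returned
mnist = MNIST(
    'mnist_data', train=True, download=True,
    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,)),
    ]),
)

x, y = mnist[0]  # x is already a normalized tensor
```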
As explained in the mini How-Tos, the AvalancheDataset class implements a very rich and powerful set of functionalities for managing transformations.
A variation of the standard Dataset exists in PyTorch: the IterableDataset. When using an IterableDataset, one can load the data points sequentially only (using a tape-like approach). The dataset[idx] syntax and the len(dataset) function are not allowed. Avalanche does NOT support IterableDatasets. You shouldn't worry about this because, realistically, you will rarely encounter such datasets.
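For completeness, here is a minimal sketch of what an IterableDataset looks like (purely illustrative):

```python
from torch.utils.data import IterableDataset


class StreamDataset(IterableDataset):
    """Data points can only be consumed sequentially."""

    def __iter__(self):
        for i in range(10):
            yield i, i % 2


stream = StreamDataset()
for x, y in stream:  # OK: sequential, tape-like access
    print(x, y)
# stream[0] and len(stream) are NOT supported
```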
The Dataset is a very simple object that only returns one data point given its index. In order to create minibatches and speed up the data loading process, a DataLoader is required. The PyTorch DataLoader class is a very efficient mechanism that, given a Dataset, will return minibatches by optionally shuffling data before each epoch and by loading data in parallel using multiple workers.
To wrap up, let's see how the native, non-Avalanche PyTorch components work in practice. In the following code we create a TensorDataset and then load it in minibatches using a DataLoader.
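A minimal version of that code could look as follows:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

x_data = torch.rand(32, 3)
y_data = torch.randint(0, 10, (32,))
dataset = TensorDataset(x_data, y_data)

# shuffle=True reshuffles before each epoch; num_workers > 0 enables
# parallel data loading
loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=0)
for x_minibatch, y_minibatch in loader:
    print(x_minibatch.shape, y_minibatch.shape)
    # torch.Size([8, 3]) torch.Size([8])
```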
With these notions in mind, you can start your journey on understanding the functionalities offered by the AvalancheDatasets by going through the Mini How-Tos.
Please refer to the list of Mini How-Tos regarding AvalancheDatasets for a complete overview. It is recommended to start with the "Creating AvalancheDatasets" Mini How-To.
Dealing with AvalancheDatasets
The AvalancheDataset is an implementation of the PyTorch Dataset class that comes with many useful out-of-the-box functionalities. For most users, the AvalancheDataset can be used as a plain PyTorch Dataset that will return x, y, t elements. However, the AvalancheDataset is much more powerful than a simple PyTorch Dataset.
A series of Mini How-Tos will guide you through the functionalities of the AvalancheDataset and its subclasses:
Before jumping to the actual Mini How-Tos, we recommend having a look at the basic notions of Dataset and DataLoader by reading the Preamble page.
Creation and manipulation of AvalancheDataset and its subclasses.
The AvalancheDataset is an implementation of the PyTorch Dataset class which comes with many out-of-the-box functionalities. The AvalancheDataset (and its few subclasses) are extensively used throughout the whole Avalanche library as the reference way to manipulate datasets:
The dataset carried by the experience.dataset field is always an AvalancheDataset.
Benchmark creation functions accept AvalancheDatasets to create benchmarks where finer control over task labels is required.
Internally, benchmarks are created by manipulating AvalancheDatasets.
This first Mini How-To will guide you through the main ways you can use to instantiate an AvalancheDataset, while the other Mini How-Tos (complete list here) will show how to use its functionalities.
It is warmly recommended to run this page as a notebook using Colab (info at the bottom of this page).
Let's start by installing Avalanche:
First thing: the base class AvalancheDataset is a wrapper for existing datasets. Only two things must be considered when wrapping an existing dataset:
Apart from the x and y values, the resulting AvalancheDataset will also return a third value: the task label (which defaults to 0).
The wrapped dataset must contain a valid targets field.
The targets field is available in nearly all torchvision datasets. It must be a list containing the label for each data point (usually the y value). In this way, Avalanche can use that field when instantiating benchmarks like the Class/Task-Incremental and Domain-Incremental ones.
Avalanche exposes 4 classes of AvalancheDatasets which map exactly to the 4 Dataset classes offered by PyTorch:
AvalancheDataset: the base class, which acts as a wrapper for existing Dataset instances.
AvalancheTensorDataset: equivalent to PyTorch TensorDataset.
AvalancheSubset: equivalent to PyTorch Subset.
AvalancheConcatDataset: equivalent to PyTorch ConcatDataset.
Given a dataset (like MNIST), an AvalancheDataset can be instantiated as follows:
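A sketch along these lines should work, assuming the AvalancheDataset class is importable from the Avalanche benchmarks utilities:

```python
from avalanche.benchmarks.utils import AvalancheDataset
from torchvision.datasets import MNIST

# Wrap an existing dataset ('mnist_data' is an arbitrary download path)
mnist_dataset = MNIST('mnist_data', download=True)
avalanche_dataset = AvalancheDataset(mnist_dataset)
```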
Just like any other Dataset, a data point can be obtained using the x, y = dataset[idx] syntax. When obtaining a data point from an AvalancheDataset, an additional third value (the task label) will be returned:
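Continuing the sketch above:

```python
# An AvalancheDataset returns a third value: the task label (0 by default)
x, y, t = avalanche_dataset[0]
print(t)  # 0
```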
Useful tip: if you are not sure if you are dealing with a PyTorch Dataset or an AvalancheDataset, or if you want to ignore task labels, you can use this syntax:
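A star-unpacking pattern like the following works with both dataset types (a sketch, not the only possible form):

```python
# Any extra values (such as the task label) end up in "other",
# which is simply empty for plain PyTorch Datasets
x, y, *other = avalanche_dataset[0]
```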
The PyTorch TensorDataset is one of the most useful Dataset classes as it can be used to quickly prototype the data loading part of your code.
A TensorDataset can be wrapped in an AvalancheDataset just like any Dataset, but this is not very convenient, as shown below:
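A two-step sketch of this wrapping approach:

```python
import torch
from torch.utils.data import TensorDataset

from avalanche.benchmarks.utils import AvalancheDataset

x_data = torch.rand(10, 3)
y_data = torch.randint(0, 2, (10,))

# Two steps: first create the TensorDataset, then wrap it
tensor_dataset = TensorDataset(x_data, y_data)
avalanche_dataset = AvalancheDataset(tensor_dataset)
```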
Instead, it is recommended to use the AvalancheTensorDataset class to get the same result. In this way, you can just skip one intermediate step.
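A one-step sketch, assuming AvalancheTensorDataset is importable from the same module:

```python
from avalanche.benchmarks.utils import AvalancheTensorDataset

# One step: pass the tensors directly (reusing x_data and y_data from above)
avalanche_tensor_dataset = AvalancheTensorDataset(x_data, y_data)
```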
In both cases, AvalancheDataset will automatically populate its targets field by using the values from the second Tensor (which usually contains the Y values). This behaviour can be customized by passing a custom targets constructor parameter (either a list of targets or the index of the Tensor to use).
The cell below shows the content of the targets field of the dataset created in the cell above. Notice that the targets field has been filled with the content of the second Tensor (y_data).
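A minimal version of that cell:

```python
print(avalanche_tensor_dataset.targets)  # matches the content of y_data
```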
Avalanche offers the AvalancheSubset and AvalancheConcatDataset implementations that extend the functionalities of PyTorch Subset and ConcatDataset.
Regarding the subsetting operation, AvalancheSubset behaves in the same way the PyTorch Subset class does: both implementations accept a dataset and a list of indices as parameters. The resulting Subset is not a copy of the dataset; it's just a view. This is similar to creating a view of a NumPy array by passing a list of indices using the numpy_array[list_of_indices] syntax. This can be used both to create a smaller dataset and to change the order of data points in the dataset.
Here we create a toy dataset in which each X and Y value is an int. We then obtain a subset of it by creating an AvalancheSubset:
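A sketch of that toy example (assuming the AvalancheSubset signature mirrors PyTorch's Subset):

```python
import torch

from avalanche.benchmarks.utils import AvalancheSubset, AvalancheTensorDataset

x_data_toy = torch.arange(10)        # X values: 0, 1, ..., 9
y_data_toy = torch.arange(10) * 10   # Y values: 0, 10, ..., 90
toy_dataset = AvalancheTensorDataset(x_data_toy, y_data_toy)

# Keep (and reorder) only the data points at indices 5, 7 and 0
subset = AvalancheSubset(toy_dataset, indices=[5, 7, 0])
for x, y, t in subset:
    print(x.item(), y.item(), t)  # (5, 50, 0), (7, 70, 0), (0, 0, 0)
```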
Concatenation is even simpler. Just like with PyTorch ConcatDataset, one can easily concatenate datasets with AvalancheConcatDataset. Both AvalancheConcatDataset and PyTorch ConcatDataset accept a list of datasets to concatenate:
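For instance (reusing the toy datasets from above):

```python
from avalanche.benchmarks.utils import AvalancheConcatDataset

concatenated = AvalancheConcatDataset([toy_dataset, subset])
print(len(concatenated))  # 13: len(toy_dataset) + len(subset)
```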
This Mini How-To showed you how to create instances of AvalancheDataset (and its subclasses).
Other Mini How-Tos will guide you through the functionalities offered by AvalancheDataset. The list of Mini How-Tos can be found here.
Dealing with transformations (groups, appending, replacing, freezing).
AvalancheDataset (and its subclasses like the AvalancheTensor/Subset/ConcatDataset) allows for finer control over transformations. While torchvision (and other) datasets offer only a minimal mechanism to apply transformations, with AvalancheDataset one can:
Have multiple transformation "groups" in the same dataset (like separate train and test transformations).
Append, replace and remove transformations, even when using nested Subset/Concat datasets.
Freeze transformations, so that they can't be changed.
The following sub-sections show examples on how to use these features. Please note that all the constructor parameters and the methods described in this How-To can be used on AvalancheDataset subclasses as well. For more info on all the available subclasses, refer to this Mini How-To.
It is warmly recommended to run this page as a notebook using Colab (info at the bottom of this page).
Let's start by installing Avalanche:
AvalancheDatasets can contain multiple transformation groups. This can be useful to keep train and test transformations in the same dataset or, more generally, to have different sets of transformations. This may come in handy in many situations (for instance, to apply ad-hoc transformations to replay data).
As in torchvision datasets, AvalancheDataset supports two kinds of transformations: the transform, which is applied to X values, and the target_transform, which is applied to Y values. The latter is rarely used. This means that a transformation group is a pair of transformations to be applied to the X and Y values of each instance returned by the dataset. In both the torchvision and Avalanche implementations, a transformation must be a function (or other callable object) that accepts one input (the X or Y value) and outputs its transformed version. This pair of functions is stored in the transform and target_transform fields of the dataset. A comprehensive guide on transformations can be found in the torchvision documentation.
In the following example, a MNIST dataset is created and then wrapped in an AvalancheDataset. When creating the AvalancheDataset, we can set train and eval transformations by passing a transform_groups parameter. Train transformations usually include some form of random augmentation, while eval transformations usually include a sequence of deterministic transformations only. Here we define the sequence of train transformations as a random rotation followed by the ToTensor operation. The eval transformations only include the ToTensor operation.
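A sketch of that example, assuming transform_groups maps each group name to a (transform, target_transform) pair:

```python
from torchvision import transforms
from torchvision.datasets import MNIST

from avalanche.benchmarks.utils import AvalancheDataset

mnist_dataset = MNIST('mnist_data', download=True)

train_transformation = transforms.Compose([
    transforms.RandomRotation(45),
    transforms.ToTensor(),
])
eval_transformation = transforms.ToTensor()

transform_groups = {
    'train': (train_transformation, None),  # (transform, target_transform)
    'eval': (eval_transformation, None),
}
avl_mnist_transform = AvalancheDataset(
    mnist_dataset, transform_groups=transform_groups)
```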
Of course, one can also just use the transform and target_transform constructor parameters to set the transformations for both the train and the eval groups. However, it is recommended to use the approach based on transform_groups (shown in the code above) as it is much more flexible.
.train() and .eval()
The default behaviour of the AvalancheDataset is to use transformations from the train group. However, one can easily obtain a version of the dataset where the eval group is used. Note: when obtaining the dataset of experiences from the test stream, those datasets will already be using the eval group of transformations so you don't need to switch to the eval group ;).
As noted before, transformations for the current group are loaded in the transform and target_transform fields. These fields can be changed directly, but this is NOT recommended: it does not create a copy of the dataset and will probably affect other parts of the code in which the dataset is used.
The recommended way to switch between the train and eval groups is to use the .train() and .eval() methods to obtain a copy (view) of the dataset with the proper transformations enabled. This is another very handy feature of the AvalancheDataset: methods that manipulate the AvalancheDataset fields (and transformations) always create a view of the dataset. The original dataset is never changed.
In the following cell we use the avl_mnist_transform dataset created in the cells above. We first obtain a view of it in which eval transformations are enabled. Then, starting from this view, we obtain a version of it in which train transformations are enabled. We want to stress again that .train() and .eval() never change the group of the dataset on which they are called: they always create a view.
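A sketch of that cell:

```python
# Obtain a view with eval transformations enabled (the original is untouched)
avl_mnist_eval = avl_mnist_transform.eval()

# Starting from the eval view, obtain a view with train transformations
avl_mnist_train = avl_mnist_eval.train()

# Inspect which transformations each view will apply
print(avl_mnist_eval.transform)   # ToTensor only
print(avl_mnist_train.transform)  # RandomRotation followed by ToTensor
```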
One can check that the correct transformation group is in use by looking at the content of the transform/target_transform fields.
In AvalancheDatasets the train and eval transformation groups are always available. However, AvalancheDataset also supports custom transformation groups.
The following example shows how to create an AvalancheDataset with an additional group named replay. We define the replay transformation as a random crop followed by the ToTensor operation.
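A sketch of that example (here the train and eval groups are left empty for brevity):

```python
replay_transform = transforms.Compose([
    transforms.RandomCrop(size=(28, 28), padding=4),
    transforms.ToTensor(),
])

transform_groups_with_replay = {
    'train': (None, None),
    'eval': (None, None),
    'replay': (replay_transform, None),
}
avl_mnist_custom = AvalancheDataset(
    mnist_dataset, transform_groups=transform_groups_with_replay)
```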
However, once created, the dataset will use the train group. There are two ways to switch to our custom group:
Set the group when creating the dataset using the initial_transform_group constructor parameter.
Switch to the group using the .with_transforms(group_name) method.
The .with_transforms(group_name) method behaves in the same way .train() and .eval() do, by creating a view of the original dataset.
The following example shows how to use both methods:
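A sketch of both approaches (reusing the transform groups defined above):

```python
# Method 1: set the group at creation time
avl_replay_initial = AvalancheDataset(
    mnist_dataset,
    transform_groups=transform_groups_with_replay,
    initial_transform_group='replay')

# Method 2: switch an existing dataset to the group (creates a view)
avl_replay_view = avl_mnist_custom.with_transforms('replay')
```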
In the standard torchvision datasets the only way to append a transformation (that is, add a new transformation step to the list of existing ones) is to change the transform field directly by doing something like this:
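For instance, appending a normalization step to a torchvision MNIST dataset typically looks like this:

```python
from torchvision import transforms
from torchvision.datasets import MNIST

dataset = MNIST('mnist_data', download=True,
                transform=transforms.ToTensor())

# Overwrite the transform field directly to append a new step
dataset.transform = transforms.Compose([
    dataset.transform,
    transforms.Normalize((0.1307,), (0.3081,)),
])
```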