A Brief Introduction to Avalanche
Avalanche was born within ContinualAI with a clear goal in mind:
Pushing Continual Learning to the next level, providing a shared and collaborative library for fast prototyping, training and reproducible evaluation of continual learning algorithms.
As a powerful avalanche, a Continual Learning agent incrementally improves its knowledge and skills over time, building upon the previously acquired ones and learning how to interact with the external world.
We hope Avalanche may trigger the same positive reinforcement loop within our community, moving towards a more collaborative and inclusive way of doing research and helping us tackle bigger problems, faster and better, but together! 👪
Avalanche has several advantages:
Shared & Coherent Codebase: Aren't you tired of re-inventing the wheel in continual learning? We are. Reproducing paper results has always been daunting in machine learning, and it is even more so in continual learning. Avalanche lets you stop rewriting your (and other people's) code all over again: a coherent, shared codebase already provides all the utilities, benchmarks, metrics and baselines you may need for your next great continual learning research project!
Errors Reduction: The more code we write, the more bugs we introduce. This is the rule, not the exception. Avalanche lets you focus on what really matters: defining your CL solution. Everything from benchmark preparation to training, evaluation and comparison with other methods is already there for you. This, in turn, massively reduces the number of errors introduced and the time needed to debug your code.
Faster Prototyping: As researchers or data scientists, we have dozens of ideas every day and time is always too short to execute them. However, most of the time spent bringing our ideas to life is consumed by installing software, preparing and cleaning data, setting up the experiment code infrastructure, and so on. Avalanche lets you focus on the original algorithmic proposal, taking care of most of the rest!
Improved Reproducibility & Portability: One of the great features of Avalanche is the possibility of reproducing experimental results easily and on any OS. Researchers can simply plug their algorithm into the codebase and see how it fares with respect to other researchers' methods. Their algorithm, in turn, is used as a baseline for other methods, creating a virtuous circle. This is only possible thanks to the simple yet powerful idea of providing shared benchmarks, training and evaluation in a single place.
Improved Modularity: Avalanche has been designed with modularity in mind. As you learn more about Avalanche, you will realize we have sometimes foregone simplicity in favor of modularity and reusability (we hate code replication as much as you do 🤪). However, we believe this will help us scale in the near future as we collaboratively bring this codebase to maturity.
Increased Efficiency & Scalability: Full-stack researchers & data scientists know this: making your algorithm memory- and computationally efficient is tough. Avalanche is already optimized for you, so that you can run your ImageNet continual learning experiment on your 8GB laptop (buy a cooling fan 💨) or even try it on embedded devices of your latest product!
But most of all, Avalanche can help us standardize our field and work better together, more collaboratively, towards our shared goal of making machines learn over time like humans do.
Avalanche is the first experiment of an end-to-end library for reproducible continual learning research, where you can find benchmarks, algorithms, evaluation utilities and much more in the same place.
Let's make it together 👫 a wonderful ride! 🎈
First things first: let's start with a good model!
Welcome to the "Models" tutorial of the "From Zero to Hero" series. In this notebook we will talk about the features offered by the models
Avalanche sub-module.
Every continual learning experiment needs a model to train incrementally. You can use any torch.nn.Module
, even pretrained models. The models
sub-module provides the most commonly used architectures in the CL literature.
You can use any model provided in the Pytorch official ecosystem models as well as the ones provided by pytorchcv!
A continual learning model may change over time. As an example, a classifier may add new units for previously unseen classes, while progressive networks add a new set of units after each experience. Avalanche provides DynamicModules to support these use cases. DynamicModules are torch.nn.Modules that provide an additional method, adaptation, that is used to update the model's architecture. The method takes a single argument, the data from the current experience.
For example, an IncrementalClassifier updates the number of output units:
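Below is a minimal sketch of what such an adaptation loop typically looks like (this is not the original notebook cell; it assumes v0.3-style imports and a SplitMNIST benchmark with 5 experiences of 2 classes each):

```python
# Hedged sketch: growing an IncrementalClassifier over a class-incremental stream.
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import IncrementalClassifier

benchmark = SplitMNIST(n_experiences=5)
model = IncrementalClassifier(in_features=784)

print(model)
for experience in benchmark.train_stream:
    # in recent Avalanche versions, adaptation takes the current experience
    model.adaptation(experience)
    print(model)
```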
As you can see, after each call to the adaptation method, the model adds 2 new units to account for the new classes. Notice that no learning occurs at this point since the method only modifies the model's architecture.
Keep in mind that when you use Avalanche strategies you don't have to call the adaptation yourself. Avalanche strategies automatically call the model's adaptation and update the optimizer to include the new parameters.
Some models, such as multi-head classifiers, are designed to exploit task labels. In Avalanche, such models are implemented as MultiTaskModules. These are dynamic models (since they need to be updated whenever they encounter a new task) that have an additional task_labels argument in their forward method. task_labels is a tensor with a task id for each sample.
When you use a MultiHeadClassifier, a new head is initialized whenever a new task is encountered. Avalanche strategies automatically recognize multi-task modules and provide task labels to them.
If you want to define a custom multi-task module you need to override two methods: adaptation (if needed), and forward_single_task. The forward method of the base class will split the mini-batch by task id and provide single-task mini-batches to forward_single_task.
Alternatively, if you only want to convert a single-head model into a multi-head model, you can use the as_multitask wrapper, which converts the model for you.
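As a rough sketch (assuming SimpleMLP's output layer is named classifier, which is the case in recent Avalanche versions), the wrapper can be used like this:

```python
from avalanche.models import SimpleMLP, as_multitask

model = SimpleMLP(input_size=28 * 28, num_classes=2)
# replace the layer named "classifier" with a task-aware multi-head classifier
mt_model = as_multitask(model, "classifier")
print(mt_model)
```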
Continual Learning Algorithms Prototyping Made Easy
Welcome to the "Training" tutorial of the "From Zero to Hero" series. In this part we will present the functionalities offered by the training
module.
First, let's install Avalanche. You can skip this step if you have installed it already.
The training module in Avalanche is designed with modularity in mind. Its main goals are to:
provide a set of popular continual learning baselines that can be easily used to run experimental comparisons;
provide simple abstractions to create and run your own strategy as efficiently and easily as possible starting from a couple of basic building blocks we already prepared for you.
At the moment, the training module includes three main components:
Templates: these are high-level abstractions used as a starting point to define the actual strategies. The templates contain already-implemented basic utilities and functionalities shared by a group of strategies (e.g. the BaseSGDTemplate contains all the implemented methods to deal with strategies based on SGD).
Strategies: these are popular baselines already implemented for you which you can use for comparisons or as base classes to define a custom strategy.
Plugins: these are classes that allow adding some specific behavior to your own strategy. The plugin system allows defining reusable components which can be easily combined (e.g. a replay strategy, a regularization strategy). They are also used to automatically manage logging and evaluation.
Keep in mind that many Avalanche components are independent of Avalanche strategies. If you already have your own strategy which does not use Avalanche, you can use Avalanche's benchmarks, models, data loaders, and metrics without ever looking at Avalanche's strategies!
If you want to compare your strategy with other classic continual learning algorithms or baselines, in Avalanche you can instantiate a strategy with a couple of lines of code.
Most strategies require only 3 mandatory arguments:
model: this must be a torch.nn.Module.
optimizer: a torch.optim.Optimizer already initialized on your model.
loss: a loss function such as those in torch.nn.functional.
Additional arguments are optional and allow you to customize training (batch size, number of epochs, ...) or strategy-specific parameters (memory size, regularization strength, ...).
Each strategy object offers two main methods: train and eval. Both of them accept either a single experience (Experience) or a list of them, for maximum flexibility.
We can train the model continually by iterating over the train_stream provided by the scenario.
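The following sketch (not the original notebook cell; it assumes a SplitMNIST benchmark and v0.3-style import paths) shows the three mandatory arguments and the resulting training loop:

```python
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive

benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=10)
optimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = CrossEntropyLoss()

# model, optimizer and loss are the mandatory arguments; the rest is optional
strategy = Naive(
    model, optimizer, criterion,
    train_mb_size=128, train_epochs=1, eval_mb_size=128,
)

# train continually over the stream, evaluating on the whole test stream each time
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```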
Most continual learning strategies follow roughly the same training/evaluation loops, i.e. a simple naive strategy (a.k.a. finetuning) augmented with additional behavior to counteract catastrophic forgetting. The plugin system in Avalanche is designed to easily augment continual learning strategies with custom behavior, without having to rewrite the training loop from scratch. Avalanche strategies accept an optional list of plugins that will be executed during the training/evaluation loops.
For example, early stopping is implemented as a plugin:
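A hedged sketch of how the plugin is passed to a strategy (model, optimizer and criterion as defined above; the parameter names follow the EarlyStoppingPlugin of recent Avalanche versions):

```python
from avalanche.training.plugins import EarlyStoppingPlugin
from avalanche.training.supervised import Naive

# stop training when the monitored metric stops improving for `patience` evaluations
early_stopping = EarlyStoppingPlugin(patience=3, val_stream_name="train_stream")

strategy = Naive(
    model, optimizer, criterion,
    train_epochs=20,
    plugins=[early_stopping],
)
```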
In Avalanche, most continual learning strategies are implemented using plugins, which makes it easy to combine them together. For example, it is extremely easy to create a hybrid strategy that combines replay and EWC together by passing the appropriate plugins list to the SupervisedTemplate:
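A minimal sketch of such a hybrid strategy (plugin classes and the SupervisedTemplate import path follow Avalanche v0.3-style APIs):

```python
from avalanche.training.plugins import EWCPlugin, ReplayPlugin
from avalanche.training.templates import SupervisedTemplate

replay = ReplayPlugin(mem_size=200)
ewc = EWCPlugin(ewc_lambda=0.001)

# a "replay + EWC" strategy built from the plain supervised template
strategy = SupervisedTemplate(
    model, optimizer, criterion,
    plugins=[replay, ewc],
)
```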
Beware that most strategy plugins modify the internal state. As a result, not all the strategy plugins can be combined together. For example, it does not make sense to use multiple replay plugins since they will try to modify the same strategy variables (mini-batches, dataloaders), and therefore they will be in conflict.
If you arrived at this point you already know how to use Avalanche strategies and are ready to use them. However, before writing your own strategies you need to understand a bit of the internal implementation of the training and evaluation loops.
In Avalanche you can customize a strategy in 2 ways:
Plugins: Most strategies can be implemented as additional code that runs on top of the basic training and evaluation loops (e.g. the Naive strategy). Therefore, the easiest way to define a custom strategy, such as a regularization or replay strategy, is to define it as a custom plugin. The advantage of plugins is that they can be combined, as long as they are compatible, i.e. they do not modify the same part of the state. The disadvantage is that in order to do so you need to understand the strategy loop, which can be a bit complex at first.
Subclassing: In Avalanche, continual learning strategies inherit from the appropriate template, which provides generic training and evaluation loops. The most high-level template is the BaseTemplate, from which all of Avalanche's strategies inherit. Most of the template's methods can be safely overridden (with some caveats that we will see later).
Keep in mind that if you already have a working continual learning strategy that does not use Avalanche, you can use most Avalanche components, such as benchmarks, evaluation, and models, without using Avalanche's strategies!
As we already mentioned, Avalanche strategies inherit from the appropriate template (e.g. continual supervised learning strategies inherit from the SupervisedTemplate). These templates provide:
Basic Training and Evaluation loops which define a naive (finetuning) strategy.
Callback points, which are used to call the plugins at specific moments during the loop's execution.
A set of variables representing the state of the loops (current model, data, mini-batch, predictions, ...) which allows plugins and child classes to easily manipulate the state of the training loop.
The training loop has the following structure:
The evaluation loop is similar:
Methods starting with before/after are responsible for calling the plugins. Notice that before the start of each experience during training we have several phases:
dataset adaptation: This is the phase where the training data can be modified by the strategy, for example by adding other samples from a separate buffer.
dataloader initialization: Initialize the data loader. Many strategies (e.g. replay) use custom dataloaders to balance the data.
model adaptation: Here, the dynamic models (see the models tutorial) are updated by calling their adaptation method.
optimizer initialization: After the model has been updated, the optimizer should also be updated to ensure that the new parameters are optimized.
The strategy state is accessible via several attributes. Most of these can be modified by plugins and subclasses:
self.clock: keeps track of several event counters.
self.experience: the current experience.
self.adapted_dataset: the data modified by the dataset adaptation phase.
self.dataloader: the current dataloader.
self.mbatch: the current mini-batch. For supervised classification problems, mini-batches have the form <x, y, t>, where x is the input, y is the target class, and t is the task label.
self.mb_output: the current model's output.
self.loss: the current loss.
self.is_training: True if the strategy is in training mode.
Plugins provide a simple solution to define a new strategy by augmenting the behavior of another strategy (typically the Naive strategy). This approach reduces overhead and code duplication, improving code readability and prototyping speed.
Creating a plugin is straightforward. As with strategies, you must create a class that inherits from the corresponding plugin template (BasePlugin, BaseSGDPlugin, SupervisedPlugin) and implements the callbacks that you need. The exact callbacks to use depend on the aim of your plugin. You can use the loop shown above to understand which callbacks you need. For example, we show below a simple replay plugin that uses after_training_exp to update the buffer after each training experience, and before_training_exp to customize the dataloader. Notice that before_training_exp is executed after make_train_dataloader, which means that the Naive strategy has already updated the dataloader. If we used another callback, such as before_train_dataset_adaptation, our dataloader would have been overwritten by the Naive strategy. Plugin methods always receive the strategy as an argument, so they can access and modify the strategy's state.
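Here is a sketch of the replay plugin described above. It is not the exact cell from the original notebook: it assumes ReservoirSamplingBuffer and ReplayDataLoader are available under the usual v0.3-style paths.

```python
from avalanche.benchmarks.utils.data_loader import ReplayDataLoader
from avalanche.training.plugins import SupervisedPlugin
from avalanche.training.storage_policy import ReservoirSamplingBuffer


class CustomReplayPlugin(SupervisedPlugin):
    def __init__(self, mem_size: int = 200):
        super().__init__()
        self.storage_policy = ReservoirSamplingBuffer(max_size=mem_size)

    def before_training_exp(self, strategy, num_workers=0, shuffle=True, **kwargs):
        # Called after make_train_dataloader: replace the dataloader with one
        # that mixes the current experience with the rehearsal buffer.
        if len(self.storage_policy.buffer) == 0:
            return  # first experience: nothing to replay yet
        strategy.dataloader = ReplayDataLoader(
            strategy.adapted_dataset,
            self.storage_policy.buffer,
            batch_size=strategy.train_mb_size,
            shuffle=shuffle,
            num_workers=num_workers,
        )

    def after_training_exp(self, strategy, **kwargs):
        # Update the buffer with the data we just trained on.
        self.storage_policy.update(strategy, **kwargs)
```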
The animation below shows the execution flow and callback steps of a Naive strategy that is extended with the EWC plugin:
Check base plugin's documentation for a complete list of the available callbacks.
You can always define a custom strategy by overriding the corresponding template methods. However, there is an important caveat to keep in mind. If you override a method, you must remember to call all the callback handlers (the methods starting with before/after) at the appropriate points. For example, train calls before_training and after_training before and after the training loops, respectively. The easiest way to avoid mistakes is to start from the template method that you want to override and modify it based on your own needs, without removing the callback handling.
Notice that the EvaluationPlugin (see the evaluation tutorial) uses the strategy callbacks.
As an example, the SupervisedTemplate, for continual supervised strategies, provides the global state of the loop in the strategy's attributes, which you can safely use when you override a method. For instance, the Cumulative strategy trains a model continually on the union of all the experiences encountered so far. To achieve this, the cumulative strategy overrides adapt_train_dataset and updates self.adapted_dataset by concatenating all the previous experiences with the current one.
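A sketch of this pattern is shown below. Note that helper and hook names vary across Avalanche versions: here we assume the train_dataset_adaptation hook and the concat_datasets utility of recent releases (the text above uses the older name adapt_train_dataset).

```python
from avalanche.benchmarks.utils import concat_datasets
from avalanche.training.templates import SupervisedTemplate


class MyCumulative(SupervisedTemplate):
    """Trains on the union of all the experiences seen so far."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.full_dataset = None  # cumulative training set

    def train_dataset_adaptation(self, **kwargs):
        super().train_dataset_adaptation(**kwargs)
        if self.full_dataset is None:
            self.full_dataset = self.adapted_dataset
        else:
            self.full_dataset = concat_datasets(
                [self.full_dataset, self.adapted_dataset]
            )
        self.adapted_dataset = self.full_dataset
```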
Easy, isn't it? :-)
In general, we recommend implementing a strategy via plugins, if possible. This approach is the easiest to use and requires minimal knowledge of the strategy templates. It also allows other people to reuse your plugin and facilitates interoperability among different strategies.
For example, replay strategies can be implemented as a custom strategy or as plugins. However, creating a plugin allows using the replay in conjunction with other strategies or plugins, making it possible to combine different approaches to build the ultimate continual learning algorithm!
This completes the "Training" chapter for the "From Zero to Hero" series. We hope you enjoyed it!
Automatic Evaluation with Pre-implemented Metrics
Welcome to the "Evaluation" tutorial of the "From Zero to Hero" series. In this part we will present the functionalities offered by the evaluation
module.
The evaluation
module is quite straightforward: it offers all the basic functionalities to evaluate and keep track of a continual learning experiment.
This is mostly done through the Metrics: a set of classes which implement the main continual learning metrics computation like A_ccuracy_, F_orgetting_, M_emory Usage_, R_unning Times_, etc. At the moment, in Avalanche we offer a number of pre-implemented metrics you can use for your own experiments. We made sure to include all the major accuracy-based metrics but also the ones related to computation and memory.
Each metric comes with a standalone class and a set of plugin classes aimed at emitting metric values on specific moments during training and evaluation.
As an example, the standalone Accuracy
class can be used to monitor the average accuracy over a stream of <input,target>
pairs. The class provides an update
method to update the current average accuracy, a result
method to print the current average accuracy and a reset
method to set the current average accuracy to zero. The call to result
does not change the metric state.
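A minimal sketch of the standalone metric (the update signature follows the Accuracy class of Avalanche v0.3, where task-aware accuracy is a separate metric):

```python
import torch
from avalanche.evaluation.metrics import Accuracy

acc_metric = Accuracy()

true_y = torch.tensor([1, 2]).long()
predicted_y = torch.tensor([1, 0]).float()

acc_metric.update(predicted_y, true_y)
print("Average accuracy:", acc_metric.result())  # 0.5

acc_metric.reset()
print("After reset:", acc_metric.result())  # 0.0
```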
The TaskAwareAccuracy metric keeps separate accuracy counters for different task labels. As such, it requires the task_labels parameter, which specifies which task is associated with the current patterns. The metric returns a dictionary mapping task labels to accuracy values.
If you want to integrate the available metrics automatically in the training and evaluation flow, you can use plugin metrics, like EpochAccuracy, which logs the accuracy after each training epoch, or ExperienceAccuracy, which logs the accuracy after each evaluation experience. Each of these metrics emits a curve composed of its values at different points in time (e.g. on different training epochs). In order to simplify the use of these metrics, we provide utility functions with which you can create different plugin metrics in one shot. The results of these functions can be passed as parameters directly to the EvaluationPlugin (see below).
We recommend using the helper functions when creating plugin metrics.
The Evaluation Plugin is the object in charge of configuring and controlling the evaluation procedure. This object can be passed to a Strategy as a "special" plugin through the evaluator attribute.
The Evaluation Plugin accepts as inputs the plugin metrics you want to track. In addition, you can add one or more loggers to print the metrics in different ways (on file, on standard output, on Tensorboard...).
It is also recommended to pass the benchmark instance used in the experiment to the Evaluation Plugin. This allows the plugin to check for consistency during metrics computation. For example, the Evaluation Plugin checks that the strategy.eval calls are performed on the same stream or sub-stream. Otherwise, the same metric could refer to different portions of the stream.
These checks can be configured to raise errors (stopping computation) or only warnings.
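A hedged sketch of a typical Evaluation Plugin setup (helper functions, loggers and the benchmark argument follow v0.3-style APIs; model, optimizer, criterion and benchmark are assumed to be defined as in the previous chapters):

```python
from avalanche.evaluation.metrics import accuracy_metrics, forgetting_metrics, loss_metrics
from avalanche.logging import InteractiveLogger, TextLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.supervised import Naive

eval_plugin = EvaluationPlugin(
    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loss_metrics(epoch=True, experience=True),
    forgetting_metrics(experience=True, stream=True),
    loggers=[InteractiveLogger(), TextLogger(open("log.txt", "w"))],
    benchmark=benchmark,  # enables the consistency checks described above
)

# the plugin is passed to the strategy through the `evaluator` argument
strategy = Naive(model, optimizer, criterion, evaluator=eval_plugin)
```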
To implement a standalone metric, you have to subclass the Metric class.
To implement a plugin metric, you have to subclass the PluginMetric class.
If you want to access all the metrics computed during training and evaluation, you have to make sure that collect_all=True is set when creating the EvaluationPlugin (the default option is True). This option maintains an updated version of all metric results in the plugin, which can be retrieved by calling evaluation_plugin.get_all_metrics(). You can call this method whenever you need the metrics.
The result is a dictionary with full metric names as keys and a tuple of two lists as values. The first list stores all the x values recorded for that metric. Each x value represents the time step at which the corresponding metric value has been computed. The second list stores the metric values associated with the corresponding x values.
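For instance (a hedged sketch; the exact metric names depend on the metrics you configured):

```python
all_metrics = eval_plugin.get_all_metrics()
for name, (xs, values) in all_metrics.items():
    print(name, "->", len(xs), "recorded values")
```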
Alternatively, the train and eval methods of every strategy return a dictionary storing, for each metric, the last value recorded for that metric. You can use these dictionaries to incrementally accumulate metrics.
This completes the "Evaluation" tutorial for the "From Zero to Hero" series. We hope you enjoyed it!
Design Your Continual Learning Experiments
Welcome to the "Putting All Together" tutorial of the "From Zero to Hero" series. In this part we will summarize the major Avalanche features and how you can put them together for your continual learning experiments.
Here we report a complete example of the Avalanche usage:
Converting PyTorch Datasets to Avalanche Dataset
Datasets are a fundamental data structure for continual learning. Unlike offline training, in continual learning we often need to manipulate datasets to create streams, benchmarks, or to manage replay buffers. High-level utilities and predefined benchmarks already take care of the details for you, but you can easily manipulate the data yourself if you need to. These How-Tos will explain:
PyTorch datasets and data loading
How to instantiate Avalanche Datasets
AvalancheDataset features
In Avalanche, the AvalancheDataset is everywhere:
The dataset carried by the experience.dataset field is always an AvalancheDataset.
Many benchmark creation functions accept AvalancheDatasets to create benchmarks.
Avalanche benchmarks are created by manipulating AvalancheDatasets.
Replay buffers also use AvalancheDataset to easily concatenate data and handle transformations.
In PyTorch, a Dataset is a class exposing two methods:
__len__(), which returns the amount of instances in the dataset (as an int).
__getitem__(idx), which returns the data point at index idx.
In other words, a Dataset instance is just an object for which, similarly to a list, one can simply:
Obtain its length using the Python len(dataset) function.
Obtain a single data point using the x, y = dataset[idx] syntax.
The content of the dataset can be either loaded in memory when the dataset is instantiated (like the torchvision MNIST dataset does) or, for big datasets like ImageNet, the content is kept on disk, with the dataset keeping the list of files in an internal field. In this case, data is loaded from storage on-the-fly when __getitem__(idx) is called. The way those things are managed is specific to each dataset implementation.
To create an AvalancheDataset from a PyTorch dataset, you only need to pass the original data to the constructor, as follows:
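A hedged sketch (make_classification_dataset is the assumed v0.3-style helper for classification data; plain AvalancheDataset can be used for generic datasets):

```python
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor
from avalanche.benchmarks.utils import make_classification_dataset

torch_data = MNIST(".", download=True, train=True, transform=ToTensor())
avl_data = make_classification_dataset(torch_data)

# the wrapped dataset behaves like the original one, but returns <x, y, t>
x, y, t = avl_data[0]
print(y, t)
```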
The wrapped dataset is equivalent to the original one. A classification dataset returns triplets of the form <x, y, t>, where t is the task label (which defaults to 0). The wrapped dataset must contain a valid targets field.
Avalanche provides some utility functions to create supervised classification datasets, such as make_tensor_classification_dataset for tensor datasets. All of these functions will automatically create the targets and targets_task_labels attributes.
While PyTorch provides two different classes for concatenation and subsampling (ConcatDataset and Subset), Avalanche implements them as dataset methods. These operations return a new dataset, leaving the original one unchanged.
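A short sketch, assuming the concat/subset methods of the recent AvalancheDataset API:

```python
cat_data = avl_data.concat(avl_data)          # new dataset, original unchanged
print(len(cat_data), len(avl_data))

sub_data = avl_data.subset(list(range(100)))  # first 100 samples
print(len(sub_data))
```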
AvalancheDataset allows you to add attributes to datasets. Attributes are named arrays that carry some information that is propagated by concatenation and subsampling operations. For example, classification datasets use this functionality to manage class and task labels.
Thanks to DataAttributes, you can freely operate on your data (e.g. to manage a replay buffer) without losing class or task labels. This makes it easy to manage multi-task datasets or to balance datasets by class.
Most datasets from the torchvision library (as well as datasets found "in the wild") allow for a transformation function to be passed to the dataset constructor. The support for transformations is not mandatory for a dataset, but it is quite common to support them. The transformation is used to process the X value of a data point before returning it. This is used to normalize values, apply augmentations, etcetera.
With these notions in mind, you can start your journey in understanding the functionalities offered by AvalancheDatasets by going through the Mini How-Tos.
Create your Continual Learning Benchmark and Start Prototyping
Welcome to the "benchmarks" tutorial of the "From Zero to Hero" series. In this part we will present the functionalities offered by the Benchmarks
module.
Avalanche Benchmarks provide the data that you will for training and evaluating your model. Benchmarks have the following structure:
A Benchmark
is a collection of streams. Most benchmarks have at least a train_stream
and a test_stream
;
A Stream
is a sequence of Experience
s. It can be a list or a generator;
An Experience
contains all the information available at a certain time t
;
AvalancheDataset
is a wrapper of PyTorch datasets. It provides functionalities used by the training module, such as concatenation, subsampling, and management of augmentations.
The bechmarks
module offers:
Datasets: Pytorch datasets are wrapped in an AvalancheDataset
to provide additional functionality.
Classic Benchmarks: classic benchmarks used in CL litterature ready to be used with great flexibility.
Benchmarks Generators: a set of functions you can use to create your own benchmark and streams starting from any kind of data and scenario, such as class-incremental or task-incremental streams.
But let's see how we can use this module in practice!
Let's start with the Datasets. When using Avalanche, your code will manipulate AvalancheDatasets. It is a wrapper compatible with PyTorch and torchvision map-style datasets.
In this example we created a classification dataset. Avalanche expects a targets attribute for classification datasets, which is provided by MNIST and most classification datasets. Avalanche provides concatenation and subsampling, which also keep the dataset attributes consistent.
Most benchmarks will provide two streams: the train_stream and test_stream. Often, these are two parallel streams of the same length, where each experience is sampled from the same distribution (e.g. same set of classes). Some benchmarks may have a single test experience with the whole test dataset.
Experiences provide all the information needed to update the model, such as the new batch of data, and they may be decorated with attributes that are helpful for training or logging purposes. Long streams can be generated on-the-fly to reduce memory requirements and avoid long preprocessing times during the benchmark creation step.
We will use SplitMNIST, a popular CL benchmark which is the class-incremental version of MNIST.
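For example (a minimal sketch, assuming v0.3-style imports):

```python
from avalanche.benchmarks.classic import SplitMNIST

benchmark = SplitMNIST(n_experiences=5, seed=1)

for experience in benchmark.train_stream:
    print(
        "Experience", experience.current_experience,
        "- classes:", experience.classes_in_this_experience,
        "- samples:", len(experience.dataset),
    )
```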
The most basic way to create a benchmark is to use the benchmark_from_datasets function. It takes a list of datasets for each stream and returns a benchmark with the specified streams.
We can also split a validation stream from the training stream.
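The sketch below shows both operations. The exact import locations of benchmark_from_datasets and benchmark_with_validation_stream vary across Avalanche versions, so treat the paths as assumptions:

```python
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor
from avalanche.benchmarks import benchmark_from_datasets, benchmark_with_validation_stream
from avalanche.benchmarks.utils import make_classification_dataset

train_d = make_classification_dataset(
    MNIST(".", train=True, download=True, transform=ToTensor()))
test_d = make_classification_dataset(
    MNIST(".", train=False, download=True, transform=ToTensor()))

# one dataset per experience for each stream (a toy single-experience stream here)
bm = benchmark_from_datasets(train=[train_d], test=[test_d])

# carve a validation stream out of 20% of each training experience
valid_bm = benchmark_with_validation_stream(bm, validation_size=0.2)
```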
The Continual Learning nomenclature is overloaded and quite confusing. Avalanche has its own nomenclature to provide consistent naming across the library. For example:
Task-awareness: a model is task-aware if it requires task labels. Avalanche benchmarks can have task labels to support this use case;
Online: online streams are streams with small experiences (e.g. 10 samples). They look exactly like their "large batches" counterpart, except for the fact that len(experience.dataset) is small;
Boundary-awareness: a model is boundary-aware if it requires boundary labels. Boundary-free models are also called task-free in the literature (there is no accepted nomenclature for "boundary-aware" models). We don't use this nomenclature because tasks and boundaries are different concepts in Avalanche. Avalanche benchmarks can have boundary labels to support this use case. Even for boundary-free models, Avalanche benchmarks can provide boundary labels to support evaluation metrics that require them;
Classification: classification is the most common CL setting. Avalanche adds class labels to experiences to simplify the user's code. Similarly, Avalanche datasets keep track of targets to support this use case.
Avalanche experiences can be decorated with different attributes depending on the specific setting. Classic benchmarks already provide the attributes you need. We will see some examples of attributes and generators in the remaining part of this tutorial.
One general aspect of experience attributes is that they may not always be available. Sometimes, a model can use task labels during training but not at evaluation time. Other times, the model should never use task labels, but you may still need them for evaluation purposes (to compute task-aware metrics). Avalanche experiences have different modalities:
training mode
evaluation mode
logging mode
Each modality can provide access or mask some of the experience attributes. This mechanism allows you to easily add private attributes to the experience for logging purposes while ensuring that the model will not cheat by using that information.
Classification benchmarks follow the ClassesTimeline protocol and provide attributes about the classes in the stream.
Task-aware benchmarks add task labels, following the TaskAware protocol.
To define online streams we need two things:
a mechanism to split a larger stream;
an attribute that indicates the boundaries (if necessary).
This is how you do it in Avalanche:
This completes the "Benchmark" tutorial for the "From Zero to Hero" series. We hope you enjoyed it!
The last step towards becoming a real continual learning super-hero ⚡ is to fall into a radioactive dump.☢️ Just kidding, it's much easier than that: you need to contribute back to Avalanche!
There are no superheroes that are not altruistic!
In order to contribute to Avalanche, first of all you need to become familiar with all its features and the codebase structure, so if you have not followed the "From Zero to Hero" tutorial from the beginning, we suggest doing so before starting to make changes.
After you've familiarized yourself with the Avalanche codebase, you have two roads ahead of you:
You can start working on an open issue (we have dozens of them!)
You can open a new issue and propose yourself to work on it.
In either case, you'll need to follow the steps below:
⭐ Star + 👁️ watch the repository.
Fork the repository.
Create or assign an existing issue/feature to yourself.
Make your changes.
The following rules should be respected:
Always pull before pushing a commit.
Try to assign to yourself one issue at a time.
Try closing an issue within roughly 7 days. If you are not able to do that, please break it down into multiple smaller issues you can tackle more easily, or you can always remove your assignment from the issue!
If you add a new feature, please include also a test and a usage example in your PR.
Also, before making your PR, first run the following command for code formatting with Black:
Then, make sure that the following command returns no test errors:
Otherwise, fix them and run these commands again until everything is working correctly. You should also check that everything works on GPUs, using the environment variable USE_GPU=True:
Faster integrity checks can be run with the environment variable FAST_TEST=True:
Contribute to the Avalanche documentation
To contribute to the documentation you need to follow the steps below:
The notebooks are contained in the notebooks folder. The folder structure mirrors the documentation, so do not create or delete any folder.
Find the notebook that you want to edit and make all your modifications 📝
Commit the changes and open a pull request (PR).
If your pull request is accepted, your edited notebooks will be automatically converted and uploaded to the official Avalanche website 🎊!
Avalanche Features: Benchmarks, Strategies & Metrics
Avalanche is a framework in constant development. Thanks to the support of the community and its active members, we plan to extend its features and improve its usability based on the demands of our research community! At the moment, Avalanche is in Beta (v0.3.1). We support a large number of Benchmarks, Strategies and Metrics, which makes it, we believe, the best tool out there for your continual learning research! 💪
You can find the full list of available features on the .
Do you think we are missing some important features? Please let us know! We deeply value your feedback!
Avalanche supports all the most popular computer vision datasets used in Continual Learning. Some of them are available in Torchvision, while others have been integrated by us. Most datasets can be automatically downloaded by Avalanche.
Toy datasets: MNIST, Fashion MNIST, KMNIST, EMNIST, QMNIST.
CIFAR: CIFAR10, CIFAR100.
ImageNet: TinyImagenet, MiniImagenet, Imagenet.
Others: EndlessCLDataset, CUB200, OpenLORIS, Stream-51, INATURALIST2018, Omniglot, CLEARImage, ...
All the major continual learning benchmarks are available and ready to use. Benchmarks split the datasets and create the train and test streams:
MNIST: SplitMNIST, RotatedMNIST, PermutedMNIST, SplitFashionMNIST.
CIFAR: SplitCIFAR10, SplitCIFAR100, SplitCIFAR110.
CORe50: all the CORe50 scenarios are supported.
Others: SplitCUB200, CLStream51, CLEAR.
Baselines: Naive, JointTraining, Cumulative.
Rehearsal: Replay with reservoir sampling and balanced buffers, GSS greedy, CoPE, Generative Replay.
Regularization: EWC, LwF, GEM, AGEM, CWR*, Synaptic Intelligence, MAS.
Architectural: Progressive Neural Networks, multi-head, incremental classifier.
Others: GDumb, iCaRL, AR1, Streaming LDA, LFL.
Avalanche uses and extends PyTorch nn.Module to define continual learning models:
support for nn.Modules and torchvision models;
dynamic output heads for class-incremental scenarios and multi-head models for task-incremental scenarios;
support for architectural strategies and dynamically expanding models such as progressive neural networks.
Avalanche provides continuous evaluation of CL strategies with a large set of Metrics. They are collected and logged automatically by the strategy during the training and evaluation loops.
Standard Performance Metrics: accuracy, loss, confusion matrix (averaged over streams or experiences).
CL-Metrics: backward/forward transfer, forgetting.
Computational Resources: CPU and RAM usage, MAC, execution times.
Dealing with transformations (groups, appending, replacing, freezing).
While torchvision (and other) datasets typically have a fixed set of transformations, AvalancheDataset also provides some additional functionalities. AvalancheDatasets can:
Have multiple transformation "groups" in the same dataset (like separate train and eval transformations).
Manipulate transformation by freezing, replacing and removing them.
The following sub-sections show examples on how to use these features. It is warmly recommended to run this page as a notebook using Colab (info at the bottom of this page).
Let's start by installing Avalanche:
AvalancheDatasets can contain multiple transformation groups. This can be useful to keep train and test transformations in the same dataset and to have different sets of transformations. For instance, you can easily add ad-hoc transformations to use for replay data.
For classification datasets, we follow torchvision conventions. Therefore, make_classification_dataset supports transform, which is applied to input (X) values, and target_transform, which is applied to class labels (Y). The latter is rarely used. This means that a transformation group is a pair of transformations to be applied to the X and Y values of each instance returned by the dataset. In both the torchvision and Avalanche implementations, a transformation must be a function (or other callable object) that accepts one input (the X or Y value) and outputs its transformed version. A comprehensive guide on transformations can be found in the torchvision documentation.
In the following example, a MNIST dataset is created and then wrapped in an AvalancheDataset. When creating the AvalancheDataset, we can set train and eval transformations by passing a transform_groups parameter. Train transformations usually include some form of random augmentation, while eval transformations usually include a sequence of deterministic transformations only. Here we define the sequence of train transformations as a random rotation followed by the ToTensor operation. The eval transformations only include the ToTensor operation.
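A hedged sketch of the cell described above (make_classification_dataset and its transform_groups argument are assumed from the v0.3-style API):

```python
from torchvision.datasets import MNIST
from torchvision.transforms import Compose, RandomRotation, ToTensor
from avalanche.benchmarks.utils import make_classification_dataset

mnist_dataset = MNIST(".", download=True, train=True)

train_transforms = Compose([RandomRotation(45), ToTensor()])
eval_transforms = Compose([ToTensor()])

# each group maps to a (transform, target_transform) pair
avl_mnist_transform = make_classification_dataset(
    mnist_dataset,
    transform_groups={
        "train": (train_transforms, None),
        "eval": (eval_transforms, None),
    },
)
```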
Of course, one can also just use the transform and target_transform constructor parameters to set the transformations for both the train and the eval groups. However, it is recommended to use the approach based on transform_groups (shown in the code above) as it is much more flexible.
.train() and .eval()
The default behaviour of the AvalancheDataset is to use transformations from the train group. However, one can easily obtain a version of the dataset where the eval group is used. Note: when obtaining the dataset of experiences from the test stream, those datasets will already be using the eval group of transformations so you don't need to switch to the eval group ;).
You can switch between the train and eval groups using the .train() and .eval() methods to obtain a copy (view) of the dataset with the proper transformations enabled. As a general rule, methods that manipulate the AvalancheDataset fields (and transformations) always create a view of the dataset. The original dataset is never changed.
In the following cell we use the avl_mnist_transform dataset created in the cells above. We first obtain a view of it in which eval transformations are enabled. Then, starting from this view, we obtain a version of it in which train transformations are enabled. We want to double-stress that .train() and .eval() never change the group of the dataset on which they are called: they always create a view.
One can check that the correct transformation group is in use by looking at the content of the transform/target_transform fields.
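A short sketch of this switching (the transform field inspection assumes the fields mentioned above are exposed by your Avalanche version):

```python
# obtain a view with eval transformations enabled, then go back to train
avl_mnist_eval = avl_mnist_transform.eval()
avl_mnist_train = avl_mnist_eval.train()

# the original dataset is unchanged; inspect the active transformations
print(avl_mnist_transform.transform)
print(avl_mnist_eval.transform)
```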
In AvalancheDatasets the train and eval transformation groups are always available. However, AvalancheDataset also supports custom transformation groups.
The following example shows how to create an AvalancheDataset with an additional group named replay. We define the replay transformation as a random crop followed by the ToTensor operation.
However, once created, the dataset will use the train group. You can switch to the new group using the .with_transforms(group_name) method. The .with_transforms(group_name) method behaves in the same way .train() and .eval() do, by creating a view of the original dataset.
The replacement operation follows the same idea (and benefits) of the append one. By using .replace_current_transform_group(transform, target_transform) one can obtain a view of the original dataset in which the transformations for the current group are replaced with the given ones. One may also change transformations for other groups by passing the name of the group as the optional parameter group. As with any transform-related operation, the original dataset is not affected.
Note: one can use .replace_transforms(...) to remove previous transformations (by passing None as the new transform).
The following cell shows how to use .replace_transforms(...) to replace the transformations of the current group:
One last functionality regarding transformations is the ability to "freeze" transformations. Freezing transformations means permanently gluing transformations to the dataset so that they can't be replaced or changed in any way (usually by mistake). Frozen transformations cannot be changed by using .replace_transforms(...).
One may wonder when this may come in handy... in fact, you will probably rarely need to freeze transformations. However, imagine having to instantiate the PermutedMNIST benchmark. You want the permutation transformation to not be changed by mistake. However, the end users do not know how the internal implementation of the benchmark works, so they may end up messing with those transformations. By freezing the permutation transformation, users cannot mess with it.
Transformations for all transform groups can be frozen at once by using .freeze_transforms(). As always, those methods return a view of the original dataset.
In this way, that transform can't be removed. However, remember that one can always append other transforms atop of frozen transforms.
The cell below shows that replace_transforms can't remove frozen transformations:
This completes the Mini How-To for the functionalities of the AvalancheDataset related to transformations.
Here you learned how to use transformation groups and how to append/replace/freeze transformations in a simple way.
Logging... logging everywhere! 🔮
Welcome to the "Logging" tutorial of the "From Zero to Hero" series. In this part we will present the functionalities offered by the Avalanche logging
module.
In the previous tutorial we have learned how to evaluate a continual learning algorithm in Avalanche, through different metrics that can be used off-the-shelf via the Evaluation Plugin or stand-alone. However, computing metrics and collecting results, may not be enough at times.
While running complex experiments with long waiting times, logging results over-time is fundamental to "babysit" your experiments in real-time, or even understand what went wrong in the aftermath.
This is why in Avalanche we decided to put a strong emphasis on logging and provide a number of loggers that can be used with any set of metrics!
Avalanche at the moment supports four main Loggers:
InteractiveLogger: This logger provides a nice progress bar and displays real-time metric results in an interactive way (meant for stdout).
TextLogger: This logger, mostly intended for file logging, is the plain-text version of the InteractiveLogger. Keep in mind that it may be very verbose.
TensorboardLogger: It logs all the metrics on TensorBoard in real time. Perfect for real-time plotting.
WandBLogger: It leverages Weights & Biases tools to log metrics and results on a dashboard. It requires a W&B account.
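A minimal sketch of how the four loggers are instantiated (v0.3-style import paths; the WandBLogger arguments are indicative):

```python
from avalanche.logging import InteractiveLogger, TensorboardLogger, TextLogger, WandBLogger

interactive_logger = InteractiveLogger()
text_logger = TextLogger(open("log.txt", "w"))
tb_logger = TensorboardLogger()
wandb_logger = WandBLogger(project_name="avalanche", run_name="test")  # needs a W&B account

# loggers are then passed to the EvaluationPlugin together with the metrics
# (see the "Evaluation" tutorial)
```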
In order to keep track of when each metric value has been logged, we leverage two global counters, one for the training phase and one for the evaluation phase. You can see the global counter value reported on the x axis of the logged plots.
Each global counter is an ever-increasing value which starts from 0 and is increased by one each time a training/evaluation iteration is performed (i.e. after each training/evaluation mini-batch). The global counters are updated automatically by the strategy.
If the available loggers are not sufficient to suit your needs, you can always implement a custom logger by specializing the behavior of the StrategyLogger base class.
This completes the "Logging" tutorial for the "From Zero to Hero" series. We hope you enjoyed it!
Examples for the Models module offered in Avalanche
Avalanche offers basic support for defining your own models or adapting existing PyTorch models, with a particular emphasis on model adaptation over time.
You can find examples related to the models here:
: This example shows how to train models provided by pytorchcv with the rehearsal strategy.
: This example trains a multi-head model on Split MNIST with Elastic Weight Consolidation. Each experience has a different task label, which is used at test time to select the appropriate head.
Avalanche is mostly about making the life of a continual learning researcher easier.
Below, you can see the main Avalanche modules and how they interact with each other.
What are the three pillars of any respectful continual learning research project?
Benchmarks: Machine learning researchers need multiple benchmarks with efficient data handling utils to design and prototype new algorithms. Quantitative results on ever-changing benchmarks have been one of the driving forces of Deep Learning.
Training: Efficient implementation and training of continual learning algorithms; comparisons with other baselines and state-of-the-art methods become fundamental to assess the quality of an original algorithmic proposal.
Evaluation: Training utils and benchmarks are not enough on their own to push continual learning research forward. Comprehensive and sound evaluation protocols and metrics need to be employed as well.
With Avalanche, you can find all these three fundamental pieces together and much more, in a single and coherent, well-maintained codebase.
Let's take a quick tour on how you can use Avalanche for your research projects with a 5-minutes guide, for researchers on the run!
Avalanche is organized in five main modules:
Training: This module provides all the necessary utilities concerning model training. This includes simple and efficient ways of implementing new continual learning strategies, as well as a set of pre-implemented CL baselines and state-of-the-art algorithms you will be able to use for comparison!
Evaluation: This module provides all the utilities and metrics that can help in evaluating a CL algorithm with respect to all the factors we believe to be important for a continually learning system.
In the graphic below, you can see how Avalanche sub-modules are available and organized as well:
We will learn more about each of them during this tutorial series, but keep in mind that the [Avalanche API documentation](https://avalanche-api.continualai.org/en/latest/) is your friend as well!
All right, let's start with the benchmarks module right away 👇
The benchmark module offers three main features:
Datasets: a comprehensive list of PyTorch Datasets ready to use (It includes all the Torchvision Datasets and more!).
Classic Benchmarks: a set of classic Continual Learning Benchmarks ready to be used (there can be multiple benchmarks based on a single dataset).
Generators: a set of functions you can use to generate your own benchmark starting from any PyTorch Dataset!
Datasets can be imported in Avalanche as simply as:
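For example (a hedged sketch; Avalanche re-exports many torchvision datasets plus additional CL datasets, and the exact module path may vary):

```python
from avalanche.benchmarks.datasets import MNIST, CORe50Dataset

mnist_train = MNIST(".", train=True, download=True)
```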
Of course, you can use them as you would use any PyTorch Dataset.
The Avalanche benchmarks (instances of the Scenario class) contain several attributes that describe the benchmark. However, the most important ones are the train and test streams.
In Avalanche we often suppose to have access to these two parallel streams of data (even though some benchmarks may not provide such a feature, but contain just a unique test set).
Each of these streams is an iterable, indexable and sliceable object composed of experiences. Experiences are batches of data (or "tasks") that can be provided with or without a specific task label.
Avalanche maintains a set of commonly used benchmarks built on top of one or multiple datasets.
What if we want to create a new benchmark that is not present in the "Classic" ones? Well, in that case Avalanche offers a number of utilities that you can use to create your own benchmark with maximum flexibility: the benchmark generators!
The specific scenario generators are useful when starting from one or multiple PyTorch datasets and you want to create a "New Instances" or "New Classes" benchmark: i.e. they support the easy and flexible creation of Domain-Incremental, Class-Incremental or Task-Incremental scenarios, among others.
Finally, if your ideal benchmark does not fit well in the aforementioned Domain-Incremental, Class-Incremental or Task-Incremental scenarios, you can always use our generic generators:
filelist_benchmark
paths_benchmark
dataset_benchmark
tensors_benchmark
You can read more about how to use them in the full Benchmarks module tutorial!
The training module in Avalanche is built on modularity and has two main goals:
provide a set of standard continual learning baselines that can be easily run for comparison;
provide the necessary utilities to implement and run your own strategy in the most efficient and simple way possible thanks to the building blocks we already prepared for you.
If you want to compare your strategy with other classic continual learning algorithms or baselines, in Avalanche this is as simple as creating an object:
The simplest way to build your own strategy is to create a Python class that implements the main train and eval methods.
Let's define our Continual Learning algorithm "MyStrategy" as a simple python class:
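A bare-bones sketch (not the original cell) of what such a class could look like, using plain PyTorch loops over each experience's data:

```python
import torch
from torch.utils.data import DataLoader


class MyStrategy:
    """A minimal do-it-yourself continual learning strategy."""

    def __init__(self, model, optimizer, criterion):
        self.model = model
        self.optimizer = optimizer
        self.criterion = criterion

    def train(self, experience):
        self.model.train()
        dataloader = DataLoader(experience.dataset, batch_size=128, shuffle=True)
        for epoch in range(1):
            for x, y, t in dataloader:  # classification datasets return <x, y, t>
                self.optimizer.zero_grad()
                loss = self.criterion(self.model(x), y)
                loss.backward()
                self.optimizer.step()

    def eval(self, experience):
        self.model.eval()
        dataloader = DataLoader(experience.dataset, batch_size=128)
        correct, total = 0, 0
        with torch.no_grad():
            for x, y, t in dataloader:
                pred = self.model(x).argmax(dim=1)
                correct += (pred == y).sum().item()
                total += y.shape[0]
        print(f"Accuracy: {correct / total:.4f}")
```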
Then, we can use our strategy as we would do for the pre-implemented ones:
While this is the easiest possible way to add your own strategy, Avalanche supports more sophisticated modalities (based on callbacks) that let you write neater, more modular and reusable code, inheriting functionality from parent classes and using pre-implemented plugins.
Check out more details about what Avalanche can offer in this module following the "Training" chapter of the "From Zero to Hero" tutorial!
The evaluation module is quite straightforward: it offers all the basic functionalities to evaluate and keep track of a continual learning experiment.
This is mostly done through the Metrics and the Loggers. The Metrics provide a set of classes which implement the main continual learning metrics, like Accuracy, Forgetting, Memory Usage, Running Times, etc.
Metrics should be created via the utility functions (e.g. accuracy_metrics, timing_metrics and others), specifying in the arguments when those metrics should be computed (after each minibatch, epoch, experience, etc.).
The Loggers specify a way to report the metrics (e.g. with Tensorboard, on console or others). Loggers are created by instantiating the respective class.
Metrics and loggers interact via the Evaluation Plugin: this is the main object responsible for tracking the experiment progress. Metrics and loggers are directly passed to the EvaluationPlugin instance. You will see the output of the loggers automatically during training and evaluation! Let's see how to put this together in a few lines of code:
For more details about the evaluation module (how to write new metrics/loggers, a deeper tutorial on metrics) check out the extended guide in the "Evaluation" chapter of the "From Zero to Hero" Avalanche tutorial!
You've learned how to install Avalanche, how to create benchmarks that can suit your needs, how you can create your own continual learning algorithm and how you can evaluate its performance.
Here we show how you can use all these modules together to design your experiments as quantitative supporting evidence for your research project or paper.
Avalanche provides several components that help you to balance data loading and implement rehearsal strategies.
Dataloaders are used to provide balancing between groups (e.g. tasks/classes/experiences). This is especially useful when you have unbalanced data.
Buffers are used to store data from the previous experiences. They are dynamic datasets with a fixed maximum size, and they can be updated with new data continuously.
Finally, Replay strategies implement rehearsal by using Avalanche's plugin system. Most rehearsal strategies use a custom dataloader to balance the buffer with the current experience and a buffer that is updated for each experience.
First, let's install Avalanche. You can skip this step if you have installed it already.
Avalanche dataloaders are simple iterators, located under avalanche.benchmarks.utils.data_loader. Their interface is equivalent to PyTorch's dataloaders. For example, GroupBalancedDataLoader takes a sequence of datasets and iterates over them by providing balanced mini-batches, where the number of samples is split equally among groups. Internally, it instantiates a DataLoader for each separate group. More specialized dataloaders exist, such as TaskBalancedDataLoader.
All the dataloaders accept keyword arguments (**kwargs) that are passed directly to the dataloaders for each group.
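A hedged sketch, using the datasets of a SplitMNIST benchmark as groups:

```python
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.benchmarks.utils.data_loader import GroupBalancedDataLoader

benchmark = SplitMNIST(n_experiences=5)
datasets = [experience.dataset for experience in benchmark.train_stream]

# batch_size is split equally among the 5 groups (2 samples per group here)
dl = GroupBalancedDataLoader(datasets, batch_size=10)
for x, y, t in dl:
    print("task labels in the mini-batch:", t.tolist())
    break
```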
Memory buffers store data up to a maximum capacity, and they implement policies to select which data to store and which to remove when the buffer is full. They are available in the module avalanche.training.storage_policy. The base class is the ExemplarsBuffer, which implements two methods:
update(strategy) - given the strategy's state, it updates the buffer (using the data in strategy.experience.dataset).
resize(strategy, new_size) - updates the maximum size and updates the buffer accordingly.
The data can be accessed using the attribute buffer.
At first, the buffer is empty. We can update it with data from a new experience.
Notice that we use a SimpleNamespace because we want to use the buffer standalone, without instantiating an Avalanche strategy. Reservoir sampling requires only the experience from the strategy's state.
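A sketch of this standalone usage (reusing the benchmark defined above; class and attribute names follow the v0.3-style storage_policy module):

```python
from types import SimpleNamespace
from avalanche.training.storage_policy import ReservoirSamplingBuffer

storage_policy = ReservoirSamplingBuffer(max_size=30)
print("Initial buffer size:", len(storage_policy.buffer))

for experience in benchmark.train_stream:
    # fake the strategy state: reservoir sampling only needs `experience`
    fake_strategy_state = SimpleNamespace(experience=experience)
    storage_policy.update(fake_strategy_state)
    print("Buffer size after update:", len(storage_policy.buffer))
```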
Notice that after each update some samples are substituted with new data. Reservoir sampling selects these samples randomly.
Avalanche offers many more storage policies. For example, ParametricBuffer is a buffer split into several groups according to the groupby parameter (None, 'class', 'task', 'experience'), and according to an optional ExemplarsSelectionStrategy (random selection is the default choice).
The advantage of using grouping buffers is that you get a balanced rehearsal buffer. You can even access the groups separately with the buffer_groups attribute. Combined with balanced dataloaders, you can ensure that the mini-batches stay balanced during training.
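For example (a hedged sketch; RandomExemplarsSelectionStrategy is assumed to live next to ParametricBuffer in the storage_policy module):

```python
from types import SimpleNamespace
from avalanche.training.storage_policy import ParametricBuffer, RandomExemplarsSelectionStrategy

storage_policy = ParametricBuffer(
    max_size=30,
    groupby="class",
    selection_strategy=RandomExemplarsSelectionStrategy(),
)

for experience in benchmark.train_stream:
    storage_policy.update(SimpleNamespace(experience=experience))
    # one balanced sub-buffer per class
    print({k: len(v.buffer) for k, v in storage_policy.buffer_groups.items()})
```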
Avalanche's strategy plugins can be used to update the rehearsal buffer and set the dataloader. This makes it easy to implement replay strategies:
And of course, we can use the plugin to train our continual model:
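For example, we can plug the custom plugin from the previous cell, or equivalently Avalanche's built-in ReplayPlugin, into a Naive strategy (a hedged sketch with v0.3-style imports):

```python
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from avalanche.models import SimpleMLP
from avalanche.training.plugins import ReplayPlugin
from avalanche.training.supervised import Naive

model = SimpleMLP(num_classes=10)
optimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)

strategy = Naive(
    model, optimizer, CrossEntropyLoss(),
    train_mb_size=128, train_epochs=1,
    plugins=[ReplayPlugin(mem_size=200)],
)

for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```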
Baselines and Strategies Code Examples
Avalanche offers significant support for training (with templates, strategies and plug-ins). Here you can find a list of examples related to training and some of the strategies available in Avalanche (each strategy reproduces the original paper results in the dedicated reproducibility repository):
: this example shows how to take a stream of experiences and train simultaneously on all of them. This is useful to implement the "offline" or "multi-task" upper bound.
: this is a simple example on how to use the AR1 strategy.
: how to define your own cumulative strategy based on the different Data Loaders made available in Avalanche.
: this example shows how to use early stopping to dynamically stop the training procedure when the model converged instead of training for a fixed number of epochs.
: this example shows how to run object detection/segmentation tasks.
: this example shows how to run object detection/segmentation tasks with a toy benchmark based on the LVIS dataset.
: set of examples showing how you can use Avalanche for distributed training of object detectors.
: this example shows how to create a stream of pre-trained models from which to learn.
: this is a simple example on how to implement generative replay in Avalanche.
: example to run a naive strategy in an online setting.
: sequence classification example using torchaudio and Speech Commands.
Dealing with AvalancheDatasets
The AvalancheDataset is an implementation of the PyTorch Dataset class that comes with many useful out-of-the-box functionalities. For most users, the AvalancheDataset can be used as a plain PyTorch Dataset. For classification problems, AvalancheDatasets return x, y, t elements (input, target, task label). However, the AvalancheDataset can be easily extended for any custom need.
A series of Mini How-Tos will guide you through the functionalities of the AvalancheDataset and its subclasses:
Make it Custom, Make it Yours
Having learned how to use all the Avalanche main features, you may end up willing to customize the framework a little to suit your eagerness for continually better functionalities (as a true continual learner would indeed do! ⚡).
Hence, now is the time to get your hands dirty! 🙌
Take your time to explore the API documentation in great detail. We made sure everything is well documented (even if improvable), but try to take a look at the code as well to resolve any uncertainties (and of course, if you have any questions, don't hesitate to ask).
You can start by .
We suggest delving into the code using an appropriate IDE, such as . This will help you navigate the code better, with tons of cool discovery features. Once you have a clear understanding of the entire codebase (or at least the module you'd like to extend/customize), you can start making changes.
If you think your changes may be interesting for the rest of the Continual Learning community, why not contribute back to Avalanche? You can learn how to do it in the next chapter.
You can run this chapter and play with it on Google Colaboratory:
Benchmarks and Datasets Code Examples
Avalanche offers significant support for defining your own benchmarks (instantiation of one scenario with one or multiple datasets) or using "classic" benchmarks already consolidated in the literature.
You can find examples related to the benchmarks here:
: in this simple example we show all the different ways you can use MNIST with Avalanche.
: training and evaluating on the CLEAR benchmark (RGB images)
: training and evaluating on the CLEAR benchmark (with pre-trained features)
: about the utils you can use to create a detection benchmark.
: this example makes use of the Endless-Continual-Learning-Simulator's derived dataset scenario.
: in this example we show a simple way to use the ctrl benchmark.
: how to use Hugging Face models and datasets within Avalanche for Natural Language Processing.
Save and load checkpoints
The ability to save and resume experiments may be very useful when running long experiments. Avalanche offers a checkpointing functionality that can be used to save and restore your strategy including plugins, metrics, and loggers.
This guide will show how to plug the checkpointing functionality into the usual Avalanche main script. This only requires minor changes in the main: no changes to the strategy/plugins/... code are required! Also, make sure to check the example in the repository for a ready-to-go template.
Resuming a continual learning experiment is not the same as resuming a classic deep learning training session. In classic training setups, the elements needed to resume an experiment are limited to i) the model weights, ii) the optimizer state, and iii) additional info such as the number of epochs/iterations so far. On the contrary, continual learning experiments need far more info to be correctly resumed:
The state of plugins, such as:
the examples saved in the replay buffer
the importance of model weights (EwC, Synaptic Intelligence)
a copy of the model (LwF)
... and many others, which are specific to each technique!
The state of metrics, as some are computed on the performance measured on previous experiences:
AMCA (Average Mean Class Accuracy) metric
Forgetting metric
To handle all these elements, we opted to provide an easy-to-use plugin: the CheckpointPlugin. It will take care of loading:
Strategy, including the model
Plugins
Metrics
Loggers: this includes re-opening the logs for TensorBoard, Weights & Biases, ...
State of all random number generators
In continual learning experiments, this affects the choice of replay examples and other critical elements. This is usually not needed in classic deep learning, but here it may be useful!
Here, in a couple of cells, we'll show you how to use it. Remember that you can follow this guide by running it as a notebook (see below for a direct link to load it on Colab).
Let's install Avalanche:
And let us import the needed elements:
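A possible set of imports for this guide, covering the components used in the following cells (module paths are assumptions that may differ slightly across Avalanche versions):

```python
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import accuracy_metrics, loss_metrics
from avalanche.logging import InteractiveLogger
from avalanche.models import SimpleMLP, as_multitask
from avalanche.training.determinism.rng_manager import RNGManager
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.plugins.checkpoint import (
    CheckpointPlugin, FileSystemCheckpointStorage)
from avalanche.training.supervised import Naive
```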
Let's proceed by defining a very vanilla Avalanche main script. Simply put, this usually comes down to defining:
The configuration (seeds, etcetera)
The benchmark
The model, optimizer, and loss function
Evaluation components
The list of metrics to track
The loggers
The evaluation plugin (that glues the metrics and loggers together)
The training plugins
The strategy
The train-eval loop
They do not have to be in this particular order, but this is the order followed in this guide.
To enable checkpointing, the following changes are needed:
In the very first part of the code, fix the seeds for reproducibility
The RNGManager class is used, which may be useful even in experiments in which checkpointing is not needed ;)
Instantiate the checkpointing plugin
Check if a checkpoint exists and load it
Only if not resuming from a checkpoint: create the Evaluation components, the plugins, and the strategy
Change the train/eval loop to start from the experience
Let's start with the first change: defining a fixed seed. This is needed to correctly re-create the benchmark object and should be the same seed used to create the checkpoint.
The RNGManager takes care of setting the seed for the following generators: Python random, NumPy, and PyTorch (both CPU and device-specific generators). In this way, you can be sure that any randomness-dependent elements in the benchmark creation procedure are identical across save/resume operations.
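In code, this boils down to a single call, using the RNGManager imported above (the seed value here is just an example; pick your own, but keep it fixed across save/resume runs):

```python
RNG_SEED = 1234  # must be the same across save/resume runs

# Fix Python, NumPy and PyTorch random number generators in one call
RNGManager.set_random_seeds(RNG_SEED)
```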
Let's then proceed with the usual Avalanche code. Note: nothing to change here to enable checkpointing. Here we create a SplitMNIST benchmark and instantiate a multi-task MLP model. Notice that checkpointing works fine with multi-task models wrapped using as_multitask
.
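A possible version of this cell, continuing from the imports above (SimpleMLP and the 'classifier' attribute name are reasonable defaults, not the only option):

```python
benchmark = SplitMNIST(n_experiences=5, return_task_id=True, seed=RNG_SEED)

# A multi-task MLP: `as_multitask` wraps the model and replaces its
# 'classifier' layer with a task-aware classifier.
model = as_multitask(SimpleMLP(input_size=28 * 28), 'classifier')
```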
It's now time to instantiate the checkpointing plugin and load the checkpoint.
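A sketch of this step, continuing from the cells above (the directory name is just an example; parameter names follow the checkpointing example in the Avalanche repository and may differ slightly across versions):

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

checkpoint_plugin = CheckpointPlugin(
    FileSystemCheckpointStorage(
        directory="./checkpoints/experiment_1"  # one directory per experiment/run!
    ),
    map_location=device)

# Load the checkpoint for this experiment, if one exists
strategy, initial_exp = checkpoint_plugin.load_checkpoint_if_exists()
```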
Please notice the arguments passed to the CheckpointPlugin constructor:
The first parameter is a storage object. We decided to allow the checkpointing plugin to load checkpoints from arbitrary storages. The simplest storage, FileSystemCheckpointStorage
, will use a given directory to store the file for the current experiment (do not point multiple experiments/runs to the same directory!). However, we plan to expand this in the future to support network/cloud storages. Contributions on this are welcome :-)! Remember that the CheckpointStorage
interface is quite simple to implement in a way that best fits your needs.
The device used for training. This functionality may be particularly useful in some cases: the plugin will take care of loading the checkpoint on the correct device, even if the checkpoint was created on a CUDA device with a different ID. This means that it can also be used to resume a CUDA checkpoint on CPU. The only caveat is that it cannot be used to load a CPU checkpoint to CUDA. In brief: CUDA -> CPU (OK), CUDA:0 -> CUDA:1 (OK), CPU -> CUDA (NO!). This will also take care of updating the device field of the strategy (and plugins) to point to the current device.
The next change is in fact quite minimal. It only requires wrapping the creation of plugins, metrics, and loggers in an "if" that checks if a checkpoint was actually loaded. If a checkpoint is loaded, the resumed strategy already contains the properly restored plugins, metrics, and loggers: it would be an error to create them.
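Continuing from the previous cells, the guarded creation may look like this (the metrics and strategy shown here are placeholders for your own choices):

```python
if strategy is None:
    # No checkpoint was loaded: create metrics, loggers, plugins, and strategy
    evaluator = EvaluationPlugin(
        accuracy_metrics(experience=True, stream=True),
        loss_metrics(minibatch=True),
        loggers=[InteractiveLogger()])

    strategy = Naive(
        model,
        SGD(model.parameters(), lr=0.001),
        CrossEntropyLoss(),
        train_mb_size=128, train_epochs=1, eval_mb_size=128,
        device=device,
        plugins=[checkpoint_plugin],  # remember to add the checkpointing plugin!
        evaluator=evaluator)
```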
Final change: adapt the for loop so that the training stream is iterated starting from initial_exp
. This variable was created when loading the checkpoint and it indicates the next training experience to run. If no checkpoint was found, its value will be 0.
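Concretely, the adapted loop may look like this, continuing from the cells above (exit_early is just a placeholder flag, discussed right below):

```python
exit_early = False  # placeholder: set it (e.g. from a signal handler) to test resuming

for experience in benchmark.train_stream[initial_exp:]:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)  # a checkpoint is saved after each eval
    if exit_early:
        exit(0)
```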
A new checkpoint is stored at the end of each eval phase! If the program is interrupted before that point, all progress made since the previous eval phase is lost.
Here exit_early
is a simple placeholder that you can use to experiment a bit. You may obtain a similar effect by stopping this notebook manually, restarting the kernel, and re-running all cells. You will notice that the last checkpoint will be loaded and training will resume as expected.
Usually, exit_early
should be implemented as a mechanism able to gracefully stop the process. When using SLURM or other schedulers (or even when terminating processes using Ctrl-C), you can catch termination signals and manage them properly so that the process exits after the next eval phase. However, don't worry if the process is killed abruptly: the last checkpoint will be loaded correctly once the experiment is restarted by the scheduler.
A variation of the standard Dataset
exists in PyTorch: the . When using an IterableDataset
, one can load the data points in a sequential way only (by using a tape-like approach). The dataset[idx]
syntax and len(dataset)
function are not allowed. Avalanche does NOT support IterableDataset
s. You shouldn't worry about this because, realistically, you will never encounter such datasets (at least in torchvision). If you need IterableDataset
support, let us know and we will consider adding it.
Most of the time, you can also use one of the utility functions in that add attributes such as class and task labels to the dataset. For example, you can create a classification dataset using make_classification_dataset
.
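A minimal sketch, wrapping torchvision's MNIST (make_classification_dataset and its task_labels argument are assumed to follow the current benchmarks.utils API):

```python
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor

from avalanche.benchmarks.utils import make_classification_dataset

mnist = MNIST('./data', train=True, download=True, transform=ToTensor())

# Wrap the PyTorch dataset and attach a constant task label
avl_mnist = make_classification_dataset(mnist, task_labels=0)

x, y, t = avl_mnist[0]  # input, target, task label
print(y, t)
```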
Avalanche provides some to sample in a task-balanced way or to balance the replay buffer and current data, but you can also use the standard PyTorch DataLoader
.
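As an illustration, here is a sketch that mixes "current" data with a toy replay buffer (ReplayDataLoader and classification_subset are assumed to be available as in recent Avalanche versions; avl_mnist comes from the sketch above):

```python
from avalanche.benchmarks.utils import classification_subset
from avalanche.benchmarks.utils.data_loader import ReplayDataLoader

current_data = avl_mnist                                            # "current experience" data
buffer_data = classification_subset(avl_mnist, indices=list(range(100)))  # toy replay buffer

# Each mini-batch mixes current data and buffer data
dataloader = ReplayDataLoader(
    current_data, buffer_data,
    batch_size=32, oversample_small_tasks=True, shuffle=True)

for x, y, t in dataloader:
    break  # your training step would go here
```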
AvalancheDataset
implements a very rich and powerful set of functionalities for managing transformations. You can learn more about it in the .
Please refer to the for a complete list. It is recommended to start with the "Creating AvalancheDatasets" Mini How-To.
You can run this chapter and play with it on Google Colaboratory by clicking here:
and #avalanche-dev channel (optional but recommended)
Make a (PR).
Use code formatting for a consistent coding style, which also handles line lengths (the 88 columns limit) automatically.
Apart from the code, you can also contribute to the Avalanche documentation 📚! We use to write the documentation, so both code and text can be smoothly inserted, and, as you may have noticed, all our documentation can be run on !
You can run this chapter and play with it on Google Colaboratory:
Avalanche provides Continual Learning algorithms (strategies). We are continuously expanding the library with new algorithms and making sure they can reproduce seminal paper results in the sibling project .
and .
The cell below shows a simplified excerpt from the . First, a PixelsPermutation instance is created. That instance is a transformation that will permute the pixels of the input image. We then create the train and test sets. Once created, transformations for those datasets are frozen using .freeze_transforms()
.
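A simplified sketch of that excerpt (here a plain Lambda transform stands in for the PixelsPermutation class, and make_classification_dataset is assumed to accept a transform argument, as in recent Avalanche versions):

```python
import torch
from torchvision.datasets import MNIST
from torchvision.transforms import Compose, Lambda, ToTensor

from avalanche.benchmarks.utils import make_classification_dataset

# A fixed pixel permutation (stand-in for PixelsPermutation)
permutation = torch.randperm(28 * 28)
permute = Lambda(lambda img: img.view(-1)[permutation].view(1, 28, 28))
transform = Compose([ToTensor(), permute])

train_set = make_classification_dataset(
    MNIST('./data', train=True, download=True),
    transform=transform, task_labels=0)
test_set = make_classification_dataset(
    MNIST('./data', train=False, download=True),
    transform=transform, task_labels=0)

# Freeze the transforms so that later replacements cannot remove the permutation
train_set = train_set.freeze_transforms()
test_set = test_set.freeze_transforms()
```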
Other Mini How-Tos will guide you through the other functionalities offered by the AvalancheDataset class. The list of Mini How-Tos can be found .
You can run this chapter and play with it on Google Colaboratory by clicking here:
Let's first install Avalanche. Please check out our guide for further details.
Benchmarks: This module maintains a uniform API for data handling: mostly generating a stream of data from one or more datasets. It contains all the major CL benchmarks (similar to what has been done for ).
Models: In this module you'll be able to find several model architectures and pre-trained models that can be used for your continual learning experiment (similar to what has been done in ).
Logging: It includes advanced logging and plotting features, including native stdout, file and support (How cool is it to have a complete, interactive dashboard, tracking your experiment metrics in real-time with a single line of code?)
You can run this chapter and play with it on Google Colaboratory:
Note that those changes are all properly annotated in the example, which is the recommended template to follow when enabling checkpointing in a training script.
That's it for the checkpointing functionality! This is a relatively new mechanism and feedback on it is warmly welcomed in our in the repository!
You can run this guide and play with it on Google Colaboratory by clicking here:
Frequently Asked Questions
In this page we answer frequently asked questions about the library. We know these to be mostly pain points we need to address as soon as possible in the form of better features or better documentation.
How can I create a stream of experiences based on my own data?
You can use the Benchmark Generators: these utils in Avalanche allow you to build a stream of experiences from an AvalancheDataset (or a plain PyTorch Dataset), or directly from PyTorch tensors, paths or filelists. A minimal sketch is shown below.
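For instance, a stream can be built directly from tensors (tensors_benchmark and its arguments are assumed to follow the current benchmarks.generators API; the data here is random and purely illustrative):

```python
import torch
from avalanche.benchmarks.generators import tensors_benchmark

# Toy data: two training experiences and one shared test set
x0, y0 = torch.randn(100, 3, 32, 32), torch.randint(0, 10, (100,))
x1, y1 = torch.randn(100, 3, 32, 32), torch.randint(0, 10, (100,))
x_t, y_t = torch.randn(50, 3, 32, 32), torch.randint(0, 10, (50,))

benchmark = tensors_benchmark(
    train_tensors=[(x0, y0), (x1, y1)],
    test_tensors=[(x_t, y_t)],
    task_labels=[0, 0],            # one task label per training experience
    complete_test_set_only=True)   # a single test set shared by all experiences

for experience in benchmark.train_stream:
    print(experience.current_experience, len(experience.dataset))
```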
Why some Avalanche strategies do not work on my dataset?
We cannot guarantee each strategy implemented in Avalanche will work in any possible setting. A continual learning algorithm implementation is accepted in Avalanche if it can reproduce at least a portion of the original paper results. In the CL-Baseline project we make sure reproducibility is maintained for these implementations with every main Avalanche release.
Protocols and Metrics Code Examples
Avalanche offers significant support for defining your own evaluation protocol (classic or custom metrics, when and on what to test). You can find examples related to evaluation here:
Eval Plugin: this is a simple example on how to use the Evaluation Plugin (the evaluation controller object)
Standalone Metrics: how to use metrics as standalone objects.
Confusion Matrix: this example shows how to produce a confusion matrix during training and evaluation.
Dataset Inspection: this is a simple example on how to use the Dataset inspection plugins.
Mean Score: example usage of the mean_score helper to show the scores of the true class, averaged by new and old classes.
Task Metrics: this is a simple example on how to use the Evaluation Plugin with metrics returning values for different tasks.
Examples for the Loggers module offered in Avalanche
Avalanche offers concrete support for using standard loggers like CSV files, TensorBoard, etc., or even for defining your own loggers. You can find examples related to the loggers here:
TensorBoard logger: this is a simple example that shows how to use the TensorBoard Logger.
WandB logger: this is a simple example that shows how to use the WandB Logger.
Understand the Avalanche Package Structure
Welcome to the "Introduction" tutorial of the "From Zero to Hero" series. We will start our journey by taking a quick look at the Avalanche main modules to understand its general architecture.
As hinted in the getting started introduction, Avalanche is organized into five main modules:
Benchmarks
: This module maintains a uniform API for data handling: mostly generating a stream of data from one or more datasets. It contains all the major CL benchmarks (similar to what has been done for torchvision).
Training
: This module provides all the necessary utilities concerning model training. This includes simple and efficient ways of implementing new continual learning strategies as well as a set of pre-implemented CL baselines and state-of-the-art algorithms you will be able to use for comparison!
Evaluation
: This module provides all the utilities and metrics that can help evaluate a CL algorithm with respect to all the factors we believe to be important for a continually learning system. It also includes advanced logging and plotting features, including native TensorBoard support.
Models
: In this module you'll find several model architectures and pre-trained models that can be used for your continual learning experiment (similar to what has been done in torchvision.models). Furthermore, we provide everything you need to implement architectural strategies, task-aware models, and dynamic model expansion.
Logging
: It includes advanced logging and plotting features, including native stdout, file and TensorBoard support (How cool is it to have a complete, interactive dashboard, tracking your experiment metrics in real-time with a single line of code?)
In this series of tutorials, you'll get the chance to learn in-depth all the features offered by each module and sub-module of Avalanche, how to put them together and how to master Avalanche, for a stress-free continual learning prototyping experience!
In the following tutorials we will assume you have already installed Avalanche on your computer or server. If you haven't yet, check out how you can do it following our How to Install guide.
Installing Avalanche has Never Been so Simple
Avalanche has been designed for extreme portability and usability. Indeed, it can be run on every OS and native Python environment. 💻🍎🐧
You can install Avalanche with pip:
This will install the core version of Avalanche, without extra packages (e.g., object detection support, reinforcement learning support). To install all the extra packages run:
You can also install specific extra packages by specifying the appropriate code name within the square brackets. This is the complete list of options:
Avalanche will raise an error if you need a missing extra package and will suggest the appropriate package to install.
Note that in some alternatives to bash, like zsh, you may need to enclose `avalanche-lib[code]` in quotation marks ( " " ), since square brackets are used as special characters.
Warning: by installing the [all] and [extra] versions, the PyTorch version may be limited to <2.* due to the dependencies of those additional packages.
If you want, you can install Avalanche directly from the master branch (latest version) in a single command. Make sure to have PyTorch already installed in your environment, then execute:
To update Avalanche to the latest version, uninstall the package with pip uninstall avalanche-lib
and then run the pip install command again.
To help us expand and improve Avalanche, you can install Avalanche in a fresh environment with the command
pip install -e ".[dev]"
This will install in editable mode, so that you can develop and modify the installed Avalanche package. It will also install the "extra" dev dependencies necessary to run tests and build the documentation.
You can run this chapter and play with it on Google Colaboratory:
For a Swift and Effective Contribution
If you are here it means you are considering contributing to Avalanche. It is thanks to people like you that we are making Avalanche a reality! 😍
In order to contribute to this awesome framework, we recommend going through the "From Zero to Hero" Avalanche Tutorial:
In this tutorial you'll learn Avalanche in-depth and learn how to extend it and contribute back to the community! In particular, be sure to read the "Contribute to Avalanche" chapter:
At the moment, we don't have a lot of rules for contributing or a strict code of conduct, please enjoy this freedom with a grain of salt! 😁
We are all ears!
Avalanche is a tool from the continual learning research community and for the continual learning research community. We try to keep the design of Avalanche as open, collaborative and inclusive as possible. This is why we are always keen to hear your feedback about Avalanche! Join us directly on slack (#avalanche channel) for quick feedback or write a post on GitHub Discussions!
Happiness is only Real when Shared
Do you want to make Avalanche more suitable for your own research project? Or maybe you just want to learn more about it and sharpen your coding skills in this area?
No matter the reasons, we are always looking for new members that can help us improve Avalanche and make it a better tool for everyone!
Building something great together 👪 is fun and fulfilling 🎈. By joining our team, you will also join a family of mentors and friends with whom you can collaborate, have fun and ultimately achieve more in this area.
No matter your research or coding expertise level, we believe everyone has their own strengths that can help us build a wonderful tool; passion and time are the fundamental ingredients.
So, don't hesitate to contact our team to learn more about how you can help. Do it now! 😊
All the People that Made Avalanche Great
The Project is maintained mostly by ContinualAI Lab members, with the core mission of supporting the production, organization and dissemination of original research on CL with technical research, open source projects and tools that can make the life of a CL researcher easier.
Antonio Carta (Lead Maintainer)
Lorenzo Pellegrini (Maintainer)
Andrea Cossu (Maintainer)
Gabriele Graffieti (Maintainer)
Hamed Hemati (Maintainer)
Vincenzo Lomonaco (Project Manager)
Avalanche is a large community effort. It is only fair to list here all the people who made it a great tool that anyone can use without any restrictions at all!
Tyler Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Adrian Popescu, Andreas Tolias, Fabio Cuzzolin, Simone Scardapane, Simone Calderara, Subutai Amhad, Luca Antiga, Christopher Kanan, Joost van de Weijer, Tinne Tuytelaars, Davide Bacciu, German I. Parisi, Razvan Pascanu, Davide Maltoni ...see the full list on GitHub!
Avalanche is a great tool also thanks to its many users. Here we list some research groups using Avalanche for their continual learning research:
ContinualAI Lab (PI: Vincenzo Lomonaco)
Pervasive AI Lab (PI: Davide Bacciu)
BioLab (PI: Davide Maltoni, University of Bologna)
Computational Intelligence & Machine Learning Group (PI: Alessio Micheli, University of Pisa)
Italian Association for Machine Learning (President: Simone Scardapane, Sapienza University)
AIforPeople (President: Marta Ziosi, University of Oxford)
Learning and Machine Perception Team (PI: Joost van de Weijer)
Tinne Tuytelaars’ group (PI: Tinne Tuytelaars)
Machine and Neuromorphic Perception Laboratory (PI: Christopher Kanan)
LASTI Lab (PI: Adrian Popescu)
Visual Artificial Intelligence Laboratory (PI: Fabio Cuzzolin)
Eugenio Culurciello’s group (PI: Eugenio Culurciello)
If you want to contact us don't hesitate to send an email to vincenzo.lomonaco@continualai.org
, contact@continualai.org
, or you can join us on slack and chat with us all! 😃
Help us Design Avalanche of the Future
Do you think an important feature is missing in Avalanche? You are in the right place!
We try to keep the design of Avalanche as open, collaborative and inclusive as possible. This means discussing Avalanche issues, development and future ideas openly through general ContinualAI project meetups, its slack channel, GitHub and the forum.
If you'd like to add a new feature to Avalanche please let us know, so we can work on it, or team up with you to make it happen! 😄
Feature requests can be opened in the appropriate GitHub Discussions Feature-Request section. Vote for your preferred features and we will try to implement the most voted ones first!
Powered by ContinualAI
Avalanche is an End-to-End Continual Learning Library based on PyTorch, born within ContinualAI with the goal of providing a shared and collaborative open-source (MIT licensed) codebase for fast prototyping, training and reproducible evaluation of continual learning algorithms.
Looking for continual learning baselines? In the CL-Baseline sibling project based on Avalanche, we reproduce seminal paper results that you can directly use in your experiments!
Avalanche can help Continual Learning researchers and practitioners in several ways:
Write less code, prototype faster & reduce errors
Improve reproducibility, modularity and reusability
Increase code efficiency, scalability & portability
Augment impact and usability of your research products
The library is organized in five main modules:
Benchmarks
: This module maintains a uniform API for data handling: mostly generating a stream of data from one or more datasets. It contains all the major CL benchmarks (similar to what has been done for torchvision).
Training
: This module provides all the necessary utilities concerning model training. This includes simple and efficient ways of implementing new continual learning strategies as well as a set of pre-implemented CL baselines and state-of-the-art algorithms you will be able to use for comparison!
Evaluation
: This module provides all the utilities and metrics that can help evaluate a CL algorithm with respect to all the factors we believe to be important for a continually learning system.
Models
: In this module you'll be able to find several model architectures and pre-trained models that can be used for your continual learning experiment (similar to what has been done in torchvision.models).
Logging
: It includes advanced logging and plotting features, including native stdout, file and TensorBoard support (How cool is it to have a complete, interactive dashboard, tracking your experiment metrics in real-time with a single line of code?)
Avalanche is the first experiment of an end-to-end library for reproducible continual learning research & development, where you can find benchmarks, algorithms, evaluation metrics and much more in the same place.
Let's make it together 👫 a wonderful ride! 🎈
Check out how your code changes when you start using Avalanche! 👇
We know that learning a new tool may be tough at first. This is why we made Avalanche as easy as possible to learn with a set of resources that will help you along the way.
For example, you may start with our 5-minutes guide that will let you acquire the basics about Avalanche and how you can use it in your research project:
We have also prepared for you a large set of examples & snippets you can plug-in directly into your code and play with:
Having completed these two sections, you will already feel like you have superpowers ⚡. This is why we have also created an in-depth tutorial that will cover all the aspects of Avalanche in detail and make you a true Continual Learner! 👨🎓️
If you use Avalanche in your research project, please remember to cite our JMLR-MLOSS paper https://jmlr.org/papers/v24/23-0130.html. This will help us make Avalanche better known in the machine learning community, ultimately making it a better tool for everyone:
You can also cite the previous CLVision @ CVPR2021 workshop paper: "Avalanche: an End-to-End Library for Continual Learning".
Avalanche is the flagship open-source collaborative project of ContinualAI: a non-profit research organization and the largest open community on Continual Learning for AI.
Do you have a question, do you want to report an issue or simply ask for a new feature? Check out the Questions & Issues center. Do you want to improve Avalanche yourself? Follow these simple rules on How to Contribute.
The Avalanche project is maintained by the collaborative research team ContinualAI Lab and used extensively by the Units of the ContinualAI Research (CLAIR) consortium, a research network of the major continual learning stakeholders around the world.
We are always looking for new awesome members willing to join the ContinualAI Lab, so check out our official website if you want to learn more about us and our activities, or contact us.
Learn more about the Avalanche team and all the people who made it great!
To get Answers of Life, Ask Questions
We know that learning a new tool may be tough at times. This is why we are here to help you 🙏
Don't be afraid to ask questions, there are no stupid questions and we will always answer you.
However, in order to help you, we need you to help us first. First of all, if the question is more of a code issue, please use the page. For general questions, ideas, and discussions, use .
If instead this is a quick question about Avalanche or a request for support, you can ask us directly (#avalanche channel). In any case, please make sure to follow the steps below:
Clarify your information needs.
Formulate them coherently.
Check if the same question or a related one can be found.
Ask your question.
Then, we will try to answer as swiftly as possible! 🤗
Help us Find Bugs in Avalanche
If you encounter a problem in Avalanche, please do not give up on us and help us fix it as soon as possible. This first of all means reporting it. We are grateful to all the people who took the time to report an issue or even fix it with a Pull Request.
Check the current Avalanche issues or submit a new one here:
Please try to use the appropriate tags and explain your issue with a simple code snippet that reproduces it, following the bug report template.