Understand the Avalanche Package Structure
Welcome to the "Introduction" tutorial of the "From Zero to Hero" series. We will start our journey by taking a quick look at the Avalanche main modules to understand its general architecture.
As hinted in the getting started introduction, Avalanche is organized into five main modules:
Benchmarks
: This module maintains a uniform API for data handling: mostly generating a stream of data from one or more datasets. It contains all the major CL benchmarks (similar to what has been done for torchvision).
Training
: This module provides all the necessary utilities concerning model training. This includes simple and efficient ways of implementing new continual learning strategies, as well as a set of pre-implemented CL baselines and state-of-the-art algorithms you will be able to use for comparison!
Evaluation
: This module provides all the utilities and metrics that can help evaluate a CL algorithm with respect to all the factors we believe to be important for a continually learning system. It also includes advanced logging and plotting features, including native TensorBoard support.
Models
: In this module you'll find several model architectures and pre-trained models that can be used for your continual learning experiment (similar to what has been done in torchvision.models). Furthermore, we provide everything you need to implement architectural strategies, task-aware models, and dynamic model expansion.
Logging
: It includes advanced logging and plotting features, including native stdout, file and TensorBoard support (how cool is it to have a complete, interactive dashboard tracking your experiment metrics in real time with a single line of code?).
In this series of tutorials, you'll get the chance to learn in-depth all the features offered by each module and sub-module of Avalanche, how to put them together and how to master Avalanche, for a stress-free continual learning prototyping experience!
In the following tutorials we will assume you have already installed Avalanche on your computer or server. If you haven't yet, check out how you can do it following our How to Install guide.
Installing Avalanche has Never Been so Simple
Avalanche has been designed for extreme portability and usability. Indeed, it can be run on every OS and native Python environment. 💻🍎🐧
You can install Avalanche with pip:
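```bash
# installs the latest Avalanche release (core package) from PyPI
pip install avalanche-lib
```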
This will install the core version of Avalanche, without extra packages (e.g., object detection support, reinforcement learning support). To install all the extra packages run:
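For example, recent releases expose an all extra that installs every optional dependency (the extra name is an assumption, check the install guide for your version):

```bash
pip install avalanche-lib[all]
```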
You can also install specific extra packages by specifying the appropriate code name within the square brackets. This is the complete list of options:
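For instance, the extras mentioned above (object detection and reinforcement learning support) might be installed as follows; the code names detection and rl are assumptions, so check the official list for the exact names:

```bash
# extra names below are illustrative -- verify them against the official list
pip install avalanche-lib[detection]   # object detection support
pip install avalanche-lib[rl]          # reinforcement learning support
```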
Avalanche will raise an error if you use a functionality that requires a missing extra package, and it will suggest the appropriate package to install.
Note that in some alternatives to bash, like zsh, you may need to enclose `avalanche-lib[code]` in quotation marks (" "), since square brackets are used as special characters.
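For example, in zsh (using the all extra from above):

```bash
pip install "avalanche-lib[all]"
```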
If you want, you can install Avalanche directly from the master branch (latest version) in a single command. Make sure to have PyTorch already installed in your environment, then execute:
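A command of the following form should work (assuming the repository is hosted at ContinualAI/avalanche on GitHub, as referenced elsewhere in the docs):

```bash
pip install git+https://github.com/ContinualAI/avalanche.git
```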
To update Avalanche to the latest version, uninstall the package with `pip uninstall avalanche-lib` and then run the `pip install` command again.
We suggest you use the pip package, but if you need some recent features you may want to install directly from the master branch. In general, the master branch is well tested and safe to use. However, the API of new features may change more frequently or break backward compatibility. Reproducibility is also easier if you use the pip package.
On Linux, alternatively, you can simply run the `install_environment.sh` script in the Avalanche home directory. The script takes two arguments: `--python` and `--cuda_version`. Check `--help` for details.
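A hypothetical invocation (the version values below are placeholders):

```bash
bash ./install_environment.sh --python 3.10 --cuda_version 11.8
```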
You can test your installation by running the `examples/test_install.py` script. Make sure to include Avalanche in your `$PYTHONPATH` if you are running examples with the command line interface.
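For example, from the root of the repository:

```bash
PYTHONPATH=. python examples/test_install.py
```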
If you want to expand Avalanche and help us improve it (see the "From Zero to Hero" tutorial), we suggest creating an environment in developer mode as follows (just a couple more dependencies will be installed).
Assuming you have Anaconda (or Miniconda) installed on your system, you can follow these simple steps:
Install the `avalanche-dev-env` environment and activate it.
Install PyTorch + TorchVision (follow the instructions on the website to use conda).
Update the conda environment.
These three steps can be accomplished with the following lines of code:
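A sketch of those steps (the Python version, PyTorch install command and environment file name are assumptions; follow the official instructions for your setup):

```bash
# 1. create and activate the development environment
conda create -n avalanche-dev-env python=3.10 -c conda-forge
conda activate avalanche-dev-env

# 2. install PyTorch + TorchVision with conda (use the selector on pytorch.org)
conda install pytorch torchvision -c pytorch

# 3. update the environment with Avalanche's development dependencies
#    (the environment file name is an assumption -- check the repository)
conda env update --file environment.yml
```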
On Linux, alternatively, you can simply run the `install_environment_dev.sh` script in the Avalanche home directory. The script takes two arguments: `--python` and `--cuda_version`. Check `--help` for details.
You can test your installation by running the `examples/test_install.py` script. Make sure to include Avalanche in your `$PYTHONPATH` if you are running examples with the command line interface.
That's it. Now we have Avalanche up and running and we can start contributing to it!
You can run this chapter and play with it on Google Colaboratory:
Avalanche Features: Benchmarks, Strategies & Metrics
Avalanche is a framework in constant development. Thanks to the support of the community and its active members, we plan to extend its features and improve its usability based on the demands of our research community! At the moment, Avalanche is in Beta (v0.3.1). We support a large number of Benchmarks, Strategies and Metrics, which makes it, we believe, the best tool out there for your continual learning research! 💪
You can find the full list of available features in the documentation.
Do you think we are missing some important features? Please let us know! We deeply value your feedback!
Avalanche supports all the most popular computer vision datasets used in Continual Learning. Some of them are available in Torchvision, while others have been integrated by us. Most datasets can be automatically downloaded by Avalanche.
Toy datasets: MNIST, Fashion MNIST, KMNIST, EMNIST, QMNIST.
CIFAR: CIFAR10, CIFAR100.
ImageNet: TinyImagenet, MiniImagenet, Imagenet.
Others: EndlessCLDataset, CUB200, OpenLORIS, Stream-51, INATURALIST2018, Omniglot, CLEARImage, ...
All the major continual learning benchmarks are available and ready to use. Benchmarks split the datasets and create the train and test streams:
MNIST: SplitMNIST, RotatedMNIST, PermutedMNIST, SplitFashionMNIST.
CIFAR: SplitCIFAR10, SplitCIFAR100, SplitCIFAR110.
CORe50: all the CORe50 scenarios are supported.
Others: SplitCUB200, CLStream51, CLEAR.
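As a minimal sketch of how a benchmark is used (classic benchmarks such as SplitMNIST live in avalanche.benchmarks.classic; the arguments shown are illustrative):

```python
from avalanche.benchmarks.classic import SplitMNIST

# a class-incremental benchmark with 5 experiences (2 classes each)
benchmark = SplitMNIST(n_experiences=5, seed=1)

for experience in benchmark.train_stream:
    print("Experience", experience.current_experience,
          "contains classes", experience.classes_in_this_experience)
```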
Avalanche also offers a wide range of continual learning strategies, from simple baselines to state-of-the-art algorithms:
Baselines: Naive, JointTraining, Cumulative.
Rehearsal: Replay with reservoir sampling and balanced buffers, GSS greedy, CoPE, Generative Replay.
Regularization: EWC, LwF, GEM, AGEM, CWR*, Synaptic Intelligence, MAS.
Architectural: Progressive Neural Networks, multi-head, incremental classifier.
Others: GDumb, iCaRL, AR1, Streaming LDA, LFL.
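As a rough sketch of how a strategy is trained and evaluated (in recent versions supervised strategies live in avalanche.training.supervised; module paths and constructor arguments may differ across versions):

```python
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive

benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=benchmark.n_classes)

# Naive simply fine-tunes the model on each experience in sequence
strategy = Naive(
    model,
    SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(),
    train_mb_size=128, train_epochs=1, eval_mb_size=128,
)

for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```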
Avalanche uses and extends PyTorch `nn.Module` to define continual learning models:
Support for `nn.Module`s and `torchvision` models.
Dynamic output heads for class-incremental scenarios and multi-head classifiers for task-incremental scenarios.
Support for architectural strategies and dynamically expanding models such as progressive neural networks.
Avalanche provides continuous evaluation of CL strategies with a large set of Metrics. They are collected and logged automatically by the strategy during the training and evaluation loops.
Standard Performance Metrics: accuracy, loss, confusion matrix (averaged over streams or experiences).
CL-Metrics: backward/forward transfer, forgetting.
Computational Resources: CPU and RAM usage, MAC, execution times.
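A minimal sketch of how these metrics are usually configured, through an EvaluationPlugin attached to a strategy (the specific metric helpers and arguments are illustrative):

```python
from avalanche.evaluation.metrics import (
    accuracy_metrics, forgetting_metrics, loss_metrics,
)
from avalanche.logging import InteractiveLogger
from avalanche.training.plugins import EvaluationPlugin

eval_plugin = EvaluationPlugin(
    accuracy_metrics(experience=True, stream=True),
    loss_metrics(epoch=True),
    forgetting_metrics(experience=True),
    loggers=[InteractiveLogger()],
)
# pass it to a strategy with `evaluator=eval_plugin`; metrics are then
# collected and logged automatically during train() and eval()
```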
Avalanche provides Continual Learning algorithms (strategies). We are continuously expanding the library with new algorithms and making sure they can reproduce the results of seminal papers in the sibling project CL-Baselines.
Powered by ContinualAI
Avalanche is an End-to-End Continual Learning Library based on PyTorch, born within ContinualAI with the goal of providing a shared and collaborative open-source (MIT licensed) codebase for fast prototyping, training and reproducible evaluation of continual learning algorithms.
Looking for continual learning baselines? In the CL-Baseline sibling project based on Avalanche we reproduce the results of seminal papers, which you can directly use in your experiments!
Avalanche can help Continual Learning researchers and practitioners in several ways:
Write less code, prototype faster & reduce errors
Improve reproducibility, modularity and reusability
Increase code efficiency, scalability & portability
Augment impact and usability of your research products
The library is organized in five main modules:
Benchmarks
: This module maintains a uniform API for data handling: mostly generating a stream of data from one or more datasets. It contains all the major CL benchmarks (similar to what has been done for torchvision).
Training
: This module provides all the necessary utilities concerning model training. This includes simple and efficient ways of implementing new continual learning strategies, as well as a set of pre-implemented CL baselines and state-of-the-art algorithms you will be able to use for comparison!
Evaluation
: This module provides all the utilities and metrics that can help evaluate a CL algorithm with respect to all the factors we believe to be important for a continually learning system.
Models
: In this module you'll be able to find several model architectures and pre-trained models that can be used for your continual learning experiment (similar to what has been done in torchvision.models).
Logging
: It includes advanced logging and plotting features, including native stdout, file and TensorBoard support (how cool is it to have a complete, interactive dashboard tracking your experiment metrics in real time with a single line of code?).
Avalanche is the first experiment of an end-to-end library for reproducible continual learning research & development, where you can find benchmarks, algorithms, evaluation metrics and much more in the same place.
Let's make it together 👫 a wonderful ride! 🎈
Check out how your code changes when you start using Avalanche! 👇
We know that learning a new tool may be tough at first. This is why we made Avalanche as easy as possible to learn with a set of resources that will help you along the way.
For example, you may start with our 5-minute guide, which will let you acquire the basics about Avalanche and how you can use it in your research project:
We have also prepared for you a large set of examples & snippets you can plug directly into your code and play with:
Having completed these two sections, you will already feel like you have superpowers ⚡. This is why we have also created an in-depth tutorial that will cover all the aspects of Avalanche in detail and make you a true Continual Learner! 👨🎓️
If you used Avalanche in your research project, please remember to cite our reference paper "Avalanche: an End-to-End Library for Continual Learning". This will help us make Avalanche better known in the machine learning community, ultimately making it a better tool for everyone:
Avalanche is the flagship open-source collaborative project of ContinualAI: a non-profit research organization and the largest open community on Continual Learning for AI.
Do you have a question, do you want to report an issue or simply ask for a new feature? Check out the Questions & Issues center. Do you want to improve Avalanche yourself? Follow these simple rules on How to Contribute.
The Avalanche project is maintained by the collaborative research team ContinualAI Lab and used extensively by the Units of the ContinualAI Research (CLAIR) consortium, a research network of the major continual learning stakeholders around the world.
We are always looking for new awesome members willing to join the ContinualAI Lab, so check out our official website if you want to learn more about us and our activities, or contact us.
Learn more about the Avalanche team and all the people who made it great!
A Brief Introduction to Avalanche
Avalanche was born within ContinualAI with a clear goal in mind:
Pushing Continual Learning to the next level, providing a shared and collaborative library for fast prototyping, training and reproducible evaluation of continual learning algorithms.
As a powerful avalanche, a Continual Learning agent incrementally improves its knowledge and skills over time, building upon the previously acquired ones and learning how to interact with the external world.
We hope Avalanche may trigger the same positive reinforcement loop within our community, moving towards a more collaborative and inclusive way of doing research and helping us tackle bigger problems, faster and better, but together! 👪
Avalanche has several advantages:
Shared & Coherent Codebase: Aren't you tired of re-inventing the wheel in continual learning? We are. Reproducing paper results has always been daunting in machine learning, and it is even more so in continual learning. Avalanche lets you stop re-writing your (and other people's) code all over again with a coherent and shared codebase that already provides all the utilities, benchmarks, metrics and baselines you may need for your next great continual learning research project!
Errors Reduction: The more code we write, the more bugs we introduce in our code. This is the rule, not the exception. Avalanche lets you focus on what really matters: defining your CL solution. Everything else, from benchmark preparation to training, evaluation and comparison with other methods, will already be there for you. This, in turn, massively reduces the amount of errors introduced and the time needed to debug your code.
Faster Prototyping: As researchers or data scientists, we have dozens of ideas every day, and time is always too short to execute them. However, if we think about it, most of the time spent in bringing our ideas to life is consumed by installing software, preparing and cleaning our data, preparing the experiments' code infrastructure and so on. Avalanche lets you focus just on the original algorithmic proposal, taking care of most of the rest!
Improved Reproducibility & Portability: One of the great features of Avalanche is the possibility of reproducing experimental results easily and on any OS. Researchers can simply plug their algorithm into the codebase and see how it fares with respect to other researchers' methods. Their algorithm, in turn, is used as a baseline for other methods, creating a virtuous circle. This is only possible thanks to the simple, yet powerful idea of providing shared benchmarks, training and evaluation in a single place.
Improved Modularity: Avalanche has been designed with modularity in mind. As you learn more about Avalanche, you will realize we have sometimes foregone simplicity in favor of modularity and reusability (we hate code replication as much as you do 🤪). However, we believe this will help us scale in the near future as we collaboratively bring this codebase to maturity.
Increased Efficiency & Scalability: Full-stack researchers & data scientists know this, making your algorithm memory and computationally efficient is tough. Avalanche is already optimized for you, so that you can run your ImageNet continual learning experiment on your 8GB laptop (buy a cooling fan 💨) or even try it on embedded devices of your latest product!
But most of all, Avalanche can help us standardize our field and work better together, more collaboratively, towards our shared goal of making machines learn over time like humans do.
Avalanche is the first experiment of an end-to-end library for reproducible continual learning research, where you can find benchmarks, algorithms, evaluation utilities and much more in the same place.
Let's make it together 👫 a wonderful ride! 🎈
First things first: let's start with a good model!
Welcome to the "Models" tutorial of the "From Zero to Hero" series. In this notebook we will talk about the features offered by the models
Avalanche sub-module.
Every continual learning experiment needs a model to train incrementally. You can use any `torch.nn.Module`, even pretrained models. The `models` sub-module provides the most commonly used architectures in the CL literature.
You can use any model provided in the official PyTorch ecosystem, as well as the ones provided by pytorchcv!
A continual learning model may change over time. As an example, a classifier may add new units for previously unseen classes, while progressive networks add a new set of units after each experience. Avalanche provides `DynamicModule`s to support these use cases. `DynamicModule`s are `torch.nn.Module`s that provide an additional method, `adaptation`, that is used to update the model's architecture. The method takes a single argument, the data from the current experience.
For example, an IncrementalClassifier updates the number of output units:
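A minimal sketch of this behavior (the adaptation call has changed across Avalanche versions, so check your installed version for the exact argument):

```python
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import IncrementalClassifier

benchmark = SplitMNIST(n_experiences=5, shuffle=False)
model = IncrementalClassifier(in_features=784)
print(model)  # starts with a small output layer

for experience in benchmark.train_stream:
    # older versions take the experience's dataset; newer ones may take
    # the experience object itself
    model.adaptation(experience.dataset)
    print(model)
```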
As you can see, after each call to the `adaptation` method, the model adds 2 new units to account for the new classes. Notice that no learning occurs at this point since the method only modifies the model's architecture.
Keep in mind that when you use Avalanche strategies you don't have to call the adaptation yourself. Avalanche strategies automatically call the model's adaptation and update the optimizer to include the new parameters.
Some models, such as multi-head classifiers, are designed to exploit task labels. In Avalanche, such models are implemented as `MultiTaskModule`s. These are dynamic models (since they need to be updated whenever they encounter a new task) that have an additional `task_labels` argument in their `forward` method. `task_labels` is a tensor with a task id for each sample.
When you use a `MultiHeadClassifier`, a new head is initialized whenever a new task is encountered. Avalanche strategies automatically recognize multi-task modules and provide task labels to them.
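A small sketch in a task-incremental setting (the return_task_id argument and the adaptation signature are the same kind of version-dependent assumptions as above):

```python
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import MultiHeadClassifier

# a task-incremental benchmark: each experience carries its own task label
benchmark = SplitMNIST(n_experiences=5, return_task_id=True)
model = MultiHeadClassifier(in_features=784)

for experience in benchmark.train_stream:
    model.adaptation(experience.dataset)  # a new head is created for each new task
    print(model)
```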
If you want to define a custom multi-task module, you need to override two methods: `adaptation` (if needed) and `forward_single_task`. The `forward` method of the base class will split the mini-batch by task id and provide single-task mini-batches to `forward_single_task`.
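A minimal sketch of a custom multi-task module, assuming a shared trunk followed by a MultiHeadClassifier (the class name and layer sizes are illustrative, not part of the Avalanche API):

```python
import torch.nn as nn

from avalanche.models import MultiHeadClassifier, MultiTaskModule


class CustomMultiTaskMLP(MultiTaskModule):
    """Hypothetical model: a shared feature extractor plus per-task heads."""

    def __init__(self, in_features=784, hidden_size=512):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_features, hidden_size), nn.ReLU())
        # MultiHeadClassifier already manages one output head per task
        self.classifier = MultiHeadClassifier(hidden_size)

    def forward_single_task(self, x, task_label):
        # called by the base forward() on the sub-batch belonging to one task
        x = self.shared(x)
        return self.classifier.forward_single_task(x, task_label)
```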
Alternatively, if you only want to convert a single-head model into a multi-head model, you can use the `as_multitask` wrapper, which converts the model for you.
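For example, a sketch of wrapping a single-head model (SimpleMLP comes from Avalanche's model zoo; "classifier" is the name of its final layer, so double-check the attribute name for your own models):

```python
from avalanche.models import SimpleMLP, as_multitask

# replace the model's `classifier` layer with a per-task multi-head
single_head = SimpleMLP(num_classes=10)
multi_head = as_multitask(single_head, "classifier")
print(multi_head)
```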
You can run this chapter and play with it on Google Colaboratory: