Avalanche: an End-to-End Library for Continual Learning
Powered by ContinualAI
Avalanche is an End-to-End Continual Learning Library based on PyTorch, born within ContinualAI with the unique goal of providing a shared and collaborative open-source (MIT licensed) codebase for fast prototyping, training and reproducible evaluation of continual learning algorithms.
Avalanche can help Continual Learning researchers and practitioners in several ways:
  • Write less code, prototype faster & reduce errors
  • Improve reproducibility
  • Improve modularity and reusability
  • Increase code efficiency, scalability & portability
  • Augment impact and usability of your research products
The library is organized in five main modules:
  • Benchmarks: This module maintains a uniform API for data handling: mostly generating a stream of data from one or more datasets. It contains all the major CL benchmarks (similar to what has been done for torchvision).
  • Training: This module provides all the necessary utilities for model training. This includes simple and efficient ways of implementing new continual learning strategies, as well as a set of pre-implemented CL baselines and state-of-the-art algorithms you will be able to use for comparison!
  • Evaluation: This module provides all the utilities and metrics that can help evaluate a CL algorithm with respect to all the factors we believe to be important for a continually learning system.
  • Models: In this module you'll be able to find several model architectures and pre-trained models that can be used for your continual learning experiments (similar to what has been done in torchvision.models).
  • Logging: It includes advanced logging and plotting features, including native stdout, file and TensorBoard support (how cool is it to have a complete, interactive dashboard tracking your experiment metrics in real time with a single line of code?); see the sketch right after this list for a minimal example.
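As a quick taste of how these modules fit together, here is a minimal sketch that wires metrics and loggers into a single evaluator. It assumes the loss_metrics helper and the InteractiveLogger/TensorboardLogger classes from avalanche.evaluation.metrics and avalanche.logging; exact names may vary across Avalanche versions:

from avalanche.evaluation.metrics import accuracy_metrics, loss_metrics
from avalanche.logging import InteractiveLogger, TensorboardLogger
from avalanche.training.plugins import EvaluationPlugin

# Collect accuracy and loss at several granularities and stream them
# both to stdout and to a TensorBoard dashboard (class names assumed;
# check the API reference of your installed version).
eval_plugin = EvaluationPlugin(
    accuracy_metrics(epoch=True, experience=True, stream=True),
    loss_metrics(epoch=True, experience=True, stream=True),
    loggers=[InteractiveLogger(), TensorboardLogger()])

Any strategy that receives this eval_plugin as its evaluator (as in the example below) will then emit these metrics automatically.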
Avalanche is the first experiment of an end-to-end library for reproducible continual learning research & development, where you can find benchmarks, algorithms, evaluation metrics and much more in the same place.
Let's make it together 👫 a wonderful ride! 🎈
Check out how your code changes when you start using Avalanche! 👇
With Avalanche

import torch
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import PermutedMNIST
from avalanche.training.plugins import EvaluationPlugin
from avalanche.evaluation.metrics import accuracy_metrics
from avalanche.models import SimpleMLP
from avalanche.training.strategies import Naive

# Config
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Model
model = SimpleMLP(num_classes=10)

# CL Benchmark Creation
perm_mnist = PermutedMNIST(n_experiences=3)
train_stream = perm_mnist.train_stream
test_stream = perm_mnist.test_stream

# Prepare for training & testing
optimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = CrossEntropyLoss()
eval_plugin = EvaluationPlugin(
    accuracy_metrics(minibatch=True, epoch=True, epoch_running=True,
                     experience=True, stream=True))

# Continual learning strategy
cl_strategy = Naive(
    model, optimizer, criterion, train_mb_size=32, train_epochs=2,
    eval_mb_size=32, evaluator=eval_plugin, device=device)

# Train and test loop
results = []
for train_task in train_stream:
    cl_strategy.train(train_task, num_workers=4)
    results.append(cl_strategy.eval(test_stream))
Without Avalanche

import torch
import torch.nn as nn
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from torchvision import transforms
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor, RandomCrop
from torch.utils.data import DataLoader
import numpy as np

# Config
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Model
class SimpleMLP(nn.Module):

    def __init__(self, num_classes=10, input_size=28*28):
        super(SimpleMLP, self).__init__()

        self.features = nn.Sequential(
            nn.Linear(input_size, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(),
        )
        self.classifier = nn.Linear(512, num_classes)
        self._input_size = input_size

    def forward(self, x):
        x = x.contiguous()
        x = x.view(x.size(0), self._input_size)
        x = self.features(x)
        x = self.classifier(x)
        return x

model = SimpleMLP(num_classes=10)
model.to(device)  # move the model to the chosen device

# CL Benchmark Creation
list_train_dataset = []
list_test_dataset = []
rng_permute = np.random.RandomState(0)
train_transform = transforms.Compose([
    RandomCrop(28, padding=4),
    ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
test_transform = transforms.Compose([
    ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

# Permutation transformation
class PixelsPermutation(object):
    def __init__(self, index_permutation):
        self.permutation = index_permutation

    def __call__(self, x):
        return x.view(-1)[self.permutation].view(1, 28, 28)

def get_permutation():
    return torch.from_numpy(rng_permute.permutation(784)).type(torch.int64)

# For every incremental experience
permutations = []
for i in range(3):
    # Choose a random permutation of the pixels in the image
    idx_permute = get_permutation()
    current_perm = PixelsPermutation(idx_permute)
    permutations.append(idx_permute)

    # Add the permutation to the default dataset transformation
    train_transform_list = train_transform.transforms.copy()
    train_transform_list.append(current_perm)
    new_train_transform = transforms.Compose(train_transform_list)

    test_transform_list = test_transform.transforms.copy()
    test_transform_list.append(current_perm)
    new_test_transform = transforms.Compose(test_transform_list)

    # Get the datasets with the constructed transformation
    permuted_train = MNIST(root='./data/mnist', train=True,
                           download=True, transform=new_train_transform)
    permuted_test = MNIST(root='./data/mnist', train=False,
                          download=True, transform=new_test_transform)
    list_train_dataset.append(permuted_train)
    list_test_dataset.append(permuted_test)

# Train
optimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = CrossEntropyLoss()

model.train()  # enable dropout during training
for task_id, train_dataset in enumerate(list_train_dataset):

    train_data_loader = DataLoader(
        train_dataset, num_workers=4, batch_size=32)

    for ep in range(2):
        for train_mb_x, train_mb_y in train_data_loader:
            optimizer.zero_grad()
            train_mb_x = train_mb_x.to(device)
            train_mb_y = train_mb_y.to(device)

            # Forward
            logits = model(train_mb_x)
            # Loss
            loss = criterion(logits, train_mb_y)
            # Backward
            loss.backward()
            # Update
            optimizer.step()

# Test
acc_results = []
model.eval()  # disable dropout at test time
with torch.no_grad():
    for task_id, test_dataset in enumerate(list_test_dataset):

        test_data_loader = DataLoader(
            test_dataset, num_workers=4, batch_size=32)

        correct = 0
        for test_mb_x, test_mb_y in test_data_loader:

            # Move mini-batch data to device
            test_mb_x = test_mb_x.to(device)
            test_mb_y = test_mb_y.to(device)

            # Forward
            test_logits = model(test_mb_x)

            # Compute accuracy
            correct += test_mb_y.eq(test_logits.argmax(dim=1)).sum().item()

        acc_results.append(correct / len(test_dataset))
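Each call to cl_strategy.eval in the "With Avalanche" loop above returns a dictionary mapping metric names to their latest values, so the results list can be inspected directly. A minimal sketch (the exact metric keys depend on the metrics you configured and on the Avalanche version):

# Print every metric gathered after each training experience;
# the metric names are whatever the evaluator produced.
for step, metrics in enumerate(results):
    print(f"--- metrics after training on experience {step} ---")
    for name, value in metrics.items():
        print(f"{name}: {value}")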

🚦 Getting Started

We know that learning a new tool may be tough at first. This is why we made Avalanche as easy as possible to learn with a set of resources that will help you along the way.
For example, you may start with our 5-minute guide, which will let you acquire the basics of Avalanche and how you can use it in your research project:
We have also prepared a large set of examples & snippets you can plug directly into your code and play with:
Having completed these two sections, you will already feel like you have superpowers ⚡, which is why we have also created an in-depth tutorial that covers all the aspects of Avalanche in detail and will make you a true Continual Learner! 👨‍🎓️

📑 Cite Avalanche

If you use Avalanche in your research project, please remember to cite our reference paper "Avalanche: an End-to-End Library for Continual Learning". This will help us make Avalanche better known in the machine learning community, ultimately making it a better tool for everyone:
@InProceedings{lomonaco2021avalanche,
    title={Avalanche: an End-to-End Library for Continual Learning},
    author={Vincenzo Lomonaco and Lorenzo Pellegrini and Andrea Cossu and Antonio Carta and Gabriele Graffieti and Tyler L. Hayes and Matthias De Lange and Marc Masana and Jary Pomponi and Gido van de Ven and Martin Mundt and Qi She and Keiland Cooper and Jeremy Forest and Eden Belouadah and Simone Calderara and German I. Parisi and Fabio Cuzzolin and Andreas Tolias and Simone Scardapane and Luca Antiga and Subutai Ahmad and Adrian Popescu and Christopher Kanan and Joost van de Weijer and Tinne Tuytelaars and Davide Bacciu and Davide Maltoni},
    booktitle={Proceedings of IEEE Conference on Computer Vision and Pattern Recognition},
    series={2nd Continual Learning in Computer Vision Workshop},
    year={2021}
}

๐Ÿ—‚๏ธ Maintained by ContinualAI Lab

Avalanche is the flagship open-source collaborative project of ContinualAI: a non-profit research organization and the largest open community on continual learning for AI.
Do you have a question, want to report an issue, or simply ask for a new feature? Check out the Questions & Issues center. Do you want to improve Avalanche yourself? Follow these simple rules on How to Contribute.
The Avalanche project is maintained by the collaborative research team ContinualAI Lab and used extensively by the Units of the ContinualAI Research (CLAIR) consortium, a research network of the major continual learning stakeholders around the world.
We are always looking for new awesome members willing to join the ContinualAI Lab, so check out our official website if you want to learn more about us and our activities, or contact us.