Logging... logging everywhere! 🔮
Welcome to the "Logging" tutorial of the "From Zero to Hero" series. In this part we will present the logging functionalities offered by Avalanche.
!pip install avalanche-lib==0.1.0
In the previous tutorial we learned how to evaluate a continual learning algorithm in Avalanche through different metrics that can be used off-the-shelf via the Evaluation Plugin or stand-alone. However, computing metrics and collecting results may not be enough at times.
While running complex experiments with long waiting times, logging results over time is fundamental to "babysit" your experiments in real time, or to understand afterwards what went wrong.
This is why in Avalanche we decided to put a strong emphasis on logging and provide a number of loggers that can be used with any set of metrics!
Avalanche at the moment supports four main Loggers:
- InteractiveLogger: This logger provides a nice progress bar and displays real-time metric results in an interactive way (meant for standard output, e.g. when running experiments from a terminal or notebook).
- TextLogger: This logger, mostly intended for file logging, is the plain-text version of the InteractiveLogger. Keep in mind that it may be very verbose.
- TensorboardLogger: This logger writes metric results as TensorBoard event files, so they can be visualized with TensorBoard.
- WandBLogger: This logger sends metric results to Weights & Biases (W&B); it requires a W&B account.
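For instance, the TensorboardLogger can be pointed at a custom output directory and its event files can then be opened with TensorBoard. A minimal sketch (the tb_log_dir value below is just an example; check the constructor signature in your Avalanche version):
from avalanche.logging import TensorboardLogger

# write TensorBoard event files under ./tb_data;
# view them by running: tensorboard --logdir tb_data
tb_logger = TensorboardLogger(tb_log_dir='tb_data')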
In order to keep track of when each metric value has been logged, we leverage two global counters, one for the training phase and one for the evaluation phase. You can see the global counter value reported on the x axis of the logged plots. Each global counter is an ever-increasing value which starts from 0 and is increased by one each time a training/evaluation iteration is performed (i.e. after each training/evaluation minibatch). The global counters are updated automatically by the strategy.
from torch.optim import SGD
from torch.nn import CrossEntropyLoss
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import forgetting_metrics, \
    accuracy_metrics, loss_metrics, timing_metrics, cpu_usage_metrics, \
    disk_usage_metrics
from avalanche.models import SimpleMLP
from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger, WandBLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.strategies import Naive
benchmark = SplitMNIST(n_experiences=5, return_task_id=False)
# MODEL CREATION
model = SimpleMLP(num_classes=benchmark.n_classes)
# DEFINE THE EVALUATION PLUGIN and LOGGERS
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics, collects their results and returns
# them to the strategy it is attached to.
loggers = []
# log to Tensorboard
loggers.append(TensorboardLogger())
# log to text file
loggers.append(TextLogger(open('log.txt', 'a')))
# print to stdout
loggers.append(InteractiveLogger())
# W&B logger - comment this if you don't have a W&B account
loggers.append(WandBLogger(project_name="avalanche", run_name="test"))
eval_plugin = EvaluationPlugin(
    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    forgetting_metrics(experience=True, stream=True),
    disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loggers=loggers
)
# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
    model, SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
    evaluator=eval_plugin)
# TRAINING LOOP
results = []
for experience in benchmark.train_stream:
    # train returns a dictionary which contains all the metric values
    res = cl_strategy.train(experience)
    print('Computing accuracy on the whole test set')
    # test also returns a dictionary which contains all the metric values
    results.append(cl_strategy.eval(benchmark.test_stream))

# need to manually call W&B run end since we are in a notebook
import wandb
wandb.finish()
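The dictionaries returned by train() and eval() map each metric name to the corresponding value, so you can inspect them directly. A minimal sketch (the exact metric key names depend on the metrics you configured and on your Avalanche version):
# print the metric names and values collected after the last evaluation
for metric_name, metric_value in results[-1].items():
    print(metric_name, '=', metric_value)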
If the available loggers are not sufficient to suit your needs, you can always implement a custom logger by specializing the behaviors of the StrategyLogger base class.
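As a rough sketch, a custom logger can override the hook that receives each metric value together with the global counter used as x coordinate (the log_single_metric name and its arguments below are our reading of the 0.1.0 API; double-check them against your installed version):
from avalanche.logging import StrategyLogger

class CSVLogger(StrategyLogger):
    """Hypothetical logger that appends every metric value to a CSV file."""

    def __init__(self, filename='metrics.csv'):
        super().__init__()
        self.file = open(filename, 'a')

    def log_single_metric(self, name, value, x_plot):
        # name: metric name, value: metric value,
        # x_plot: global counter used as x coordinate
        self.file.write(f'{name},{x_plot},{value}\n')
        self.file.flush()
Once defined, such a logger can be appended to the loggers list exactly like the built-in ones.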
This completes the "Logging" tutorial for the "From Zero to Hero" series. We hope you enjoyed it!