Loggers
Logging... logging everywhere! 🔮
Welcome to the "Logging" tutorial of the "From Zero to Hero" series. In this part we will present the functionalities offered by the Avalanche logging module.
!pip install git+https://github.com/ContinualAI/avalanche.git

📑 The Logging Module

In the previous tutorial we learned how to evaluate a continual learning algorithm in Avalanche through the different metrics that can be used off-the-shelf via the Evaluation Plugin or stand-alone. However, computing metrics and collecting results may not be enough at times.
While running complex experiments with long waiting times, logging results over time is fundamental to "babysit" your experiments in real time, or even to understand what went wrong in the aftermath.
This is why in Avalanche we decided to put a strong emphasis on logging and provide a number of loggers that can be used with any set of metrics!

Loggers

Avalanche at the moment supports four main Loggers:
  • InteractiveLogger: This logger provides a nice progress bar and displays real-time metrics results in an interactive way (meant for stdout).
  • TextLogger: This logger, mostly intended for file logging, is the plain text version of the InteractiveLogger. Keep in mind that it may be very verbose.
  • TensorboardLogger: It logs all the metrics on Tensorboard in real-time. Perfect for real-time plotting.
  • WandBLogger: It leverages Weights and Biases tools to log metrics and results on a dashboard. It requires a W&B account.
In order to keep track of when each metric value has been logged, we leverage two global counters: one for the training phase and one for the evaluation phase. You can see the global counter value reported on the x axis of the logged plots.
Each global counter is an ever-increasing value that starts from 0 and is incremented by one each time a training/evaluation iteration is performed (i.e. after each training/evaluation minibatch). The global counters are updated automatically by the strategy.
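As an illustrative sketch (plain Python, not the Avalanche API): since the training counter ticks once per minibatch, after E epochs of M minibatches each it reads E * M.

```python
# Illustrative sketch (not the Avalanche API): the global training counter
# starts at 0 and ticks once per training minibatch, across epochs alike.
counter = 0
n_epochs, n_minibatches = 2, 3
for epoch in range(n_epochs):
    for mb in range(n_minibatches):
        counter += 1  # one tick per minibatch
print(counter)  # 6 = n_epochs * n_minibatches
```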

How to use loggers

from torch.optim import SGD
from torch.nn import CrossEntropyLoss
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import forgetting_metrics, \
    accuracy_metrics, loss_metrics, timing_metrics, cpu_usage_metrics, \
    confusion_matrix_metrics, disk_usage_metrics
from avalanche.models import SimpleMLP
from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger, WandBLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.strategies import Naive

benchmark = SplitMNIST(n_experiences=5, return_task_id=False)

# MODEL CREATION
model = SimpleMLP(num_classes=benchmark.n_classes)

# DEFINE THE EVALUATION PLUGIN and LOGGERS
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics, collects their results and returns
# them to the strategy it is attached to.

loggers = []

# log to Tensorboard
loggers.append(TensorboardLogger())

# log to text file
loggers.append(TextLogger(open('log.txt', 'a')))

# print to stdout
loggers.append(InteractiveLogger())

# W&B logger - comment this if you don't have a W&B account
loggers.append(WandBLogger(project_name="avalanche", run_name="test"))

eval_plugin = EvaluationPlugin(
    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    timing_metrics(epoch=True, epoch_running=True),
    cpu_usage_metrics(experience=True),
    forgetting_metrics(experience=True, stream=True),
    confusion_matrix_metrics(num_classes=benchmark.n_classes, save_image=True,
                             stream=True),
    disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loggers=loggers,
    benchmark=benchmark
)

# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
    model, SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
    evaluator=eval_plugin)

# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in benchmark.train_stream:
    # train returns a dictionary which contains all the metric values
    res = cl_strategy.train(experience)
    print('Training completed')

    print('Computing accuracy on the whole test set')
    # eval also returns a dictionary which contains all the metric values
    results.append(cl_strategy.eval(benchmark.test_stream))
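The dictionaries returned by train() and eval() map metric names to their latest values. The exact key strings depend on the metrics, phases and stream names, so the dictionary below is a hypothetical stand-in used only to show the access pattern; inspect results[-1].keys() in your own run.

```python
# Hypothetical stand-in for a dict returned by cl_strategy.eval(); real keys
# encode metric, phase and stream, so inspect them rather than hard-coding.
last_eval = {
    'Top1_Acc_Stream/eval_phase/test_stream': 0.87,
    'Loss_Stream/eval_phase/test_stream': 0.41,
}
for name, value in sorted(last_eval.items()):
    print(f'{name}: {value:.3f}')
```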
# need to manually call W&B run end since we are in a notebook
import wandb
wandb.finish()

Create your Logger

If the available loggers are not sufficient to suit your needs, you can always implement a custom logger by specializing the behaviors of the StrategyLogger base class.
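As a minimal sketch of the idea: a real custom logger should subclass StrategyLogger and override its metric-emission hook (in releases where this tutorial applies, TextLogger overrides a log_single_metric(name, value, x_plot) method; check the API of your installed version). To keep the example self-contained and runnable, a plain class mimics that shape here.

```python
# Sketch only: a real custom logger should subclass
# avalanche.logging.StrategyLogger; this plain class only mimics the assumed
# log_single_metric(name, value, x_plot) hook so the example runs stand-alone.
class InMemoryLogger:
    """Buffers every metric emission instead of printing it."""

    def __init__(self):
        self.history = []  # (metric_name, value, global_counter) tuples

    def log_single_metric(self, name, value, x_plot):
        # x_plot is the global counter value described above
        self.history.append((name, value, x_plot))

logger = InMemoryLogger()
logger.log_single_metric('Top1_Acc_MB/train_phase', 0.91, 10)
print(logger.history)  # [('Top1_Acc_MB/train_phase', 0.91, 10)]
```

Buffering metrics in memory like this is handy for unit-testing a strategy without parsing stdout.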
This completes the "Logging" tutorial for the "From Zero to Hero" series. We hope you enjoyed it!

🤝 Run it on Google Colab

You can run this chapter and play with it on Google Colaboratory.