Evaluation
Automatic Evaluation with Pre-implemented Metrics
Welcome to the "Evaluation" tutorial of the "From Zero to Hero" series. In this part we will present the functionalities offered by the evaluation module.
!pip install git+https://github.com/ContinualAI/avalanche.git

The Evaluation Module

The evaluation module is quite straightforward: it offers all the basic functionalities to evaluate and keep track of a continual learning experiment.
This is mostly done through the Metrics: a set of classes that implement the main continual learning metrics, such as Accuracy, Forgetting, Memory Usage, Running Times, etc. At the moment, Avalanche offers a number of pre-implemented metrics you can use for your own experiments. We made sure to include all the major accuracy-based metrics, but also the ones related to computation and memory.
Each metric comes with a standalone class and a set of plugin classes aimed at emitting metric values at specific moments during training and evaluation.

Standalone metric

As an example, the standalone Accuracy class can be used to monitor the average accuracy over a stream of <input, target> pairs. The class provides an update method to update the current average accuracy, a result method to retrieve the current average accuracy, and a reset method to set the current average accuracy to zero. The call to result does not change the metric state. The Accuracy metric requires the task_labels parameter, which specifies which task is associated with the current patterns. The metric returns a dictionary mapping task labels to accuracy values.
import torch
from avalanche.evaluation.metrics import Accuracy

task_labels = 0  # we will work with a single task
# create an instance of the standalone Accuracy metric
# initial accuracy is 0 for each task
acc_metric = Accuracy()
print("Initial Accuracy: ", acc_metric.result())  # output {}

# two consecutive metric updates
real_y = torch.tensor([1, 2]).long()
predicted_y = torch.tensor([1, 0]).float()
acc_metric.update(real_y, predicted_y, task_labels)
acc = acc_metric.result()
print("Average Accuracy: ", acc)  # output 0.5 on task 0
predicted_y = torch.tensor([1, 2]).float()
acc_metric.update(real_y, predicted_y, task_labels)
acc = acc_metric.result()
print("Average Accuracy: ", acc)  # output 0.75 on task 0

# reset accuracy
acc_metric.reset()
print("After reset: ", acc_metric.result())  # output {}
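The task_labels argument can also be a tensor assigning a task label to each pattern in the minibatch (the same convention used by the plugin metric example later in this tutorial). The following is a minimal sketch, assuming your installed Avalanche version supports per-pattern task labels; the result then contains one accuracy value per task:

# hypothetical example with patterns coming from two different tasks
real_y = torch.tensor([1, 2, 1, 2]).long()
predicted_y = torch.tensor([1, 2, 0, 2]).float()
per_pattern_task_labels = torch.tensor([0, 0, 1, 1])  # one task label per pattern

multi_task_acc = Accuracy()
multi_task_acc.update(real_y, predicted_y, per_pattern_task_labels)
print(multi_task_acc.result())  # e.g. {0: 1.0, 1: 0.5}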

Plugin metric

If you want to integrate the available metrics automatically into the training and evaluation flow, you can use plugin metrics, like EpochAccuracy, which logs the accuracy after each training epoch, or ExperienceAccuracy, which logs the accuracy after each evaluation experience. Each of these metrics emits a curve composed of its values at different points in time (e.g., at different training epochs). To simplify their use, we provide utility functions that create several plugin metrics in one shot. The results of these functions can be passed directly as parameters to the EvaluationPlugin (see below).
We recommend using the helper functions when creating plugin metrics.
from avalanche.evaluation.metrics import accuracy_metrics, \
    loss_metrics, forgetting_metrics, bwt_metrics, \
    confusion_matrix_metrics, cpu_usage_metrics, \
    disk_usage_metrics, gpu_usage_metrics, MAC_metrics, \
    ram_usage_metrics, timing_metrics

# you may pass the result to the EvaluationPlugin
metrics = accuracy_metrics(epoch=True, experience=True)

Evaluation Plugin

The Evaluation Plugin is the object in charge of configuring and controlling the evaluation procedure. This object can be passed to a Strategy as a "special" plugin through the evaluator attribute.
The Evaluation Plugin accepts as inputs the plugin metrics you want to track. In addition, you can add one or more loggers to print the metrics in different ways (on file, on standard output, on Tensorboard...).
It is also recommended to pass the benchmark instance used in the experiment to the Evaluation Plugin. This allows the plugin to check for consistency during metrics computation. For example, the Evaluation Plugin checks that the strategy.eval calls are performed on the same stream or sub-stream; otherwise, the same metric could refer to different portions of the stream. These checks can be configured to raise errors (stopping the computation) or only warnings.
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import forgetting_metrics, \
    accuracy_metrics, loss_metrics, timing_metrics, cpu_usage_metrics, \
    confusion_matrix_metrics, disk_usage_metrics
from avalanche.models import SimpleMLP
from avalanche.logging import InteractiveLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.strategies import Naive

benchmark = SplitMNIST(n_experiences=5)

# MODEL CREATION
model = SimpleMLP(num_classes=benchmark.n_classes)

# DEFINE THE EVALUATION PLUGIN
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics, collects their results and returns
# them to the strategy it is attached to.

eval_plugin = EvaluationPlugin(
    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    timing_metrics(epoch=True),
    forgetting_metrics(experience=True, stream=True),
    cpu_usage_metrics(experience=True),
    confusion_matrix_metrics(num_classes=benchmark.n_classes, save_image=False,
                             stream=True),
    disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loggers=[InteractiveLogger()],
    benchmark=benchmark,
    strict_checks=False
)

# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
    model, SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
    evaluator=eval_plugin)

# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in benchmark.train_stream:
    # train returns a dictionary which contains all the metric values
    res = cl_strategy.train(experience)
    print('Training completed')

    print('Computing accuracy on the whole test set')
    # eval also returns a dictionary which contains all the metric values
    results.append(cl_strategy.eval(benchmark.test_stream))

Implement your own metric

To implement a standalone metric, you have to subclass the Metric class.
from avalanche.evaluation import Metric


# a standalone metric implementation
class MyStandaloneMetric(Metric[float]):
    """
    This metric will return a `float` value
    """
    def __init__(self):
        """
        Initialize your metric here
        """
        super().__init__()
        pass

    def update(self):
        """
        Update metric value here
        """
        pass

    def result(self) -> float:
        """
        Emit the metric result here
        """
        return 0

    def reset(self):
        """
        Reset your metric here
        """
        pass
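As a concrete (hypothetical) illustration of the skeleton above, the following sketch implements a standalone running average loss metric that keeps a sum and a counter. It is not part of Avalanche, just an example of how the four methods fit together:

from avalanche.evaluation import Metric


class MyAverageLoss(Metric[float]):
    """
    Hypothetical standalone metric: running average of the loss
    values passed to `update`.
    """
    def __init__(self):
        super().__init__()
        self._loss_sum = 0.0
        self._count = 0

    def update(self, loss: float, n_samples: int = 1):
        # accumulate the loss of the current minibatch
        self._loss_sum += loss * n_samples
        self._count += n_samples

    def result(self) -> float:
        # return 0 when no update has been performed yet
        if self._count == 0:
            return 0.0
        return self._loss_sum / self._count

    def reset(self):
        self._loss_sum = 0.0
        self._count = 0


# usage: update after each minibatch, read with result(), clear with reset()
avg_loss = MyAverageLoss()
avg_loss.update(0.7, n_samples=32)
avg_loss.update(0.5, n_samples=32)
print(avg_loss.result())  # 0.6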
To implement a plugin metric, you have to subclass the PluginMetric class.
from avalanche.evaluation import PluginMetric
from avalanche.evaluation.metrics import Accuracy
from avalanche.evaluation.metric_results import MetricValue
from avalanche.evaluation.metric_utils import get_metric_name


class MyPluginMetric(PluginMetric[float]):
    """
    This metric will return a `float` value after
    each training epoch
    """

    def __init__(self):
        """
        Initialize the metric
        """
        super().__init__()

        self._accuracy_metric = Accuracy()

    def reset(self) -> None:
        """
        Reset the metric
        """
        self._accuracy_metric.reset()

    def result(self) -> float:
        """
        Emit the result
        """
        return self._accuracy_metric.result()

    def after_training_iteration(self, strategy: 'PluggableStrategy') -> None:
        """
        Update the accuracy metric with the current
        predictions and targets
        """
        # task labels defined for each experience
        task_labels = strategy.experience.task_labels
        if len(task_labels) > 1:
            # task labels defined for each pattern
            task_labels = strategy.mb_task_id
        else:
            task_labels = task_labels[0]

        self._accuracy_metric.update(strategy.mb_output, strategy.mb_y,
                                     task_labels)

    def before_training_epoch(self, strategy: 'PluggableStrategy') -> None:
        """
        Reset the accuracy before the epoch begins
        """
        self.reset()

    def after_training_epoch(self, strategy: 'PluggableStrategy'):
        """
        Emit the result
        """
        return self._package_result(strategy)

    def _package_result(self, strategy):
        """Taken from `GenericPluginMetric`, check that class out!"""
        metric_value = self._accuracy_metric.result()
        add_exp = False
        plot_x_position = strategy.clock.train_iterations

        if isinstance(metric_value, dict):
            metrics = []
            for k, v in metric_value.items():
                metric_name = get_metric_name(
                    self, strategy, add_experience=add_exp, add_task=k)
                metrics.append(MetricValue(self, metric_name, v,
                                           plot_x_position))
            return metrics
        else:
            metric_name = get_metric_name(self, strategy,
                                          add_experience=add_exp,
                                          add_task=True)
            return [MetricValue(self, metric_name, metric_value,
                                plot_x_position)]

    def __str__(self):
        """
        Here you can specify the name of your metric
        """
        return "Top1_Acc_Epoch"
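Once defined, a custom plugin metric can be passed to the EvaluationPlugin just like the pre-implemented ones. A minimal sketch, reusing the benchmark and logger from the strategy example above:

# hypothetical configuration mixing custom and pre-implemented metrics
eval_plugin_custom = EvaluationPlugin(
    MyPluginMetric(),
    accuracy_metrics(experience=True, stream=True),
    loggers=[InteractiveLogger()],
    benchmark=benchmark)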

Accessing metric values

If you want to access all the metrics computed during training and evaluation, make sure that collect_all=True is set when creating the EvaluationPlugin (it is True by default). This option maintains an updated version of all metric results in the plugin, which can be retrieved by calling evaluation_plugin.get_all_metrics(). You can call this method whenever you need the metrics.
The result is a dictionary with full metric names as keys and a tuple of two lists as values. The first list stores all the x values recorded for that metric; each x value represents the time step at which the corresponding metric value was computed. The second list stores the metric values associated with the corresponding x values.
eval_plugin2 = EvaluationPlugin(
    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    forgetting_metrics(experience=True, stream=True),
    timing_metrics(epoch=True),
    cpu_usage_metrics(experience=True),
    confusion_matrix_metrics(num_classes=benchmark.n_classes, save_image=False,
                             stream=True),
    disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    collect_all=True,  # this is the default value anyway
    loggers=[InteractiveLogger()],
    benchmark=benchmark
)

# since no training and evaluation has been performed, this will return an empty dict.
metric_dict = eval_plugin2.get_all_metrics()
print(metric_dict)
d = eval_plugin.get_all_metrics()
d['Top1_Acc_Epoch/train_phase/train_stream/Task000']
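As described above, each entry is a tuple of two lists: the x values (the steps at which the metric was recorded) and the corresponding metric values. A minimal sketch of how an entry could be unpacked, assuming the key exists after at least one training epoch:

steps, values = d['Top1_Acc_Epoch/train_phase/train_stream/Task000']
for x, v in zip(steps, values):
    print(f"step {x}: accuracy {v:.4f}")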
Alternatively, the train and eval methods of every strategy return a dictionary storing, for each metric, the last value recorded for that metric. You can use these dictionaries to incrementally accumulate metrics.
print(res)
print(results[-1])
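For example, you could accumulate a single metric across experiences by reading it from the dictionaries returned by eval. A minimal sketch, using a hypothetical key for the stream accuracy on the test stream (the exact keys depend on the metrics and streams you configured, as shown in the printouts above):

# hypothetical key: stream accuracy computed on the test stream of task 0
key = 'Top1_Acc_Stream/eval_phase/test_stream/Task000'
stream_accuracies = [r[key] for r in results if key in r]
print(stream_accuracies)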
This completes the "Evaluation" tutorial for the "From Zero to Hero" series. We hope you enjoyed it!

Run it on Google Colab

You can run this chapter and play with it on Google Colaboratory.