Welcome to the "Putting All Together" tutorial of the "From Zero to Hero" series. In this part we will summarize the major Avalanche features and how you can put them together for your continual learning experiments.
!pip install avalanche-lib==0.3.0
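Before moving on, it can help to verify the install from Python. A quick sanity check (assuming the pinned version above installed correctly):

import avalanche
# Should print "0.3.0" if the pinned install above succeeded
print(avalanche.__version__)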
🛴 A Comprehensive Example
Here we report a complete example of Avalanche usage:
from torch.optim import SGD
from torch.nn import CrossEntropyLoss
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import forgetting_metrics, accuracy_metrics, \
    loss_metrics, timing_metrics, cpu_usage_metrics, confusion_matrix_metrics, \
    disk_usage_metrics
from avalanche.models import SimpleMLP
from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.supervised import Naive

scenario = SplitMNIST(n_experiences=5)

# MODEL CREATION
model = SimpleMLP(num_classes=scenario.n_classes)

# DEFINE THE EVALUATION PLUGIN and LOGGERS
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics, collects their results and returns
# them to the strategy it is attached to.

# log to Tensorboard
tb_logger = TensorboardLogger()

# log to text file
text_logger = TextLogger(open('log.txt', 'a'))

# print to stdout
interactive_logger = InteractiveLogger()

eval_plugin = EvaluationPlugin(
    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    timing_metrics(epoch=True, epoch_running=True),
    forgetting_metrics(experience=True, stream=True),
    cpu_usage_metrics(experience=True),
    confusion_matrix_metrics(num_classes=scenario.n_classes, save_image=False,
                             stream=True),
    disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loggers=[interactive_logger, text_logger, tb_logger]
)

# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
    model, SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
    evaluator=eval_plugin)

# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in scenario.train_stream:
    print("Start of experience: ", experience.current_experience)
    print("Current Classes: ", experience.classes_in_this_experience)

    # train returns a dictionary which contains all the metric values
    res = cl_strategy.train(experience)
    print('Training completed')

    print('Computing accuracy on the whole test set')
    # eval also returns a dictionary which contains all the metric values
    results.append(cl_strategy.eval(scenario.test_stream))
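Since both train and eval return a dictionary of metric values, you can inspect the collected results programmatically once the loop finishes. A minimal sketch of how you might do this (the exact key strings depend on the metrics you configured and on the Avalanche version, so the commented key below is illustrative only):

# Each entry of `results` holds the metrics computed during one
# evaluation pass over the whole test stream.
for i, res in enumerate(results):
    print(f"--- Metrics after training on experience {i} ---")
    for metric_name, metric_value in res.items():
        print(f"{metric_name}: {metric_value}")

# You can also look up a single metric by its key; the exact key string
# depends on the configured metrics (this one is a hypothetical example):
# final_acc = results[-1]['Top1_Acc_Stream/eval_phase/test_stream/Task000']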
🤝 Run it on Google Colab
You can run this chapter and play with it on Google Colaboratory: