The `training` module in Avalanche is designed with modularity in mind. It includes three main components: templates, plugins, and strategies.
For example, the `BaseSGDTemplate` contains all the methods needed to implement strategies based on SGD.
Each strategy expects a `torch.optim.Optimizer` already initialized on your model's parameters.
The main methods are `train` and `eval`. Both of them accept either a single experience (`Experience`) or a list of them, for maximum flexibility.
For example, you can directly pass the whole `train_stream` provided by the scenario.
Strategies also accept a list of `plugins` that will be executed during the training/evaluation loops.
Most Avalanche strategies are implemented as plugins, which you attach to a base strategy by adding them to its `plugins` list (e.g. the `Naive` strategy). Therefore, the easiest way to define a custom strategy, such as a regularization or replay strategy, is to define it as a custom plugin. The advantage of plugins is that they can be combined, as long as they are compatible, i.e. they do not modify the same part of the state. The disadvantage is that in order to do so you need to understand the strategy loop, which can be a bit complex at first.
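The callback mechanism behind plugins can be sketched in plain Python. The class and method names below are illustrative stand-ins, not Avalanche's real API; the point is only that the strategy owns a list of plugins and triggers the matching callback on each of them at fixed points of the loop:

```python
class Plugin:
    """Base plugin: every callback is a no-op by default."""

    def before_training_exp(self, strategy):
        pass

    def after_training_exp(self, strategy):
        pass


class LoggingPlugin(Plugin):
    """Records which callbacks fired, in order."""

    def __init__(self):
        self.events = []

    def before_training_exp(self, strategy):
        self.events.append("before_training_exp")

    def after_training_exp(self, strategy):
        self.events.append("after_training_exp")


class Strategy:
    """Minimal strategy: runs a loop and triggers plugin callbacks."""

    def __init__(self, plugins=None):
        self.plugins = plugins or []

    def _trigger(self, event):
        # Call the callback named `event` on every plugin, in order.
        for p in self.plugins:
            getattr(p, event)(self)

    def train(self, experiences):
        for exp in experiences:
            self._trigger("before_training_exp")
            # ... actual training on `exp` would happen here ...
            self._trigger("after_training_exp")


log = LoggingPlugin()
Strategy(plugins=[log]).train(["exp0", "exp1"])
print(log.events)
```

Because every plugin only reacts to events, several plugins can be combined in one `plugins` list, as long as they do not modify the same part of the state.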
The most general template is the `BaseTemplate`, from which all of Avalanche's strategies inherit. Most of the template's methods can be safely overridden (with some caveats that we will see later).
You can even use these components with your own `models` without using Avalanche's strategies!
Avalanche provides templates at different levels of generality (e.g. the `SupervisedTemplate`). These templates provide the training/evaluation loops, the plugin callbacks, and the global state of the loop.
The `before/after` methods are responsible for calling the plugins. Notice that before the start of each experience during training we have several phases: dataset adaptation, dataloader creation, model adaptation, and optimizer initialization. Dynamic models (see the `models` tutorial) are updated during the model adaptation phase by calling their `adaptation` method.
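The ordering of these phases matters when you pick a plugin callback (it explains, for example, why a plugin can safely replace the dataloader in `before_training_exp`). Here is a rough plain-Python sketch of the per-experience setup; the method names are simplified stand-ins that only mimic Avalanche's real ones:

```python
class SGDTemplateSketch:
    """Rough sketch of the per-experience setup order. Method names are
    simplified stand-ins, not Avalanche's real implementation."""

    def __init__(self):
        self.trace = []  # records the order in which phases run

    def train_dataset_adaptation(self):
        # Dataset adaptation: e.g. augment or extend the training data.
        self.trace.append("train_dataset_adaptation")

    def make_train_dataloader(self):
        # Build the dataloader from the adapted dataset.
        self.trace.append("make_train_dataloader")

    def model_adaptation(self):
        # Dynamic models grow here (e.g. add units for new classes).
        self.trace.append("model_adaptation")

    def make_optimizer(self):
        # The optimizer is (re)built after the model may have changed.
        self.trace.append("make_optimizer")

    def _before_training_exp(self):
        self.train_dataset_adaptation()
        self.make_train_dataloader()
        self.model_adaptation()
        self.make_optimizer()
        # Plugin `before_training_exp` callbacks fire only at this point,
        # i.e. after the dataloader already exists.
        self.trace.append("plugins: before_training_exp")


template = SGDTemplateSketch()
template._before_training_exp()
print(template.trace)
```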
The global state of the loop is stored in the strategy's attributes:

* `self.clock`: keeps track of several event counters.
* `self.experience`: the current experience.
* `self.adapted_dataset`: the data modified by the dataset adaptation phase.
* `self.dataloader`: the current dataloader.
* `self.mbatch`: the current mini-batch. For supervised classification problems, mini-batches have the form `<x, y, t>`, where `x` is the input, `y` is the target class, and `t` is the task label.
* `self.mb_output`: the current model's output.
* `self.loss`: the current loss.
* `self.is_training`: `True` if the strategy is in training mode.
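To illustrate how this state is used, here is a toy inner loop in plain Python. The `strategy` object and the list-based "model" are made up for the example; Avalanche's real loop works on tensors and also runs the backward pass and optimizer steps:

```python
from types import SimpleNamespace

# Toy stand-in for a strategy: a "model" that doubles its inputs and an
# L1-style "criterion" (both are illustrative, not Avalanche's API).
strategy = SimpleNamespace(
    model=lambda x: [v * 2 for v in x],
    criterion=lambda out, y: sum(abs(o - t) for o, t in zip(out, y)),
)


def training_epoch_sketch(strategy, dataloader):
    """Illustrative inner loop: the loop state lives on the strategy."""
    for batch in dataloader:
        strategy.mbatch = batch        # current mini-batch <x, y, t>
        x, y, t = batch                # input, target, task label
        strategy.mb_output = strategy.model(x)
        strategy.loss = strategy.criterion(strategy.mb_output, y)
        # backward pass and optimizer step would go here


# One mini-batch: inputs [1, 2], targets [2, 4], task label 0.
training_epoch_sketch(strategy, [([1, 2], [2, 4], 0)])
print(strategy.loss)  # prints 0
```

Because the state is stored on the strategy rather than in local variables, plugins receiving the strategy can inspect or modify it between callbacks.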
As we mentioned, most Avalanche strategies are implemented as plugins on top of a base strategy (e.g. the `Naive` strategy). This approach reduces overhead and code duplication, improving code readability and prototyping speed.
To create a plugin, define a class that inherits from the base plugin class (`SupervisedPlugin`) and implement the callbacks that you need. The exact callbacks to use depend on the aim of your plugin; you can use the loop shown above to understand which ones you need. For example, a simple replay plugin can use `after_training_exp` to update the buffer after each training experience, and `before_training_exp` to customize the dataloader. Notice that `before_training_exp` is executed after `make_train_dataloader`, which means that the `Naive` strategy has already updated the dataloader. If we used another callback, such as `before_train_dataset_adaptation`, our dataloader would have been overwritten by the `Naive` strategy. Plugin methods always receive the `strategy` as an argument, so they can access and modify the strategy's state.
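The replay plugin just described can be sketched in plain Python. Everything below is illustrative (plain lists instead of Avalanche's buffers and dataloaders): `after_training_exp` saves samples from the experience just seen, and `before_training_exp` mixes them into the data of the next one.

```python
from types import SimpleNamespace


class ReplayPluginSketch:
    """Illustrative replay plugin (plain lists; not Avalanche's real API)."""

    def __init__(self, mem_size):
        self.mem_size = mem_size
        self.buffer = []  # samples stored from past experiences

    def before_training_exp(self, strategy):
        # Runs *after* `make_train_dataloader`, so we can safely replace
        # the dataloader the base strategy just created.
        if self.buffer:
            strategy.dataloader = list(strategy.adapted_dataset) + self.buffer

    def after_training_exp(self, strategy):
        # Update the buffer with samples from the experience just seen,
        # keeping at most `mem_size` of them.
        self.buffer = (self.buffer + list(strategy.adapted_dataset))[-self.mem_size:]


# Toy strategy state: a SimpleNamespace standing in for the real strategy.
strategy = SimpleNamespace(adapted_dataset=[1, 2, 3], dataloader=[1, 2, 3])
plugin = ReplayPluginSketch(mem_size=2)

plugin.before_training_exp(strategy)  # buffer empty: dataloader untouched
plugin.after_training_exp(strategy)   # buffer now holds [2, 3]

strategy.adapted_dataset = [4, 5]
strategy.dataloader = [4, 5]
plugin.before_training_exp(strategy)  # mixes buffered samples into the new data
print(strategy.dataloader)            # prints [4, 5, 2, 3]
```

Note how both callbacks receive the strategy and act only through its state (`adapted_dataset`, `dataloader`), which is what makes plugins composable.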
When you override a template method, remember that the templates call the plugin callbacks (`before/after`) at the appropriate points. For example, `before_training` and `after_training` are called before and after the training loop, respectively. The easiest way to avoid mistakes is to start from the template's method that you want to override and modify it to your own needs without removing the callbacks handling.
Keep in mind that the evaluation plugin (see the `evaluation` tutorial) uses the strategy callbacks.
The `SupervisedTemplate`, the template for continual supervised strategies, provides the global state of the loop in the strategy's attributes, which you can safely use when you override a method. For instance, the `Cumulative` strategy trains a model continually on the union of all the experiences encountered so far. To achieve this, the cumulative strategy overrides the dataset adaptation step (`train_dataset_adaptation`) and updates `self.adapted_dataset` by concatenating all the previous experiences with the current one.
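As a plain-Python sketch of that override (names simplified; real Avalanche strategies concatenate `AvalancheDataset`s rather than lists):

```python
class TemplateSketch:
    """Minimal stand-in for a strategy template (illustrative names)."""

    def __init__(self):
        self.experience = None       # data of the current experience
        self.adapted_dataset = None  # data actually used for training

    def train_dataset_adaptation(self):
        # Default behaviour: train only on the current experience.
        self.adapted_dataset = list(self.experience)


class CumulativeSketch(TemplateSketch):
    """Overrides the adaptation step to train on the union of all data."""

    def __init__(self):
        super().__init__()
        self.seen = []  # everything encountered so far

    def train_dataset_adaptation(self):
        self.seen += list(self.experience)
        self.adapted_dataset = list(self.seen)


strat = CumulativeSketch()
strat.experience = [1, 2]
strat.train_dataset_adaptation()
strat.experience = [3]
strat.train_dataset_adaptation()
print(strat.adapted_dataset)  # prints [1, 2, 3]
```

Overriding a single, well-scoped method like this keeps the rest of the loop (and all plugin callbacks) intact.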