Avalanche - v0.4.0

Training

Baselines and Strategies Code Examples


Avalanche offers significant support for training (with templates, strategies, and plug-ins). Here you can find a list of examples related to training and to some of the strategies available in Avalanche (each strategy reproduces the original paper's results in the CL-Baselines repository):

  • Joint-Training: this example shows how to take a stream of experiences and train on all of them simultaneously. This is useful for implementing the "offline" or "multi-task" upper bound.

  • Replay strategy: a simple example of how to use replay in Avalanche.

  • AR1 strategy: a simple example of how to use the AR1 strategy.

  • CoPE Strategy: a simple example of how to use the CoPE plugin. It runs in the online data-incremental setting, where both learning and evaluation are completely task-agnostic.

  • Cumulative Strategy: how to define your own cumulative strategy based on the different data loaders made available in Avalanche.

  • Deep SLDA: a simple example of how to use the Deep SLDA strategy.

  • Early Stopping: this example shows how to use early stopping to dynamically stop training once the model has converged, instead of training for a fixed number of epochs.

  • Object Detection: this example shows how to run object detection/segmentation tasks.

  • Object Detection with LVIS: this example shows how to run object detection/segmentation tasks with a toy benchmark based on the LVIS dataset.

  • Object Detection Training: a set of examples showing how you can use Avalanche for distributed training of object detectors.

  • EWC on MNIST: this example tests EWC on Split MNIST and Permuted MNIST.

  • LWF on MNIST: this example tests LwF on Permuted MNIST.

  • GEM and A-GEM on MNIST: this example shows how to use the GEM and A-GEM strategies on MNIST.

  • Ex-Model Continual Learning: this example shows how to create a stream of pre-trained models from which to learn.

  • Generative Replay: a simple example of how to implement generative replay in Avalanche.

  • iCARL strategy: a simple example showing how to use the iCaRL strategy.

  • LaMAML strategy: an example of how to use a meta continual learning strategy (LaMAML) in Avalanche.

  • RWalk strategy: an example of how to use the RWalk strategy.

  • Online Naive: an example of running the Naive strategy in an online setting.

  • Synaptic Intelligence: a simple example of how to use the Synaptic Intelligence plugin.

  • Continual Sequence Classification: a sequence classification example using torchaudio and Speech Commands.
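Several of the examples above (replay, generative replay, CoPE) revolve around a bounded memory of past samples that is refreshed as the stream advances. As a framework-free illustration of the core idea (a sketch only, not Avalanche's own storage-policy classes), a reservoir-sampling buffer can be written in a few lines of plain Python:

```python
import random


class ReservoirBuffer:
    """Fixed-capacity buffer holding a uniform random sample of a stream.

    Illustrative sketch only; Avalanche ships its own storage policies.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total number of items observed so far
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(item)
        else:
            # Replace a stored item with probability capacity / seen, which
            # keeps every observed item equally likely to be in the buffer.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = item

    def sample(self, k):
        """Draw a replay minibatch from the buffer."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))


buf = ReservoirBuffer(capacity=200)
for x in range(10_000):
    buf.add(x)
print(len(buf.buffer))  # 200
```

A replay strategy would interleave minibatches drawn via `sample()` with the current experience's data; the reservoir rule is what keeps the memory unbiased toward any single experience.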

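The early-stopping example above hinges on a simple patience counter over a validation metric. A minimal, framework-free sketch of that logic (a hypothetical helper class, not the plugin Avalanche itself provides):

```python
class EarlyStopping:
    """Stop when the monitored loss has not improved for `patience` checks.

    Hypothetical sketch; Avalanche users would reach for the library's own
    early-stopping plugin instead.
    """

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_checks = 0

    def step(self, val_loss):
        """Record one validation result; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience


stopper = EarlyStopping(patience=3)
losses = [1.00, 0.80, 0.79, 0.79, 0.79, 0.79]
# Stop at the first epoch whose validation loss exhausts the patience budget.
stopped_at = next(i for i, loss in enumerate(losses, start=1) if stopper.step(loss))
print(stopped_at)  # 6
```

In a training loop, `step()` would be called once per epoch (or per evaluation pass) on the validation loss, replacing a fixed `train_epochs` count with a convergence-driven exit.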