Avalanche - v0.3.0
Benchmarks

Benchmarks and Dataset Code Examples


Last updated 2 years ago


Avalanche offers extensive support for defining your own benchmarks (the instantiation of a scenario with one or more datasets) or for using "classic" benchmarks already consolidated in the literature.

You can find examples related to the benchmarks here:

  • Classic MNIST benchmarks: in this simple example we show all the different ways you can use MNIST with Avalanche.

  • SplitCifar100 benchmark: in this example CIFAR100 is used with its canonical split into 10 experiences of 10 classes each.

  • CLEAR benchmark: training and evaluating on the CLEAR benchmark (RGB images).

  • CLEAR Linear benchmark: training and evaluating on the CLEAR benchmark (with pre-trained features).

  • Detection Benchmark: about the utilities you can use to create a detection benchmark.

  • Endless CL Simulator: this example makes use of the Endless-Continual-Learning-Simulator's derived dataset scenario.

  • Simple CTRL benchmark: in this example we show a simple way to use the CTrL benchmark.

  • Task-Incremental Learning: this example trains on Split CIFAR10 with the Naive strategy; each experience has a different task label.

  • HuggingFace integration: how to use HuggingFace models and datasets within Avalanche for Natural Language Processing.