Benchmarks

Benchmarks and Datasets: Code Examples

Avalanche offers significant support for defining your own benchmarks (the instantiation of a scenario with one or more datasets) or for using "classic" benchmarks already consolidated in the literature.
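To make the idea concrete, here is a minimal sketch in plain Python of what a class-incremental benchmark does: it splits a labeled dataset into a sequence of experiences, each covering a disjoint subset of classes. This is only an illustration of the concept; the `split_by_classes` helper is hypothetical, and in practice you would use Avalanche's own benchmark classes and generators instead.

```python
# Illustrative sketch (no Avalanche dependency): a class-incremental
# benchmark partitions a labeled dataset into experiences, each one
# holding a disjoint subset of the classes.
# NOTE: split_by_classes is a hypothetical helper, not Avalanche API.

def split_by_classes(samples, n_experiences):
    """Group (x, y) samples into n_experiences disjoint class groups."""
    classes = sorted({y for _, y in samples})
    assert len(classes) % n_experiences == 0, "classes must divide evenly"
    per_exp = len(classes) // n_experiences
    experiences = []
    for i in range(n_experiences):
        exp_classes = set(classes[i * per_exp:(i + 1) * per_exp])
        experiences.append([(x, y) for x, y in samples if y in exp_classes])
    return experiences

# Toy dataset: 8 samples over 4 classes, split into 2 experiences.
data = [(f"img{i}", i % 4) for i in range(8)]
streams = split_by_classes(data, n_experiences=2)
print([sorted({y for _, y in exp}) for exp in streams])  # → [[0, 1], [2, 3]]
```

Avalanche's classic benchmarks (such as the MNIST and CIFAR-100 splits listed below) apply this same pattern to real datasets and wrap the result in ready-to-use train and test streams.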

You can find examples related to the benchmarks here:

  • Classic MNIST benchmarks: in this simple example we show all the different ways you can use MNIST with Avalanche.

  • SplitCifar100 benchmark: in this example CIFAR-100 is used with its canonical split into 10 experiences of 10 classes each.

  • CLEAR benchmark: training and evaluating on the CLEAR benchmark (RGB images).

  • CLEAR Linear benchmark: training and evaluating on the CLEAR benchmark (with pre-trained features).

  • Detection Benchmark: about the utilities you can use to create a detection benchmark.

  • Endless CL Simulator: this example makes use of the Endless-Continual-Learning-Simulator's derived dataset scenario.

  • Simple CTRL benchmark: in this example we show a simple way to use the ctrl benchmark.

  • Task-Incremental Learning: this example trains on Split CIFAR-10 with the Naive strategy; each experience has a different task label.
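Since the last example above relies on task labels, a short plain-Python sketch may help clarify what task-incremental means: each experience additionally carries a task identifier that the model receives at both training and test time. The `with_task_labels` helper below is hypothetical, shown only to illustrate the idea rather than Avalanche's actual API.

```python
# Illustrative sketch (hypothetical helper, not Avalanche API): in a
# task-incremental setting every sample is tagged with the id of the
# experience (task) it belongs to, and that id is available to the model.

def with_task_labels(experiences):
    """Tag each experience's (x, y) samples with the experience index."""
    return [
        [(x, y, task_id) for x, y in exp]
        for task_id, exp in enumerate(experiences)
    ]

# Two experiences of two samples each; task ids 0 and 1 are attached.
exps = [[("a", 0), ("b", 1)], [("c", 2), ("d", 3)]]
labeled = with_task_labels(exps)
print(labeled[1])  # → [('c', 2, 1), ('d', 3, 1)]
```

In Avalanche, benchmarks created with task labels expose this information per experience, so a multi-head model can select the right output head for each task.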