Benchmarks
Benchmarks and Datasets: Code Examples
Avalanche offers significant support for defining your own benchmarks (the instantiation of a scenario with one or more datasets) or for using "classic" benchmarks already consolidated in the literature.
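As a quick orientation before the examples, here is a minimal sketch of the "classic" route: it loads SplitMNIST from avalanche.benchmarks.classic and iterates its train stream. The constructor arguments shown (n_experiences, seed) and the stream/experience attributes follow the Avalanche API as commonly documented, but exact defaults and download behavior may differ across versions.

```python
# Minimal sketch: load a classic benchmark and inspect its streams.
from avalanche.benchmarks.classic import SplitMNIST

# 5 experiences, 2 MNIST classes each (the usual Split-MNIST setup).
benchmark = SplitMNIST(n_experiences=5, seed=1)

for experience in benchmark.train_stream:
    print(
        f"Experience {experience.current_experience}: "
        f"classes {experience.classes_in_this_experience}, "
        f"{len(experience.dataset)} training samples"
    )

# The test stream mirrors the train stream, one experience per split.
test_stream = benchmark.test_stream
```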
You can find examples related to the benchmarks here:
Classic MNIST benchmarks: in this simple example we show all the different ways you can use MNIST with Avalanche.
SplitCifar100 benchmark: in this example CIFAR-100 is used with its canonical split into 10 experiences, 10 classes each.
CLEAR benchmark: training and evaluating on the CLEAR benchmark (RGB images).
CLEAR Linear benchmark: training and evaluating on the CLEAR benchmark with pre-trained features.
Detection Benchmark: an overview of the utilities you can use to create a detection benchmark.
Endless CL Simulator: this example makes use of the Endless-Continual-Learning-Simulator's derived dataset scenario.
Simple CTRL benchmark: in this example we show a simple way to use the CTrL benchmark.
Task-Incremental Learning: this example trains on Split CIFAR-10 with the Naive strategy; each experience has a different task label (see the sketch after this list).
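For the task-incremental setting mentioned in the last item, here is a minimal sketch assuming SplitCIFAR10 from avalanche.benchmarks.classic and its return_task_id flag; argument names may vary across Avalanche versions, and the training strategy itself is omitted to keep the sketch focused on the benchmark.

```python
# Minimal sketch: a task-incremental Split CIFAR-10 benchmark.
from avalanche.benchmarks.classic import SplitCIFAR10

# 5 experiences, 2 classes each; return_task_id=True assigns a distinct
# task label to every experience (task-incremental setting).
benchmark = SplitCIFAR10(n_experiences=5, return_task_id=True, seed=1)

for experience in benchmark.train_stream:
    print(
        f"Task {experience.task_label}: "
        f"classes {experience.classes_in_this_experience}"
    )
```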