Avalanche offers significant support for defining your own benchmarks (the instantiation of a scenario with one or more datasets) or for using "classic" benchmarks already consolidated in the literature.
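For instance, a classic benchmark can usually be instantiated in a few lines. Below is a minimal sketch, assuming a recent Avalanche version where `SplitMNIST` is exposed under `avalanche.benchmarks.classic`; defaults and exact signatures may vary between releases:

```python
from avalanche.benchmarks.classic import SplitMNIST

# Instantiate a classic benchmark: MNIST split into 5 experiences
# of disjoint classes (exact defaults may differ across versions).
benchmark = SplitMNIST(n_experiences=5, seed=1)

# Each stream is a sequence of experiences, each wrapping a dataset.
for experience in benchmark.train_stream:
    print("Experience", experience.current_experience,
          "classes:", experience.classes_in_this_experience,
          "samples:", len(experience.dataset))
```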
You can find examples related to the benchmarks here:
- Classic MNIST benchmarks: in this simple example we show all the different ways you can use MNIST with Avalanche.
- SplitCifar100 benchmark: in this example CIFAR100 is used with its canonical split into 10 experiences of 10 classes each.
- CLEAR benchmark: training and evaluating on the CLEAR benchmark (RGB images).
- CLEAR Linear benchmark: training and evaluating on the CLEAR benchmark (with pre-trained features).
- Detection Benchmark: about the utilities you can use to create a detection benchmark.
- Endless CL Simulator: this example makes use of the Endless-Continual-Learning-Simulator's derived dataset scenario.
- Simple CTRL benchmark: in this example we show a simple way to use the CTRL benchmark.
- Task-Incremental Learning: this example trains on Split CIFAR10 with the Naive strategy, with each experience carrying a different task label (see the sketch after this list).
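As a rough sketch of the task-incremental setting mentioned in the last item, the classic generators accept a flag to attach a distinct task label to each experience. The snippet below assumes `SplitCIFAR10` with a `return_task_id` parameter, which may differ slightly across Avalanche versions:

```python
from avalanche.benchmarks.classic import SplitCIFAR10

# Task-incremental variant: each experience gets its own task label.
benchmark = SplitCIFAR10(n_experiences=5, return_task_id=True, seed=1)

for experience in benchmark.train_stream:
    print("Experience", experience.current_experience,
          "task label:", experience.task_label)
```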