# Benchmarks

*Avalanche* offers significant support for *defining your own benchmarks* (the instantiation of one scenario with one or more datasets) or for using **"classic" benchmarks** already consolidated in the literature.

You can find **examples** related to the benchmarks here:

* [Classic MNIST benchmarks](https://github.com/ContinualAI/avalanche/blob/master/examples/all_mnist.py): *in this simple example we show all the different ways you can use MNIST with Avalanche.*
* [SplitCifar100 benchmark](https://github.com/ContinualAI/avalanche/blob/master/examples/lamaml_cifar100.py): *in this example CIFAR100 is used with its canonical split into 10 experiences of 10 classes each.*
* [CLEAR benchmark](https://github.com/ContinualAI/avalanche/blob/master/examples/clear.py): *training and evaluating on the CLEAR benchmark (RGB images).*
* [CLEAR Linear benchmark](https://github.com/ContinualAI/avalanche/blob/master/examples/clear_linear.py): *training and evaluating on the CLEAR benchmark (with pre-trained features).*
* [Detection Benchmark](https://github.com/ContinualAI/avalanche/blob/master/examples/detection_examples_utils.py): *about the utilities you can use to create a detection benchmark.*
* [Endless CL Simulator](https://github.com/ContinualAI/avalanche/blob/master/examples/endless_cl_sim.py): *this example makes use of the Endless-Continual-Learning-Simulator's derived dataset scenario.*
* [Simple CTRL benchmark](https://github.com/ContinualAI/avalanche/blob/master/examples/simple_ctrl.py): *in this example we show a simple way to use the CTrL benchmark.*
* [Task-Incremental Learning](https://github.com/ContinualAI/avalanche/blob/master/examples/task_incremental.py): *this example trains on Split CIFAR10 with the Naive strategy; each experience has a different task label.*
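To make the "split" idea behind benchmarks like SplitCifar100 concrete, here is a minimal pure-Python sketch (it does *not* use the Avalanche API, and `split_classes` is a hypothetical helper invented for illustration): a class-incremental benchmark partitions the dataset's class set into fixed-size groups, one group of classes per experience.

```python
# Hypothetical helper, for illustration only: partition class IDs
# 0..n_classes-1 into n_experiences equal, disjoint groups, the way a
# class-incremental "split" benchmark assigns classes to experiences.
def split_classes(n_classes, n_experiences):
    if n_classes % n_experiences != 0:
        raise ValueError("classes must divide evenly across experiences")
    per_exp = n_classes // n_experiences
    return [
        list(range(i * per_exp, (i + 1) * per_exp))
        for i in range(n_experiences)
    ]

# The canonical SplitCifar100 setting: 100 classes split into
# 10 experiences of 10 classes each.
experiences = split_classes(100, 10)
print(len(experiences))   # → 10
print(experiences[0])     # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

In the real library the benchmark classes also attach the corresponding train/test data to each experience and expose them as streams; this sketch only shows how the class partition itself is laid out.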
