# Benchmarks

*Avalanche* offers significant support for *defining your own benchmarks* (instantiation of one scenario with one or multiple datasets) or for using **"classic" benchmarks** already consolidated in the literature.

You can find **examples** related to the benchmarks here:

* [Classic MNIST benchmarks](https://github.com/ContinualAI/avalanche/blob/master/examples/all_mnist.py): *in this simple example we show all the different ways you can use MNIST with Avalanche.*
* [CLEAR benchmark](https://github.com/ContinualAI/avalanche/blob/master/examples/clear.py): *training and evaluating on the CLEAR benchmark (RGB images).*
* [CLEAR Linear benchmark](https://github.com/ContinualAI/avalanche/blob/master/examples/clear_linear.py): *training and evaluating on the CLEAR benchmark (with pre-trained features).*
* [Detection Benchmark](https://github.com/ContinualAI/avalanche/blob/master/examples/detection_examples_utils.py): *the utils you can use to create a detection benchmark.*
* [Endless CL Simulator](https://github.com/ContinualAI/avalanche/blob/master/examples/endless_cl_sim.py): *this example makes use of the Endless-Continual-Learning-Simulator's derived dataset scenario.*
* [Simple CTRL benchmark](https://github.com/ContinualAI/avalanche/blob/master/examples/simple_ctrl.py): *a simple way to use the CTRL benchmark.*
* [Hugging Face integration](https://github.com/ContinualAI/avalanche/blob/master/examples/nlp.py): *how to use Hugging Face models and datasets within Avalanche for Natural Language Processing.*
