Loggers

Examples for the Loggers module offered in Avalanche. The complete script below trains a Naive strategy on a 5-experience Split MNIST benchmark and reports metrics through three loggers at once: an interactive logger (stdout), a text-file logger, and a Tensorboard logger.

# --- IMPORTS
# Note: the module paths below follow the Avalanche 0.1.x layout and may
# differ slightly in other releases.
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from torchvision import transforms
from torchvision.datasets import MNIST
from torchvision.transforms import RandomCrop, ToTensor

from avalanche.benchmarks.generators import nc_scenario
from avalanche.evaluation.metrics import accuracy_metrics, loss_metrics, \
    timing_metrics, cpu_usage_metrics, ExperienceForgetting, \
    StreamConfusionMatrix, disk_usage_metrics
from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger
from avalanche.models import SimpleMLP
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.strategies import Naive
# ---------

# --- CONFIG
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# ---------

# --- TRANSFORMATIONS
train_transform = transforms.Compose([
    RandomCrop(28, padding=4),
    ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
test_transform = transforms.Compose([
    ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
# ---------

# --- SCENARIO CREATION
mnist_train = MNIST('./data/mnist', train=True,
                    download=True, transform=train_transform)
mnist_test = MNIST('./data/mnist', train=False,
                   download=True, transform=test_transform)
scenario = nc_scenario(
    mnist_train, mnist_test, 5, task_labels=False, seed=1234)
# ---------

# MODEL CREATION
model = SimpleMLP(num_classes=scenario.n_classes)

# DEFINE THE EVALUATION PLUGIN AND LOGGER
# The evaluation plugin manages the metrics computation.
# It takes as argument a list of metrics and a list of loggers.
# The evaluation plugin calls the loggers to serialize the metrics
# and save them in persistent memory or print them in the standard output.

# log to Tensorboard
tb_logger = TensorboardLogger()

# log to text file
text_logger = TextLogger(open('log.txt', 'a'))

# print to stdout
interactive_logger = InteractiveLogger()

eval_plugin = EvaluationPlugin(
    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    timing_metrics(epoch=True, epoch_running=True),
    cpu_usage_metrics(experience=True),
    ExperienceForgetting(),
    StreamConfusionMatrix(num_classes=scenario.n_classes, save_image=False),
    disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loggers=[interactive_logger, text_logger, tb_logger])

# CREATE THE STRATEGY INSTANCE (NAIVE)
cl_strategy = Naive(
    model, SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
    device=device, evaluator=eval_plugin)

# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in scenario.train_stream:
    print("Start of experience: ", experience.current_experience)
    print("Current Classes: ", experience.classes_in_this_experience)

    # train returns a dictionary which contains all the metric values
    res = cl_strategy.train(experience, num_workers=4)
    print('Training completed')

    print('Computing accuracy on the whole test set')
    # eval also returns a dictionary which contains all the metric values
    results.append(cl_strategy.eval(scenario.test_stream, num_workers=4))
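
Both train and eval return a plain Python dictionary mapping metric names to their most recent values, so the logged metrics can also be inspected programmatically. The short sketch below is not part of the original example; it simply prints the metrics collected by the last eval call, and the key names depend on the metrics configured in the EvaluationPlugin.

# Inspect the metric values returned by the last eval() call
# (assumes the training loop above has populated `results`).
last_eval_metrics = results[-1]
for metric_name, metric_value in last_eval_metrics.items():
    print(f"{metric_name}: {metric_value}")

The same values are what the loggers serialize: the TextLogger appends them to log.txt, while the TensorboardLogger writes event files that can be viewed by pointing tensorboard --logdir at the directory it logs to.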

🤝 Run it on Google Colab

You can run this chapter and play with it on Google Colaboratory:

Notebook currently unavailable.
