In this report, I'll show you how you can visualize any model's performance with Weights & Biases.

We'll see how to log metrics from vanilla for loops, boosting models (XGBoost and LightGBM), scikit-learn models, and neural networks.

Try it in this Kaggle kernel →

If you have any questions, we'd love to answer them.

Log a metric

1. Log any metric with Weights & Biases


# Get Apple stock price data
import pandas as pd
import wandb

# Read in dataset
apple = pd.read_csv("../input/kernel-files/apple.csv")
apple = apple[-1000:]  # keep the most recent 1,000 rows
wandb.init(project="visualize-models", name="a_metric")

# Log the metric
for price in apple['close']:
    wandb.log({"Stock Price": price})

XGBoost and LightGBM

2. Visualize boosting model performance

Start out by importing the experiment tracking library and setting up your free W&B account, then pass the W&B callback into your training call:


import wandb
import lightgbm as lgb
import xgboost as xgb

# lightgbm callback (the second argument should be a lgb.Dataset)
lgb.train(params, train_data, callbacks=[wandb.lightgbm.wandb_callback()])

# xgboost callback
xgb.train(param, xg_train, num_round, watchlist, callbacks=[wandb.xgboost.wandb_callback()])


3. Visualize scikit-learn performance

Logging scikit-learn plots with Weights & Biases is simple.

Step 1: First import wandb and initialize a new run.

import wandb

wandb.init(project="visualize-models", name="sklearn")

# load and preprocess dataset
# train a model

Step 2: Visualize individual plots.

# Visualize single plot
wandb.sklearn.plot_confusion_matrix(y_true, y_pred, labels)

Or visualize all plots at once:

# Visualize all classifier plots
wandb.sklearn.plot_classifier(clf, X_train, X_test, y_train, y_test, y_pred, y_probas, labels, model_name='SVC', feature_names=None)

# All regression plots
wandb.sklearn.plot_regressor(reg, X_train, X_test, y_train, y_test, model_name='Ridge')

# All clustering plots
wandb.sklearn.plot_clusterer(kmeans, X_train, cluster_labels, labels=None, model_name='KMeans')

Neural Network

4. Visualize neural network performance

Start out by installing the experiment tracking library and setting up your free W&B account, then add WandbCallback to your Keras fit call:


# Add WandbCallback() to the fit function
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=config.epochs,
    callbacks=[WandbCallback(data_type="image", labels=labels)])

Using PyTorch?

Here's how to use W&B to track your model performance, gradients and predictions.

5. Visualize a Hyperparameter Sweep

Running a hyperparameter sweep with Weights & Biases is very easy. There are just 3 simple steps:

  1. Define the sweep: we do this by creating a dictionary or a YAML file that specifies the parameters to search through, the search strategy, the optimization metric, and so on.

  2. Initialize the sweep: with one line of code we initialize the sweep and pass in the dictionary of sweep configurations: sweep_id = wandb.sweep(sweep_config)

  3. Run the sweep agent: also accomplished with one line of code, we call wandb.agent() and pass the sweep_id to run, along with a function that defines your model architecture and trains it: wandb.agent(sweep_id, function=train)
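Putting step 1 into code, here is a sketch of a sweep configuration dictionary (the parameter names, values, and metric are illustrative). Steps 2 and 3 are the one-liners above; they need a logged-in W&B account to run.

```python
# 1. Define the sweep: search strategy, optimization metric, and parameters
sweep_config = {
    "method": "random",  # search strategy: "grid", "random", or "bayes"
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"values": [0.001, 0.01, 0.1]},
        "epochs": {"values": [5, 10, 20]},
    },
}

# 2. Initialize the sweep (requires a logged-in account)
# sweep_id = wandb.sweep(sweep_config)

# 3. Run the agent with your training function
# wandb.agent(sweep_id, function=train)
```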