This repository was archived by the owner on Aug 28, 2025. It is now read-only.
Merged
Changes shown from 28 of 29 commits:
c1c8168  update optimization (awaelchli, Mar 13, 2023)
0504638  update inception (awaelchli, Mar 13, 2023)
da590db  update attention (awaelchli, Mar 13, 2023)
ec3d50a  incomplete GNN (awaelchli, Mar 13, 2023)
0b92f1d  update energy models (awaelchli, Mar 13, 2023)
a5466e4  update deep autoencoders (awaelchli, Mar 13, 2023)
88722ac  normalizing flows incomplete (awaelchli, Mar 13, 2023)
47f1cfe  autoregressive (awaelchli, Mar 13, 2023)
cfab2ee  vit (awaelchli, Mar 13, 2023)
b4d7be3  meta learning (awaelchli, Mar 13, 2023)
8ab731c  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Mar 13, 2023)
18c2430  Apply suggestions from code review (awaelchli, Mar 13, 2023)
000c803  update gnn (awaelchli, Mar 13, 2023)
3824605  simclr (awaelchli, Mar 13, 2023)
d1a93e7  update (awaelchli, Mar 13, 2023)
6e04b38  Merge branch 'upgrade/course' of github.com:Lightning-AI/tutorials in… (awaelchli, Mar 13, 2023)
9377298  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Mar 13, 2023)
b24b122  Update course_UvA-DL/05-transformers-and-MH-attention/Transformers_MH… (awaelchli, Mar 13, 2023)
099c50a  update (awaelchli, Mar 13, 2023)
82f2be6  links (awaelchli, Mar 13, 2023)
a6a1a17  Merge branch 'main' into upgrade/course (Borda, Mar 14, 2023)
5733cc0  Merge branch 'main' into upgrade/course (Borda, Mar 14, 2023)
28accf9  2.0.0rc0 (Borda, Mar 14, 2023)
10615e9  lightning (Borda, Mar 14, 2023)
217be55  2.0 (Borda, Mar 14, 2023)
1dc811a  Apply suggestions from code review (awaelchli, Mar 14, 2023)
bb1e40a  docs build fix (awaelchli, Mar 14, 2023)
835a628  stupid sphinx (awaelchli, Mar 14, 2023)
94e6725  Merge branch 'main' into upgrade/course (Borda, Mar 14, 2023)
_requirements/default.txt (2 changes: 1 addition & 1 deletion)

@@ -1,5 +1,5 @@
 setuptools==67.4.0
 ipython[notebook]>=8.0.0, <8.12.0
 torch>=1.8.1, <1.14.0
-pytorch-lightning>=1.4, <1.9
+pytorch-lightning>=1.4, <2.0.0
 torchmetrics>=0.7, <0.12
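The requirements bump above is the heart of this PR: the notebooks move from the standalone `pytorch_lightning` 1.x package to the unified `lightning` 2.0 package. A minimal sketch of the rename as it recurs throughout the diff, assuming `lightning>=2.0` is installed:

```python
# Before (PyTorch Lightning 1.x), as the notebooks used to do:
# import pytorch_lightning as pl
# pl.seed_everything(42)

# After (Lightning 2.0), the pattern applied throughout this PR:
import lightning as L

L.seed_everything(42)  # seeds Python, NumPy and PyTorch RNGs in one call
```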
course_UvA-DL/01-introduction-to-pytorch/.meta.yml (2 changes: 1 addition & 1 deletion)

@@ -1,7 +1,7 @@
 title: "Tutorial 1: Introduction to PyTorch"
 author: Phillip Lippe
 created: 2021-08-27
-updated: 2023-01-04
+updated: 2023-03-14
 license: CC BY-SA
 description: |
   This tutorial will give a short introduction to PyTorch basics, and get you setup for writing your own neural networks.
course_UvA-DL/01-introduction-to-pytorch (notebook script)

@@ -25,18 +25,18 @@
 import time
 
 import matplotlib.pyplot as plt
+
+# %matplotlib inline
+import matplotlib_inline.backend_inline
 import numpy as np
 import torch
 import torch.nn as nn
 import torch.utils.data as data
 
-# %matplotlib inline
-from IPython.display import set_matplotlib_formats
 from matplotlib.colors import to_rgba
 from torch import Tensor
 from tqdm.notebook import tqdm  # Progress bar
 
-set_matplotlib_formats("svg", "pdf")
+matplotlib_inline.backend_inline.set_matplotlib_formats("svg", "pdf")  # For export
 
 # %% [markdown]
 # ## The Basics of PyTorch

@@ -185,7 +185,7 @@
 print("X2 (after)", x2)
 
 # %% [markdown]
-# In-place operations are usually marked with a underscore postfix (e.g. "add_" instead of "add").
+# In-place operations are usually marked with a underscore postfix (for example `torch.add_` instead of `torch.add`).
 #
 # Another common operation aims at changing the shape of a tensor.
 # A tensor of size (2,3) can be re-organized to any other shape with the same number of elements (e.g. a tensor of size (6), or (3,2), ...).
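The import reshuffle above recurs in every notebook touched by this PR: `set_matplotlib_formats` is now imported from the `matplotlib_inline` package rather than from `IPython.display`, where it has been deprecated in recent IPython releases. A short sketch of the new pattern, assuming `matplotlib_inline` (which ships with IPython) is available:

```python
import matplotlib_inline.backend_inline

# Render notebook figures as SVG plus PDF (useful for export)
# instead of the default PNG.
matplotlib_inline.backend_inline.set_matplotlib_formats("svg", "pdf")
```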
course_UvA-DL/02-activation-functions/.meta.yml (2 changes: 1 addition & 1 deletion)

@@ -1,7 +1,7 @@
 title: "Tutorial 2: Activation Functions"
 author: Phillip Lippe
 created: 2021-08-27
-updated: 2023-01-04
+updated: 2023-03-14
 license: CC BY-SA
 description: |
   In this tutorial, we will take a closer look at (popular) activation functions and investigate their effect on optimization properties in neural networks.
course_UvA-DL/02-activation-functions (notebook script)

@@ -11,6 +11,9 @@
 from urllib.error import HTTPError
 
 import matplotlib.pyplot as plt
+
+# %matplotlib inline
+import matplotlib_inline.backend_inline
 import numpy as np
 import seaborn as sns
 import torch

@@ -19,14 +22,11 @@
 import torch.optim as optim
 import torch.utils.data as data
 import torchvision
 
-# %matplotlib inline
-from IPython.display import set_matplotlib_formats
 from torchvision import transforms
 from torchvision.datasets import FashionMNIST
 from tqdm.notebook import tqdm
 
-set_matplotlib_formats("svg", "pdf")  # For export
+matplotlib_inline.backend_inline.set_matplotlib_formats("svg", "pdf")  # For export
 sns.set()
 
 # %% [markdown]
course_UvA-DL/03-initialization-and-optimization/.meta.yml

@@ -1,7 +1,7 @@
 title: "Tutorial 3: Initialization and Optimization"
 author: Phillip Lippe
 created: 2021-08-27
-updated: 2023-01-04
+updated: 2023-03-14
 license: CC BY-SA
 tags:
   - Image
course_UvA-DL/03-initialization-and-optimization (notebook script)

@@ -13,27 +13,27 @@
 import urllib.request
 from urllib.error import HTTPError
 
+import lightning as L
 import matplotlib.pyplot as plt
+
+# %matplotlib inline
+import matplotlib_inline.backend_inline
 import numpy as np
-import pytorch_lightning as pl
 import seaborn as sns
 import torch
 import torch.nn as nn
 import torch.nn.functional as F
 import torch.utils.data as data
 
-# %matplotlib inline
-from IPython.display import set_matplotlib_formats
 from matplotlib import cm
 from torchvision import transforms
 from torchvision.datasets import FashionMNIST
 from tqdm.notebook import tqdm
 
-set_matplotlib_formats("svg", "pdf")  # For export
+matplotlib_inline.backend_inline.set_matplotlib_formats("svg", "pdf")  # For export
 sns.set()
 
 # %% [markdown]
-# Instead of the `set_seed` function as in Tutorial 3, we can use PyTorch Lightning's build-in function `pl.seed_everything`.
+# Instead of the `set_seed` function as in Tutorial 3, we can use Lightning's build-in function `L.seed_everything`.
 # We will reuse the path variables `DATASET_PATH` and `CHECKPOINT_PATH` as in Tutorial 3.
 # Adjust the paths if necessary.

@@ -44,7 +44,7 @@
 CHECKPOINT_PATH = os.environ.get("PATH_CHECKPOINT", "saved_models/InitOptim/")
 
 # Seed everything
-pl.seed_everything(42)
+L.seed_everything(42)
 
 # Ensure that all operations are deterministic on GPU (if used) for reproducibility
 torch.backends.cudnn.deterministic = True

@@ -937,8 +937,10 @@ def pathological_curve_loss(w1, w2):
 def plot_curve(
     curve_fn, x_range=(-5, 5), y_range=(-5, 5), plot_3d=False, cmap=cm.viridis, title="Pathological curvature"
 ):
-    _ = plt.figure()
-    ax = plt.axes(projection="3d") if plot_3d else plt.axes()
+    fig = plt.figure()
+    ax = fig.gca()
+    if plot_3d:
+        ax = fig.add_subplot(projection="3d")
 
     x = torch.arange(x_range[0], x_range[1], (x_range[1] - x_range[0]) / 100.0)
     y = torch.arange(y_range[0], y_range[1], (y_range[1] - y_range[0]) / 100.0)
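The `plot_curve` rewrite above replaces the conditional `plt.axes(projection=...)` call with an explicit `fig.add_subplot(projection="3d")`, the form newer Matplotlib releases recommend for creating 3-D axes. A sketch of the two cases, assuming a recent Matplotlib:

```python
import matplotlib.pyplot as plt

# 2-D case: take the figure's default axes.
fig = plt.figure()
ax = fig.gca()

# 3-D case: request the projection explicitly when adding the subplot.
fig3d = plt.figure()
ax3d = fig3d.add_subplot(projection="3d")
```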
course_UvA-DL/04-inception-resnet-densenet/.meta.yaml (4 changes: 2 additions & 2 deletions)

@@ -1,7 +1,7 @@
 title: "Tutorial 4: Inception, ResNet and DenseNet"
 author: Phillip Lippe
 created: 2021-08-27
-updated: 2023-01-04
+updated: 2023-03-14
 license: CC BY-SA
 tags:
   - Image

@@ -18,6 +18,6 @@ requirements:
   - matplotlib
   - seaborn
   - tabulate
-  - pytorch-lightning>=1.8
+  - lightning>=2.0.0rc0
 accelerator:
   - GPU
course_UvA-DL/04-inception-resnet-densenet (notebook script)

@@ -8,10 +8,11 @@
 from types import SimpleNamespace
 from urllib.error import HTTPError
 
+import lightning as L
 import matplotlib
 import matplotlib.pyplot as plt
+import matplotlib_inline.backend_inline
 import numpy as np
-import pytorch_lightning as pl
 import seaborn as sns
 import tabulate
 import torch

@@ -21,13 +22,13 @@
 import torchvision
 
 # %matplotlib inline
-from IPython.display import HTML, display, set_matplotlib_formats
+from IPython.display import HTML, display
+from lightning.pytorch.callbacks import LearningRateMonitor, ModelCheckpoint
 from PIL import Image
-from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
 from torchvision import transforms
 from torchvision.datasets import CIFAR10
 
-set_matplotlib_formats("svg", "pdf")  # For export
+matplotlib_inline.backend_inline.set_matplotlib_formats("svg", "pdf")  # For export
 matplotlib.rcParams["lines.linewidth"] = 2.0
 sns.reset_orig()

@@ -46,7 +47,7 @@
 
 
 # Function for setting the seed
-pl.seed_everything(42)
+L.seed_everything(42)
 
 # Ensure that all operations are deterministic on GPU (if used) for reproducibility
 torch.backends.cudnn.deterministic = True

@@ -136,9 +137,9 @@
 # We need to do a little trick because the validation set should not use the augmentation.
 train_dataset = CIFAR10(root=DATASET_PATH, train=True, transform=train_transform, download=True)
 val_dataset = CIFAR10(root=DATASET_PATH, train=True, transform=test_transform, download=True)
-pl.seed_everything(42)
+L.seed_everything(42)
 train_set, _ = torch.utils.data.random_split(train_dataset, [45000, 5000])
-pl.seed_everything(42)
+L.seed_everything(42)
 _, val_set = torch.utils.data.random_split(val_dataset, [45000, 5000])
 
 # Loading the test set

@@ -180,7 +181,7 @@
 # %% [markdown]
 # ## PyTorch Lightning
 #
-# In this notebook and in many following ones, we will make use of the library [PyTorch Lightning](https://www.pytorchlightning.ai/).
+# In this notebook and in many following ones, we will make use of the library [PyTorch Lightning](https://www.lightning.ai/docs/pytorch/stable).
 # PyTorch Lightning is a framework that simplifies your code needed to train, evaluate, and test a model in PyTorch.
 # It also handles logging into [TensorBoard](https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html), a visualization toolkit for ML experiments, and saving model checkpoints automatically with minimal code overhead from our side.
 # This is extremely helpful for us as we want to focus on implementing different model architectures and spend little time on other code overhead.

@@ -192,12 +193,12 @@
 
 # %%
 # Setting the seed
-pl.seed_everything(42)
+L.seed_everything(42)
 
 # %% [markdown]
 # Thus, in the future, we don't have to define our own `set_seed` function anymore.
 #
-# In PyTorch Lightning, we define `pl.LightningModule`'s (inheriting from `Module`) that organize our code into 5 main sections:
+# In PyTorch Lightning, we define `L.LightningModule`'s (inheriting from `Module`) that organize our code into 5 main sections:
 #
 # 1. Initialization (`__init__`), where we create all necessary parameters/models
 # 2. Optimizers (`configure_optimizers`) where we create the optimizers, learning rate scheduler, etc.

@@ -208,13 +209,13 @@
 # 5. Test loop (`test_step`) which is the same as validation, only on a test set.
 #
 # Therefore, we don't abstract the PyTorch code, but rather organize it and define some default operations that are commonly used.
-# If you need to change something else in your training/validation/test loop, there are many possible functions you can overwrite (see the [docs](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html) for details).
+# If you need to change something else in your training/validation/test loop, there are many possible functions you can overwrite (see the [docs](https://lightning.ai/docs/pytorch/stable/common/lightning_module.html) for details).
 #
 # Now we can look at an example of how a Lightning Module for training a CNN looks like:
 
 
 # %%
-class CIFARModule(pl.LightningModule):
+class CIFARModule(L.LightningModule):
     def __init__(self, model_name, model_hparams, optimizer_name, optimizer_hparams):
         """
         Inputs:

@@ -322,7 +323,7 @@ def create_model(model_name, model_hparams):
 # Besides the Lightning module, the second most important module in PyTorch Lightning is the `Trainer`.
 # The trainer is responsible to execute the training steps defined in the Lightning module and completes the framework.
 # Similar to the Lightning module, you can override any key part that you don't want to be automated, but the default settings are often the best practice to do.
-# For a full overview, see the [documentation](https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html).
+# For a full overview, see the [documentation](https://lightning.ai/docs/pytorch/stable/common/trainer.html).
 # The most important functions we use below are:
 #
 # * `trainer.fit`: Takes as input a lightning module, a training dataset, and an (optional) validation dataset.

@@ -345,10 +346,10 @@ def train_model(model_name, save_name=None, **kwargs):
         save_name = model_name
 
     # Create a PyTorch Lightning trainer with the generation callback
-    trainer = pl.Trainer(
+    trainer = L.Trainer(
         default_root_dir=os.path.join(CHECKPOINT_PATH, save_name),  # Where to save models
         # We run on a single GPU (if possible)
-        accelerator="gpu" if str(device).startswith("cuda") else "cpu",
+        accelerator="auto",
         devices=1,
         # How many epochs to train for if no patience is set
         max_epochs=180,

@@ -358,7 +359,6 @@ def train_model(model_name, save_name=None, **kwargs):
             ),  # Save the best checkpoint based on the maximum val_acc recorded. Saves only weights and not optimizer
             LearningRateMonitor("epoch"),
         ],  # Log learning rate every epoch
-        enable_progress_bar=True,
     )  # In case your notebook crashes due to the progress bar, consider increasing the refresh rate
     trainer.logger._log_graph = True  # If True, we plot the computation graph in tensorboard
     trainer.logger._default_hp_metric = None  # Optional logging argument that we don't need

@@ -370,7 +370,7 @@ def train_model(model_name, save_name=None, **kwargs):
         # Automatically loads the model with the saved hyperparameters
         model = CIFARModule.load_from_checkpoint(pretrained_filename)
     else:
-        pl.seed_everything(42)  # To be reproducable
+        L.seed_everything(42)  # To be reproducable
         model = CIFARModule(model_name=model_name, **kwargs)
         trainer.fit(model, train_loader, val_loader)
         model = CIFARModule.load_from_checkpoint(
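Two `Trainer` changes in the file above are worth calling out: `accelerator="auto"` lets Lightning pick the best available device instead of branching on CUDA manually, and `enable_progress_bar=True` is dropped because it is already the default. A sketch of the resulting construction, with a hypothetical checkpoint path:

```python
import os

import lightning as L
from lightning.pytorch.callbacks import LearningRateMonitor, ModelCheckpoint

trainer = L.Trainer(
    default_root_dir=os.path.join("saved_models", "demo"),  # hypothetical path
    accelerator="auto",  # picks GPU/MPS/CPU depending on the machine
    devices=1,
    max_epochs=180,
    callbacks=[
        # Keep only the best checkpoint by validation accuracy, weights only.
        ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc"),
        LearningRateMonitor("epoch"),  # log the learning rate once per epoch
    ],
)
```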
course_UvA-DL/05-transformers-and-MH-attention/.meta.yml (4 changes: 2 additions & 2 deletions)

@@ -1,7 +1,7 @@
 title: "Tutorial 5: Transformers and Multi-Head Attention"
 author: Phillip Lippe
 created: 2021-06-30
-updated: 2023-01-04
+updated: 2023-03-14
 license: CC BY-SA
 build: 0
 tags:

@@ -19,6 +19,6 @@ requirements:
   - torchvision
   - matplotlib
   - seaborn
-  - pytorch-lightning>=1.8
+  - lightning>=2.0.0rc0
 accelerator:
   - GPU