Posts by Tags

Polyak step size

adam

Decay No More

less than 1 minute read

Published:

I wrote a blog post that was published in the ICLR Blog Post Track 2023. The post, titled Decay No More, explains the details of AdamW and its weight decay mechanism. Check it out here.

adamw

How to jointly tune learning rate and weight decay for AdamW

14 minute read

Published:

TL;DR: AdamW is often considered a method that decouples weight decay and learning rate. In this blog post, we show that this is not true for the specific way AdamW is implemented in PyTorch. We also show how to adapt the tuning strategy to fix this: when doubling the learning rate, the weight decay should be halved.
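
A minimal sketch of what this means in practice, assuming PyTorch's torch.optim.AdamW, whose decoupled decay step shrinks each parameter by a factor of 1 - lr * weight_decay, so the product lr * weight_decay sets the effective decay strength. The base values below are made up for illustration:

```python
import torch

# Hypothetical base configuration for a learning-rate sweep (values chosen for illustration).
base_lr = 1e-3
base_wd = 1e-1
decay_strength = base_lr * base_wd  # PyTorch AdamW shrinks params by (1 - lr * weight_decay)

model = torch.nn.Linear(10, 1)

for lr in [base_lr, 2 * base_lr, 4 * base_lr]:
    # Keep lr * weight_decay constant: doubling lr means halving weight_decay.
    wd = decay_strength / lr
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=wd)
    print(f"lr={lr:.0e}, weight_decay={wd:.2e}, product={lr * wd:.1e}")
```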

datasets

A Bibliography Database for Machine Learning

2 minute read

Published:

Getting the correct BibTeX entry for a conference paper (e.g. published at NeurIPS, ICML, or ICLR) is annoyingly hard: if you search for the title, you will often find a link to arXiv or to the PDF file, but not to the conference website that contains the BibTeX entry.

machine learning

Decay No More

less than 1 minute read

Published:

I wrote a blog post that was published in the ICLR Blog Post Track 2023. The post, titled Decay No More, explains the details of AdamW and its weight decay mechanism. Check it out here.

open-source

A collection of resources for creating open-source software packages

7 minute read

Published:

Making your research code open-source, tested, and documented is quite simple nowadays. This post gives an overview of the most important steps and collects useful resources, e.g. tutorials for Readthedocs, Sphinx (Gallery), and unit testing in Python.
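
As a taste of the unit-testing part, here is a minimal pytest sketch; the clip function is a hypothetical stand-in for code you would normally import from your own package:

```python
# test_example.py -- run with `pytest test_example.py`.
import pytest


def clip(x: float, lo: float, hi: float) -> float:
    """Toy function standing in for code from your package."""
    return max(lo, min(hi, x))


def test_clip_within_bounds():
    assert clip(0.5, 0.0, 1.0) == 0.5


@pytest.mark.parametrize("x,expected", [(5.0, 1.0), (-5.0, 0.0), (0.3, 0.3)])
def test_clip_parametrized(x, expected):
    assert clip(x, 0.0, 1.0) == expected
```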

optimization

Decay No More

less than 1 minute read

Published:

I wrote a blog post that was published in the ICLR Blog Post Track 2023. The post, titled Decay No More, explains the details of AdamW and its weight decay mechanism. Check it out here.

python

software

A collection of resources for creating open-source software packages

7 minute read

Published:

Making your research code open-source, tested, and documented is quite simple nowadays. This post gives an overview of the most important steps and collects useful resources, e.g. tutorials for Readthedocs, Sphinx (Gallery), and unit testing in Python.

weight decay

How to jointly tune learning rate and weight decay for AdamW

14 minute read

Published:

TL;DR: AdamW is often considered a method that decouples weight decay and learning rate. In this blog post, we show that this is not true for the specific way AdamW is implemented in PyTorch. We also show how to adapt the tuning strategy to fix this: when doubling the learning rate, the weight decay should be halved.