
GradualWarmupScheduler

내공얌냠 2023. 5. 18. 17:21

Description

Until a specified epoch is reached, the learning rate is increased gradually.

Once that epoch is reached, the scheduler hands control over to the learning rate policy you configured (the after_scheduler).
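For intuition, the ramp-up phase works out to a simple linear interpolation. The sketch below is illustrative only, not the library's exact code: with multiplier=1.0 the learning rate climbs from near zero to the optimizer's base learning rate over total_epoch epochs, and with multiplier > 1 it climbs from base_lr up to base_lr * multiplier.

def warmup_lr(base_lr, epoch, total_epoch, multiplier=1.0):
    # Illustrative sketch of linear warmup, not the library's exact implementation.
    if multiplier == 1.0:
        return base_lr * epoch / total_epoch
    return base_lr * ((multiplier - 1.0) * epoch / total_epoch + 1.0)

# e.g. base_lr=0.1, total_epoch=5 gives 0.02, 0.04, 0.06, 0.08, 0.10 for epochs 1..5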

 

Installation

!pip install warmup-scheduler

Usage

import torch
from torch.optim.lr_scheduler import StepLR
from torch.optim import SGD

from warmup_scheduler import GradualWarmupScheduler


if __name__ == '__main__':
    model = [torch.nn.Parameter(torch.randn(2, 2, requires_grad=True))]
    optim = SGD(model, 0.1)

    # scheduler_warmup is chained with scheduler_steplr
    scheduler_steplr = StepLR(optim, step_size=10, gamma=0.1)
    scheduler_warmup = GradualWarmupScheduler(optim, multiplier=1, total_epoch=5, after_scheduler=scheduler_steplr)

    # This dummy zero-gradient step avoids PyTorch's warning about calling the
    # scheduler before optimizer.step() (see issue #8 in the repo).
    optim.zero_grad()
    optim.step()

    for epoch in range(1, 20):
        scheduler_warmup.step(epoch)
        print(epoch, optim.param_groups[0]['lr'])

        optim.step()    # in real training this would follow loss.backward()
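
In an actual training loop, the warmup scheduler is stepped once per epoch, typically after the inner batch loop. The sketch below is only a rough integration example; dataloader, criterion, model, and num_epochs are hypothetical placeholders, not part of the library.

# Hedged sketch; dataloader, criterion, model, and num_epochs are placeholders.
for epoch in range(1, num_epochs + 1):
    for inputs, targets in dataloader:
        optim.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optim.step()
    scheduler_warmup.step(epoch)  # advance warmup, then hand off to StepLR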

 

References

https://github.com/ildoonet/pytorch-gradual-warmup-lr
