Learning Rate Scheduler
Auto Scheduler
For standard PyTorch usage, d9d includes the d9d.loop.auto package. Its providers consume a Pydantic configuration object and manage the creation of standard schedulers. Piecewise linear schedules (warmup, hold, decay) are supported.
d9d.loop.auto.auto_lr_scheduler
AutoLRSchedulerProvider
Bases: LRSchedulerProvider
LRSchedulerProvider that builds a learning rate scheduler based on a configuration object.
__init__(config)
Constructs the AutoLRSchedulerProvider object.
PiecewiseConfig
Bases: BaseModel
Configuration for the piecewise learning rate scheduler.
Attributes:
| Name | Type | Description |
|---|---|---|
| `name` | `Literal['piecewise']` | Discriminator tag, must be `"piecewise"`. |
| `scheduler` | `PiecewiseSchedulerConfig` | Detailed configuration for the piecewise schedule. |
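The warmup/hold/decay shape this schedule produces can be sketched in plain Python. Note this is an illustrative sketch only: the fraction parameters and `final_lr` below are assumptions for the example, not actual fields of `PiecewiseSchedulerConfig`.

```python
def piecewise_lr(
    step: int,
    total_steps: int,
    base_lr: float = 1e-3,
    final_lr: float = 0.0,
    warmup_frac: float = 0.1,  # assumed fraction of steps spent warming up
    hold_frac: float = 0.4,    # assumed fraction of steps spent holding base_lr
) -> float:
    """Piecewise linear schedule: linear warmup -> hold -> linear decay."""
    warmup_steps = int(total_steps * warmup_frac)
    hold_steps = int(total_steps * hold_frac)
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr.
        return base_lr * step / max(warmup_steps, 1)
    if step < warmup_steps + hold_steps:
        # Hold at base_lr.
        return base_lr
    # Linear decay from base_lr down to final_lr.
    decay_steps = max(total_steps - warmup_steps - hold_steps, 1)
    progress = (step - warmup_steps - hold_steps) / decay_steps
    return base_lr + (final_lr - base_lr) * min(progress, 1.0)
```

For example, with `total_steps=100` the learning rate ramps over the first 10 steps, holds for 40, then decays to zero over the remaining 50.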
Interface
If you need a custom learning rate scheduler, implement the LRSchedulerProvider protocol.
d9d.loop.control.lr_scheduler_provider
InitializeLRSchedulerContext
dataclass
Context data required to initialize an LR scheduler.
Attributes:
| Name | Type | Description |
|---|---|---|
| `dist_context` | `DistributedContext` | The distributed context. |
| `total_steps` | `int` | The total number of training steps. |
| `optimizer` | `Optimizer` | The optimizer instance that the scheduler will control. |
LRSchedulerProvider
Bases: Protocol
Protocol for defining how Learning Rate schedulers are created.
__call__(context)
abstractmethod
Initializes the LR scheduler for a specific model pipeline stage.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `context` | `InitializeLRSchedulerContext` | Context for this operation. | *required* |
Returns:
| Type | Description |
|---|---|
| `LRSchedulerProtocol` | The instantiated LR scheduler adhering to the protocol. |
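Since `LRSchedulerProvider` is a `Protocol`, any callable with the matching signature satisfies it. The sketch below shows the shape of a custom provider; to stay self-contained it uses simplified stand-in types (a toy optimizer and a context without `dist_context`) rather than the real d9d classes, and a hand-rolled cosine scheduler rather than `LRSchedulerProtocol`'s actual interface, which may differ.

```python
import math
from dataclasses import dataclass


# Simplified stand-ins for illustration; the real types live in
# d9d.loop.control.lr_scheduler_provider and torch.optim.
@dataclass
class ToyOptimizer:
    param_groups: list


@dataclass
class ToyContext:
    total_steps: int
    optimizer: ToyOptimizer


class CosineScheduler:
    """Toy scheduler: anneals each param group's lr to 0 over total_steps."""

    def __init__(self, optimizer: ToyOptimizer, total_steps: int) -> None:
        self.optimizer = optimizer
        self.total_steps = total_steps
        self.base_lrs = [g["lr"] for g in optimizer.param_groups]
        self.step_count = 0

    def step(self) -> None:
        self.step_count += 1
        frac = min(self.step_count / self.total_steps, 1.0)
        scale = 0.5 * (1.0 + math.cos(math.pi * frac))
        for group, base_lr in zip(self.optimizer.param_groups, self.base_lrs):
            group["lr"] = base_lr * scale


class CosineLRSchedulerProvider:
    """A custom provider: a callable that builds a scheduler from the context."""

    def __call__(self, context: ToyContext) -> CosineScheduler:
        return CosineScheduler(context.optimizer, context.total_steps)
```

With the real library you would accept an `InitializeLRSchedulerContext` and return an object adhering to `LRSchedulerProtocol`; the provider shape (a callable taking the context) stays the same.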