Full Fine-Tuning

About

The d9d.peft.full_tune package allows you to integrate standard fine-tuning into the PEFT workflow. It does not alter the model architecture. Instead, it uses regex patterns to identify specific modules (e.g., Norm layers or specific Heads) and unfreezes their parameters.

This is particularly useful when combined with other PEFT methods via Stacking, allowing for hybrid training strategies (e.g., LoRA on Attention + Full Tune on LayerNorm).

d9d.peft.full_tune

Package for Full Fine-Tuning functionality within the PEFT framework.

FullTune

Bases: PeftMethod[FullTuneConfig]

Implements Full Fine-Tuning as a 'PEFT' method.

Instead of injecting adapters, this method simply identifies existing parameters that match the configuration pattern and marks them for training.
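The selection-and-unfreeze step can be sketched in plain Python. This is not the d9d implementation; `unfreeze_matching` and `toy_params` are illustrative stand-ins, with each parameter's `requires_grad` flag modeled as a dict entry:

```python
import re

def unfreeze_matching(param_requires_grad: dict[str, bool],
                      module_name_pattern: re.Pattern) -> list[str]:
    """Mark parameters whose name matches the pattern as trainable.

    Mirrors the described behaviour: nothing is injected or rewritten;
    matching parameters are simply flagged for training.
    """
    unfrozen = []
    for name in param_requires_grad:
        if module_name_pattern.search(name):
            param_requires_grad[name] = True
            unfrozen.append(name)
    return unfrozen

# Toy parameter registry (name -> requires_grad), everything frozen initially.
toy_params = {
    "layers.0.attn.q_proj.weight": False,
    "layers.0.norm.weight": False,
    "layers.1.norm.weight": False,
    "lm_head.weight": False,
}

# Unfreeze every norm module, as a FullTune pattern might.
unfrozen = unfreeze_matching(toy_params, re.compile(r"\.norm\."))
# unfrozen == ["layers.0.norm.weight", "layers.1.norm.weight"]
```

In a hybrid setup, a LoRA method would target the attention projections while a pattern like the one above unfreezes only the norm weights.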

__init__(config)

Constructs a FullTune object.

Parameters:

    Name     Type            Description                                                     Default
    config   FullTuneConfig  Configuration defining the module name patterns to fine-tune.  required

FullTuneConfig

Bases: BaseModel

Configuration for Full Fine-Tuning.

Allows specifying which modules should be fully fine-tuned using regex patterns.

Attributes:

    Name                 Type                  Description
    kind                 Literal['full_tune']  Discriminator field, always "full_tune".
    module_name_pattern  Pattern               Regular expression matching module names to unfreeze.
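As an illustration of this config's shape, here is a minimal stand-in (a dataclass rather than the Pydantic BaseModel the real class inherits from; the class name is hypothetical):

```python
import re
from dataclasses import dataclass
from typing import Literal

@dataclass
class FullTuneConfigSketch:
    """Illustrative stand-in for FullTuneConfig (the real class is a Pydantic model)."""
    module_name_pattern: re.Pattern
    kind: Literal["full_tune"] = "full_tune"  # discriminator used when stacking methods

# Target all norm modules and the output head with a single pattern.
config = FullTuneConfigSketch(
    module_name_pattern=re.compile(r"(\.norm$|lm_head)")
)

print(config.kind)  # full_tune
print(bool(config.module_name_pattern.search("layers.3.norm")))  # True
```

The `kind` discriminator lets a stacked PEFT configuration dispatch each entry to the right method when several techniques are combined.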