About
The d9d.peft.full_tune package integrates standard fine-tuning into the PEFT workflow. It does not alter the model architecture; instead, it uses regex patterns to identify specific modules (e.g., norm layers or specific heads) and unfreezes their parameters.
This is particularly useful when combined with other PEFT methods via Stacking, allowing for hybrid training strategies (e.g., LoRA on Attention + Full Tune on LayerNorm).
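Conceptually, this amounts to selectively re-enabling gradients on an otherwise frozen model. A minimal sketch of the idea in plain PyTorch (illustrative only, not the package's actual implementation; the exact regex match semantics are an assumption):

```python
import re

import torch.nn as nn


def full_tune_matching(model: nn.Module, pattern: str) -> None:
    """Freeze the whole model, then unfreeze modules whose qualified name matches a regex."""
    compiled = re.compile(pattern)
    # Freezing everything else is normally handled by the surrounding PEFT workflow;
    # it is shown here only to make the sketch self-contained.
    for param in model.parameters():
        param.requires_grad = False
    for name, module in model.named_modules():
        # Whether the real method uses fullmatch or search is an assumption.
        if compiled.fullmatch(name):
            for param in module.parameters(recurse=False):
                param.requires_grad = True


# Example: train only the LayerNorm modules of a transformer.
# full_tune_matching(model, r".*layernorm.*")
```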
d9d.peft.full_tune
Package for Full Fine-Tuning functionality within the PEFT framework.
FullTune
Bases: PeftMethod[FullTuneConfig]
Implements Full Fine-Tuning as a 'PEFT' method.
Instead of injecting adapters, this method simply identifies existing parameters that match the configuration pattern and marks them for training.
Source code in d9d/peft/full_tune/method.py
__init__(config)
Constructs a FullTune object.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | FullTuneConfig | Configuration defining the module name patterns to fine-tune. | required |
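A hedged usage sketch, assuming FullTune and FullTuneConfig are importable from the documented source modules (re-export from the package root is not guaranteed):

```python
from d9d.peft.full_tune.config import FullTuneConfig
from d9d.peft.full_tune.method import FullTune

# The regex is illustrative; pydantic typically coerces the string
# into a compiled Pattern for the module_name_pattern field.
config = FullTuneConfig(
    kind="full_tune",
    module_name_pattern=r".*norm.*",
)
method = FullTune(config)
```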
Source code in d9d/peft/full_tune/method.py
FullTuneConfig
Bases: BaseModel
Configuration for Full Fine-Tuning.
Allows specifying which modules should be fully fine-tuned using regex patterns.
Attributes:
| Name | Type | Description |
|---|---|---|
| kind | Literal['full_tune'] | Discriminator field, always "full_tune". |
| module_name_pattern | Pattern | Regular expression matching module names to unfreeze. |
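To preview which modules a configuration would select, the compiled module_name_pattern can be applied to candidate module names. This is a sketch; the match semantics used internally (search vs. fullmatch) are an assumption, and the module names are illustrative:

```python
from d9d.peft.full_tune.config import FullTuneConfig  # module path taken from "Source code in" above

cfg = FullTuneConfig(
    kind="full_tune",
    module_name_pattern=r".*\.(input_layernorm|post_attention_layernorm)$",
)

for name in (
    "model.layers.0.input_layernorm",
    "model.layers.0.self_attn.q_proj",
):
    # module_name_pattern is a compiled re.Pattern, so re methods are available on it.
    print(name, bool(cfg.module_name_pattern.search(name)))
```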
Source code in d9d/peft/full_tune/config.py