
Conversation

@BenjaminBossan
Member

Description

There is a lot of code duplication in the way that adapter layers are initialized. This means that if, say, a new LoRA config argument is added, we need updates in the following places:

  1. LoraModel._create_and_replace: kwargs dict
  2. LoraModel._create_and_replace: update_layer arguments
  3. LoraModel._create_and_replace: update_layer signature
  4. Linear.__init__: signature (same for each LoRA layer type)
  5. Linear.__init__: update_layer arguments
  6. resolve_lora_variant: signature if the argument is tied to a LoRA variant

It is far too easy to forget one of these, and it is in fact a common mistake we see, especially from outside contributors (a simplified sketch of the current threading is shown below).
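
To make this concrete, here is a heavily simplified sketch of the status quo. It is illustrative only, not the actual PEFT source; the signatures are abbreviated and use_rslora stands in for whatever new field gets added. The numbered comments correspond to the list above.

    # Heavily simplified sketch of the current flow (not the actual PEFT source).
    # Adding a single new config field, e.g. use_rslora, touches every numbered spot.

    class Linear:
        def __init__(self, base_layer, adapter_name, r, lora_alpha, use_rslora=False, **kwargs):  # (4) signature
            self.base_layer = base_layer
            self.update_layer(adapter_name, r, lora_alpha, use_rslora=use_rslora, **kwargs)        # (5) arguments

        def update_layer(self, adapter_name, r, lora_alpha, use_rslora=False, **kwargs):           # (3) signature
            ...  # create lora_A / lora_B, apply scaling, dropout, etc.


    class LoraModel:
        def _create_and_replace(self, lora_config, adapter_name, target):
            kwargs = {
                "r": lora_config.r,
                "lora_alpha": lora_config.lora_alpha,
                "use_rslora": lora_config.use_rslora,  # (1) kwargs dict
                # ... every other config field repeated here ...
            }
            if isinstance(target, Linear):
                target.update_layer(
                    adapter_name,
                    kwargs["r"],
                    kwargs["lora_alpha"],
                    use_rslora=kwargs["use_rslora"],   # (2) update_layer arguments
                )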

This PR aims at removing this duplication as much as possible. The crux is that almost all of these arguments derive from the LoraConfig. Therefore, the duplication can be reduced by passing the LoraConfig directly instead of each of its individual arguments.
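
As a rough sketch of what this looks like (simplified, not the exact diff from this PR; the toy LoraConfig below only lists a subset of fields): r and lora_alpha stay explicit arguments, while everything else is read from the config where it is needed.

    # Simplified sketch of the refactored flow (not the exact diff from this PR).
    from dataclasses import dataclass

    @dataclass
    class LoraConfig:  # toy stand-in for peft.LoraConfig, subset of fields only
        r: int = 8
        lora_alpha: int = 8
        lora_dropout: float = 0.0
        use_rslora: bool = False
        lora_bias: bool = False


    class Linear:
        def __init__(self, base_layer, adapter_name, r, lora_alpha, config):
            self.base_layer = base_layer
            self.update_layer(adapter_name, r, lora_alpha, config=config)

        def update_layer(self, adapter_name, r, lora_alpha, config):
            # only the fields that are actually needed get pulled out here;
            # a new config field no longer has to be threaded through every signature
            lora_dropout = config.lora_dropout
            use_rslora = config.use_rslora
            ...


    class LoraModel:
        def _create_and_replace(self, lora_config, adapter_name, target):
            # r / lora_alpha can be overridden per module, so they stay explicit
            target.update_layer(adapter_name, lora_config.r, lora_config.lora_alpha, config=lora_config)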

As is, this change is only implemented for LoRA, which is by far the method most affected by this duplication. However, an argument could be made that, for consistency, the other PEFT methods should be refactored accordingly. Since AdaLoRA inherits from LoRA, it was also adapted.

Possible counterpoints

With this change, the arguments that are used are no longer listed explicitly. But since the LoraConfig is a typed dataclass, this replacement does not stop us from knowing which arguments are available (unlike, say, untyped kwargs). So for editing support and readability, this replacement should not hurt.
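
A tiny illustration of that point (hypothetical snippet, minimal stand-in for LoraConfig):

    from dataclasses import dataclass

    @dataclass
    class LoraConfig:  # minimal stand-in, single field only
        lora_dropout: float = 0.0

    def from_config(config: LoraConfig) -> float:
        return config.lora_dropout     # typed field: autocompletes, checked statically

    def from_kwargs(**kwargs) -> float:
        return kwargs["lora_dropout"]  # untyped: a typo here only surfaces at runtime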

Another possible issue with this proposal is that it could break existing functionality, since it is easy to overlook something during the refactoring. Not everything is covered by tests (some of the more rarely used quantized layer types, the TP layer), so a regression could slip through.

Comment on lines +153 to +157
lora_dropout = config.lora_dropout
init_lora_weights = config.init_lora_weights
use_rslora = config.use_rslora
lora_bias = config.lora_bias
inference_mode = config.inference_mode
@BenjaminBossan (Member, Author) commented on Dec 16, 2025

Note: I added variables like these to reduce diff noise. LMK if you'd rather remove those and just use config.lora_dropout etc. below.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Comment on lines +147 to +148
r: int,
lora_alpha: int,
@BenjaminBossan (Member, Author)

Note: r and lora_alpha can differ from the values in the config (e.g. because of rank_pattern), thus they are not taken from the config.
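
For context, a rough sketch of that resolution (the helper name and the exact matching rule are illustrative, not the actual PEFT code): rank_pattern can map module name patterns to per-module ranks that override config.r, so the caller resolves r before calling update_layer.

    import re

    # Illustrative helper (not the actual PEFT code): pick the per-module rank if
    # an entry in rank_pattern matches the module name, else fall back to config.r.
    def resolve_rank(config_r, rank_pattern, module_name):
        for key, rank in rank_pattern.items():
            if re.match(rf"(.*\.)?{key}$", module_name):
                return rank
        return config_r

    # e.g. with rank_pattern={"q_proj": 16} and config.r == 8:
    #   resolve_rank(8, {"q_proj": 16}, "model.layers.0.self_attn.q_proj") -> 16
    #   resolve_rank(8, {"q_proj": 16}, "model.layers.0.mlp.up_proj")      -> 8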
