[WIP] Refactor layer initialization #2960
base: main
Conversation
Description

There is a lot of code duplication in the way that adapter layers are initialized. This means that if, say, a new LoRA config argument is added, we need updates in the following places:

1. `LoraModel._create_and_replace`: `kwargs` dict
2. `LoraModel._create_and_replace`: `update_layer` arguments
3. `LoraModel._create_and_replace`: `update_layer` signature
4. `Linear.__init__`: signature (same for each LoRA layer type)
5. `Linear.__init__`: `update_layer` arguments
6. `resolve_lora_variant`: signature, if the argument is tied to a LoRA variant

It is far too easy to forget one of these, and it is in fact a common mistake we see, especially from outside contributors.

This PR aims at removing this duplication as much as possible. The crux is that almost all of these arguments derive from the `LoraConfig`. Therefore, the duplication can be reduced by passing the `LoraConfig` directly instead of each of its individual arguments (see the sketch after the counter points below).

As is, this change is only implemented for LoRA, which is by far the most affected by this duplication. Since AdaLoRA inherits from LoRA, it was also adapted. However, an argument could be made that, for consistency, the other PEFT methods should be refactored accordingly.

Possible counter points

With this change, the used arguments are no longer listed explicitly. But since the `LoraConfig` is a typed `dataclass`, this replacement does not stop us from knowing which arguments are available (unlike, say, using untyped `kwargs`). So for editing support and readability, this replacement should not hurt.

Another possible issue with this proposal is that it could easily break some functionality, as it is easy to overlook something. Not everything is covered by tests (some of the more rarely used quantized layer types, the TP layer), so a regression could slip through.
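To make the proposed change concrete, here is a minimal, illustrative before/after sketch. The names mirror the PEFT LoRA code (`LoraConfig`, `update_layer`), but the classes and signatures are simplified stand-ins, not the exact implementation:

```python
from dataclasses import dataclass, field


# Illustrative stand-in for the real LoraConfig, which has many more fields.
@dataclass
class LoraConfig:
    r: int = 8
    lora_alpha: int = 8
    lora_dropout: float = 0.0
    use_rslora: bool = False
    lora_bias: bool = False
    rank_pattern: dict = field(default_factory=dict)


# Before: each config value is threaded through every signature individually,
# so a new config field means touching all of these call sites.
class LinearBefore:
    def __init__(self, base_layer, adapter_name, r, lora_alpha, lora_dropout,
                 use_rslora, lora_bias, **kwargs):
        self.base_layer = base_layer
        self.update_layer(adapter_name, r, lora_alpha, lora_dropout, use_rslora, lora_bias)

    def update_layer(self, adapter_name, r, lora_alpha, lora_dropout, use_rslora, lora_bias):
        ...  # build the adapter weights from the individual arguments


# After: the config travels as one object; only values that may deviate from it
# (r and lora_alpha, e.g. via rank_pattern) stay as explicit arguments.
class LinearAfter:
    def __init__(self, base_layer, adapter_name, config: LoraConfig, *, r, lora_alpha, **kwargs):
        self.base_layer = base_layer
        self.update_layer(adapter_name, config, r=r, lora_alpha=lora_alpha)

    def update_layer(self, adapter_name, config: LoraConfig, *, r, lora_alpha):
        lora_dropout = config.lora_dropout  # everything else is read off the config
        ...
```

With this shape, adding a new `LoraConfig` field only requires changes where the field is actually consumed, instead of in every signature along the call chain.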
```python
lora_dropout = config.lora_dropout
init_lora_weights = config.init_lora_weights
use_rslora = config.use_rslora
lora_bias = config.lora_bias
inference_mode = config.inference_mode
```
Note: I added variables like these to reduce diff noise. LMK if you'd rather remove those and just use `config.lora_dropout` etc. below.
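For context, the two styles being weighed look roughly like this (a hypothetical helper, not the actual diff):

```python
from torch import nn


def build_dropout(config):
    # Style used in this PR: unpack the config value once, so later lines keep
    # referring to a local variable and the diff stays small.
    lora_dropout = config.lora_dropout
    return nn.Dropout(p=lora_dropout) if lora_dropout > 0.0 else nn.Identity()


def build_dropout_inline(config):
    # Alternative: read from the config at every use site, without local variables.
    return nn.Dropout(p=config.lora_dropout) if config.lora_dropout > 0.0 else nn.Identity()
```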
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
```python
r: int,
lora_alpha: int,
```
Note: `r` and `lora_alpha` can differ from the values in the config (e.g. because of `rank_pattern`), thus they are not taken from the config.
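To illustrate why these two stay explicit, here is a simplified sketch of how a per-module rank and alpha could be resolved from `rank_pattern` / `alpha_pattern` before the layer is created. The matching below is deliberately naive; the real code matches module names more carefully:

```python
def resolve_rank_and_alpha(config, module_name: str) -> tuple[int, int]:
    # Per-module overrides from rank_pattern / alpha_pattern win over the global
    # config values; unmatched modules fall back to config.r / config.lora_alpha.
    r = next((rank for pat, rank in config.rank_pattern.items() if pat in module_name), config.r)
    alpha = next((a for pat, a in config.alpha_pattern.items() if pat in module_name), config.lora_alpha)
    return r, alpha


# The resolved values are then passed alongside the config when the layer is built,
# e.g. (hypothetically): Linear(target, adapter_name, config, r=r, lora_alpha=alpha)
```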