forked from huggingface/diffusers
main merge #4
Closed
Conversation
add doc Co-authored-by: yiyixuxu <yixu310@gmail.com>
huggingface#6400)
* add: test to check if peft loras are loadable in non-peft envs.
* add torch_device appropriately.
* fix: get_dummy_inputs().
* test logits.
* rename
* debug
* debug
* fix: generator
* new assertion values after fixing the seed.
* shape
* remove print statements and settle this.
* to update values.
* change values when lora config is initialized under a fixed seed.
* update colab link
* update notebook link
* sanity restored by getting the exact same values without peft.
…a stuff. (huggingface#6426)
* handle rest of the stuff related to deprecated lora stuff.
* fix: copies
* don't modify the UNet in-place.
* fix: temporal autoencoder.
* manually remove lora layers.
* don't copy unet.
* alright
* remove lora attn processors from unet3d
* fix: unet3d.
* style
* Empty-Commit
Update README_sdxl.md
* I added a new docstring to the class. This makes it easier for other developers to understand what it does and where it is used.
* Update src/diffusers/models/unet_2d_blocks.py: change suggested by the maintainer. Co-authored-by: Sayak Paul <[email protected]>
* Update src/diffusers/models/unet_2d_blocks.py: add suggested text. Co-authored-by: Sayak Paul <[email protected]>
* Update unet_2d_blocks.py: changed the "Parameters" text to "Args".
* Update unet_2d_blocks.py: set proper indentation in this file.
* Update unet_2d_blocks.py: a small change to the act_fun argument line.
* I ran the black command to reformat the code style.
* Update unet_2d_blocks.py: added a docstring similar to the one in the original diffusion repository.
* Better way to write the binarize function.
* Solve check_code_quality error.
* My mistake: I opened the pull request without reformatting the file.
* Update image_processor.py
* remove extra variable and space
* Update image_processor.py
* Ran the ruff library to reformat my file.
---------
Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: YiYi Xu <[email protected]>
…#6437) * disable running peft non-peft lora test in the peft env. * Empty-Commit
This reverts commit fb4aec0.
This reverts commit 7715e6c.
…changes. (huggingface#6448) * debug * debug test_with_different_scales_fusion_equivalence * use the right method. * place it right. * let's see. * let's see again * alright then. * add a comment.
* Respect offline mode when loading model * default to local entry if connectionerror
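For the offline-mode entry above, a minimal sketch of the user-facing behaviour (the model id is illustrative and must already be present in the local cache, e.g. from an earlier online run):

```python
import torch
from diffusers import DiffusionPipeline

# Setting HF_HUB_OFFLINE=1 in the environment before Python starts forces offline
# resolution globally; local_files_only=True requests the same thing per call.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative, any cached checkpoint works
    torch_dtype=torch.float16,
    local_files_only=True,  # resolve from the local cache, never hit the network
)
```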
fix local links
* Make WDS pipeline interpolation type configurable.
* Make the VAE encoding batch size configurable.
* Make lora_alpha and lora_dropout configurable for LCM LoRA scripts.
* Generalize scalings_for_boundary_conditions function and make the timestep scaling configurable.
* Make LoRA target modules configurable for LCM-LoRA scripts.
* Move resolve_interpolation_mode to src/diffusers/training_utils.py and make interpolation type configurable in non-WDS script.
* apply suggestions from review
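For context on the scalings_for_boundary_conditions generalization above, a sketch of roughly what the helper computes, assuming the usual sigma_data = 0.5 from the consistency-models formulation:

```python
def scalings_for_boundary_conditions(timestep, sigma_data=0.5, timestep_scaling=10.0):
    # Consistency-model boundary conditions: as timestep -> 0, c_skip -> 1 and
    # c_out -> 0, so the distilled model reduces to the identity at the boundary.
    scaled_timestep = timestep_scaling * timestep
    c_skip = sigma_data**2 / (scaled_timestep**2 + sigma_data**2)
    c_out = scaled_timestep / (scaled_timestep**2 + sigma_data**2) ** 0.5
    return c_skip, c_out
```

Making the timestep scaling configurable only changes the factor (10.0 here); the functional form stays the same.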
…uggingface#6390)
* add documentation for DeepCache
* fix typo
* add wandb url for DeepCache
* fix some typos
* add item in _toctree.yml
* update formats for arguments
* Update deepcache.md
* Update docs/source/en/optimization/deepcache.md Co-authored-by: Sayak Paul <[email protected]>
* add StableDiffusionXLPipeline in doc
* Separate SDPipeline and SDXLPipeline
* Add the paper link of ablation experiments for hyper-parameters
* Apply suggestions from code review Co-authored-by: Steven Liu <[email protected]>
---------
Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
* Intel Gen 4 Xeon and later support bf16 * fix bf16 notes
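As a rough illustration of the bf16-on-CPU path mentioned above (model id and prompt are just examples):

```python
import torch
from diffusers import DiffusionPipeline

# bf16 is fast on CPUs with native support (e.g. 4th Gen Intel Xeon and later);
# older CPUs fall back to emulation and may be slower than fp32.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.bfloat16
)
pipe.to("cpu")
image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
```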
…ed dreambooth lora sdxl script (huggingface#6464)
* unwrap text encoder when saving hook only for full text encoder tuning
* unwrap text encoder when saving hook only for full text encoder tuning
* save embeddings in each checkpoint as well
* save embeddings in each checkpoint as well
* save embeddings in each checkpoint as well
* Update examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py Co-authored-by: Sayak Paul <[email protected]>
---------
Co-authored-by: Sayak Paul <[email protected]>
* debug * debug * more debug * more more debug * remove tests for LoRAAttnProcessors. * rename
* null-text-inversion-implementation * edited * edited * edited * edited * edited * edit * makestyle --------- Co-authored-by: Sayak Paul <[email protected]>
* post release * style --------- Co-authored-by: Patrick von Platen <[email protected]>
* introduce unload_lora. * fix-copies
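A minimal sketch of the load/unload round trip at the pipeline level (the LoRA repo id is hypothetical); the point of an unload helper is that it removes the injected LoRA layers and restores the base weights:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("some-user/some-lora")  # hypothetical LoRA checkpoint
# ... run inference with the LoRA active ...
pipe.unload_lora_weights()  # drop the injected LoRA layers; base weights are back
```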
* add: experimental script for diffusion dpo training.
* random_crop cli.
* fix: caption tokenization.
* fix: pixel_values index.
* fix: grad?
* debug
* fix: reduction.
* fixes in the loss calculation.
* style
* fix: unwrap call.
* fix: validation inference.
* add: initial sdxl script
* debug
* make sure images in the tuple are of same res
* fix model_max_length
* report print
* boom
* fix: numerical issues.
* fix: resolution
* comment about resize.
* change the order of the training transformation.
* save call.
* debug
* remove print
* manually detaching necessary?
* use the same vae for validation.
* add: readme.
* init works * add gluegen pipeline * add gluegen code * add another way to load language adapter * make style * Update README.md * change doc
…d modules in pipelines. (huggingface#6436) update
* introduce integrations module. * remove duplicate methods. * better imports. * move to loaders.py * remove peftadaptermixin from modelmixin. * add: peftadaptermixin selectively. * add: entry to _toctree * Empty-Commit
minor changes
add Co-authored-by: yiyixuxu <yixu310@gmail.com>
…or preservation loss (huggingface#6968) * fix bug in micro-conditioning of class images * fix bug in micro-conditioning of class images * style
…ggingface#6946) * feat: allow low_cpu_mem_usage in ip adapter loading * reduce the number of device placements. * documentation. * throw low_cpu_mem_usage warning only once from the main entry point.
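Assuming the new flag mirrors the low_cpu_mem_usage argument used elsewhere in diffusers loading code, usage would look roughly like this (repo, subfolder and weight name are the standard IP-Adapter ones from the docs):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="models",
    weight_name="ip-adapter_sd15.bin",
    low_cpu_mem_usage=True,  # the flag added here: fewer device placements while loading
)
```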
* Bugfix: correct import for diffusers * Fix: Prompt2Prompt example * Format style --------- Co-authored-by: YiYi Xu <[email protected]>
* Fix words * Fix --------- Co-authored-by: YiYi Xu <[email protected]>
* standardize model card * fix tags * correct import styling and update tags * run make style and make quality --------- Co-authored-by: Sayak Paul <[email protected]>
* Update textual_inversion.py * Apply suggestions from code review * Update textual_inversion.py * Update textual_inversion.py * Update textual_inversion.py * Update textual_inversion.py * Update examples/textual_inversion/textual_inversion.py Co-authored-by: Sayak Paul <[email protected]> * Update textual_inversion.py * styling --------- Co-authored-by: Sayak Paul <[email protected]>
Update ip_adapter.md
* first draft * fix path * fix path * i2vgen-xl * review * modelscopet2v * feedback
…pelines (huggingface#6951) copy docstring for `strength` from stablediffusion img2img pipeline to controlnet img2img pipelines
…ditionModel (huggingface#6663) * Fixed typos in __init__ and in forward of Unet3DConditionModel * Resolving conflicts --------- Co-authored-by: YiYi Xu <[email protected]>
* fix * update docstring --------- Co-authored-by: yiyixuxu <yixu310@gmail.com>
… in PyTorch 2.2 (huggingface#7008) Fixed deprecation warning for torch.utils._pytree._register_pytree_node in PyTorch 2.2 Co-authored-by: Yinghua <[email protected]>
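For context, PyTorch 2.2 renamed torch.utils._pytree._register_pytree_node to register_pytree_node and made the old name emit a deprecation warning. A sketch of the version guard such a fix typically applies, using a hypothetical MyPair container:

```python
from dataclasses import dataclass

import torch
import torch.utils._pytree as pytree
from packaging import version


@dataclass
class MyPair:  # hypothetical container used only for illustration
    first: torch.Tensor
    second: torch.Tensor


def _flatten(pair):
    # children to recurse into, plus static context (none needed here)
    return [pair.first, pair.second], None


def _unflatten(values, context):
    return MyPair(*values)


if version.parse(torch.__version__) >= version.parse("2.2"):
    pytree.register_pytree_node(MyPair, _flatten, _unflatten)   # new public name
else:
    pytree._register_pytree_node(MyPair, _flatten, _unflatten)  # pre-2.2 fallback
```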
…e#6995) * make text encoder component truly optional. * more fixes * Apply suggestions from code review Co-authored-by: YiYi Xu <[email protected]> --------- Co-authored-by: YiYi Xu <[email protected]>
* Add attention masking to attn processors * Update tensor conversion --------- Co-authored-by: YiYi Xu <[email protected]> Co-authored-by: Sayak Paul <[email protected]>
* update * update
huggingface#6994) fix Co-authored-by: yiyixuxu <yixu310@gmail.com>
* update * update * update * update * update * update * update * update * update * update
…L) by IPEX on CPU (huggingface#6683) * add stable_diffusion_xl_ipex community pipeline * make style for code quality check * update docs as suggested --------- Co-authored-by: Patrick von Platen <[email protected]>
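Community pipelines are pulled in through the custom_pipeline argument, so this one would be used roughly as follows (the base checkpoint id is illustrative; the IPEX preparation step and its arguments should be checked against the community README):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="stable_diffusion_xl_ipex",  # the community pipeline added here
    torch_dtype=torch.bfloat16,
)
# The IPEX variant exposes a preparation step that optimizes the UNet/VAE for CPU
# inference; see the community pipeline README for the exact call and arguments.
```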
* update * update * update
…ggingface#6941) * add ip-adapter support * support ip image embeds --------- Co-authored-by: Sayak Paul <[email protected]>
…tPipeline (huggingface#7031) * support ip adapter loading * fix style