diff --git a/.github/ISSUE_TEMPLATE/bug-report.yml b/.github/ISSUE_TEMPLATE/bug-report.yml
index b865f6c33d51..a0517725284e 100644
--- a/.github/ISSUE_TEMPLATE/bug-report.yml
+++ b/.github/ISSUE_TEMPLATE/bug-report.yml
@@ -57,50 +57,54 @@ body:
       description: |
         Your issue will be replied to more quickly if you can figure out the right person to tag with @.
         If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
-
+
         All issues are read by one of the core maintainers, so if you don't know who to tag, just leave this blank and a core maintainer will ping the right person.
-
+
         Please tag a maximum of 2 people.

-        Questions on DiffusionPipeline (Saving, Loading, From pretrained, ...):
+        Questions on DiffusionPipeline (Saving, Loading, From pretrained, ...): @sayakpaul @DN6

         Questions on pipelines:
-          - Stable Diffusion @yiyixuxu @DN6 @sayakpaul
-          - Stable Diffusion XL @yiyixuxu @sayakpaul @DN6
-          - Kandinsky @yiyixuxu
-          - ControlNet @sayakpaul @yiyixuxu @DN6
-          - T2I Adapter @sayakpaul @yiyixuxu @DN6
-          - IF @DN6
-          - Text-to-Video / Video-to-Video @DN6 @sayakpaul
-          - Wuerstchen @DN6
+          - Stable Diffusion @yiyixuxu @asomoza
+          - Stable Diffusion XL @yiyixuxu @sayakpaul @DN6
+          - Stable Diffusion 3: @yiyixuxu @sayakpaul @DN6 @asomoza
+          - Kandinsky @yiyixuxu
+          - ControlNet @sayakpaul @yiyixuxu @DN6
+          - T2I Adapter @sayakpaul @yiyixuxu @DN6
+          - IF @DN6
+          - Text-to-Video / Video-to-Video @DN6 @a-r-r-o-w
+          - Wuerstchen @DN6
           - Other: @yiyixuxu @DN6
+          - Improving generation quality: @asomoza

         Questions on models:
-          - UNet @DN6 @yiyixuxu @sayakpaul
-          - VAE @sayakpaul @DN6 @yiyixuxu
-          - Transformers/Attention @DN6 @yiyixuxu @sayakpaul @DN6
+          - UNet @DN6 @yiyixuxu @sayakpaul
+          - VAE @sayakpaul @DN6 @yiyixuxu
+          - Transformers/Attention @DN6 @yiyixuxu @sayakpaul
+
+        Questions on single file checkpoints: @DN6

-        Questions on Schedulers: @yiyixuxu
+        Questions on Schedulers: @yiyixuxu

-        Questions on LoRA: @sayakpaul
+        Questions on LoRA: @sayakpaul

-        Questions on Textual Inversion: @sayakpaul
+        Questions on Textual Inversion: @sayakpaul

-        Questions on Training:
-          - DreamBooth @sayakpaul
-          - Text-to-Image Fine-tuning @sayakpaul
-          - Textual Inversion @sayakpaul
-          - ControlNet @sayakpaul
+        Questions on Training:
+          - DreamBooth @sayakpaul
+          - Text-to-Image Fine-tuning @sayakpaul
+          - Textual Inversion @sayakpaul
+          - ControlNet @sayakpaul

-        Questions on Tests: @DN6 @sayakpaul @yiyixuxu
+        Questions on Tests: @DN6 @sayakpaul @yiyixuxu

         Questions on Documentation: @stevhliu

         Questions on JAX- and MPS-related things: @pcuenca

-        Questions on audio pipelines: @DN6
-
+        Questions on audio pipelines: @sanchit-gandhi
+
+
     placeholder: "@Username ..."
diff --git a/.github/ISSUE_TEMPLATE/remote-vae-pilot-feedback.yml b/.github/ISSUE_TEMPLATE/remote-vae-pilot-feedback.yml
new file mode 100644
index 000000000000..c94d3bed9738
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/remote-vae-pilot-feedback.yml
@@ -0,0 +1,38 @@
+name: "\U0001F31F Remote VAE"
+description: Feedback for remote VAE pilot
+labels: [ "Remote VAE" ]
+
+body:
+  - type: textarea
+    id: positive
+    validations:
+      required: true
+    attributes:
+      label: Did you like the remote VAE solution?
+      description: |
+        If you liked it, we would appreciate it if you could elaborate on what you liked.
+
+  - type: textarea
+    id: feedback
+    validations:
+      required: true
+    attributes:
+      label: What can be improved about the current solution?
+      description: |
+        Let us know the things you would like to see improved.
+        Note that we will work on optimizing the solution once the pilot is over and we have usage data.
+
+  - type: textarea
+    id: others
+    validations:
+      required: true
+    attributes:
+      label: What other VAEs would you like to see if the pilot goes well?
+      description: |
+        Provide a list of the VAEs you would like to see in the future if the pilot goes well.
+
+  - type: textarea
+    id: additional-info
+    attributes:
+      label: Notify the members of the team
+      description: |
+        Tag the following folks when submitting this feedback: @hlky @sayakpaul
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index a0337eaaaac5..e4b2b45a4ecd 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -38,9 +38,9 @@ members/contributors who may be interested in your PR.

 Core library:

-- Schedulers: @yiyixuxu
-- Pipelines: @sayakpaul @yiyixuxu @DN6
-- Training examples: @sayakpaul
+- Schedulers: @yiyixuxu
+- Pipelines and pipeline callbacks: @yiyixuxu and @asomoza
+- Training examples: @sayakpaul
 - Docs: @stevhliu and @sayakpaul
 - JAX and MPS: @pcuenca
 - Audio: @sanchit-gandhi
@@ -48,7 +48,8 @@ Core library:

 Integrations:

-- deepspeed: HF Trainer/Accelerate: @pacman100
+- deepspeed: HF Trainer/Accelerate: @SunMarc
+- PEFT: @sayakpaul @BenjaminBossan

 HF projects:
diff --git a/.github/workflows/benchmark.yml b/.github/workflows/benchmark.yml
index 718c67731bd5..cc97e043c139 100644
--- a/.github/workflows/benchmark.yml
+++ b/.github/workflows/benchmark.yml
@@ -7,20 +7,25 @@ on:

 env:
   DIFFUSERS_IS_CI: yes
+  HF_HUB_ENABLE_HF_TRANSFER: 1
   HF_HOME: /mnt/cache
   OMP_NUM_THREADS: 8
   MKL_NUM_THREADS: 8
+  BASE_PATH: benchmark_outputs

 jobs:
-  torch_pipelines_cuda_benchmark_tests:
-    name: Torch Core Pipelines CUDA Benchmarking Tests
+  torch_models_cuda_benchmark_tests:
+    env:
+      SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_BENCHMARK }}
+    name: Torch Core Models CUDA Benchmarking Tests
     strategy:
       fail-fast: false
       max-parallel: 1
-    runs-on: [single-gpu, nvidia-gpu, a10, ci]
+    runs-on:
+      group: aws-g6e-4xlarge
     container:
       image: diffusers/diffusers-pytorch-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
+      options: --shm-size "16gb" --ipc host --gpus all
     steps:
       - name: Checkout diffusers
         uses: actions/checkout@v3
@@ -31,23 +36,54 @@ jobs:
           nvidia-smi
       - name: Install dependencies
         run: |
+          apt update
+          apt install -y libpq-dev postgresql-client
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install -e [quality,test]
-          python -m uv pip install pandas peft
+          python -m uv pip install -r benchmarks/requirements.txt
       - name: Environment
         run: |
           python utils/print_env.py
       - name: Diffusers Benchmarking
         env:
-          HUGGING_FACE_HUB_TOKEN: ${{ secrets.DIFFUSERS_BOT_TOKEN }}
-          BASE_PATH: benchmark_outputs
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
         run: |
-          export TOTAL_GPU_MEMORY=$(python -c "import torch; print(torch.cuda.get_device_properties(0).total_memory / (1024**3))")
-          cd benchmarks && mkdir ${BASE_PATH} && python run_all.py && python push_results.py
+          cd benchmarks && python run_all.py
+
+      - name: Push results to the Hub
+        env:
+          HF_TOKEN: ${{ secrets.DIFFUSERS_BOT_TOKEN }}
+        run: |
+          cd benchmarks && python push_results.py
+          mkdir $BASE_PATH && cp *.csv $BASE_PATH

       - name: Test suite reports artifacts
         if: ${{ always() }}
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v4
         with:
           name: benchmark_test_reports
-          path: benchmarks/benchmark_outputs
\ No newline at end of file
+          path: benchmarks/${{ 
env.BASE_PATH }} + + # TODO: enable this once the connection problem has been resolved. + - name: Update benchmarking results to DB + env: + PGDATABASE: metrics + PGHOST: ${{ secrets.DIFFUSERS_BENCHMARKS_PGHOST }} + PGUSER: transformers_benchmarks + PGPASSWORD: ${{ secrets.DIFFUSERS_BENCHMARKS_PGPASSWORD }} + BRANCH_NAME: ${{ github.head_ref || github.ref_name }} + run: | + git config --global --add safe.directory /__w/diffusers/diffusers + commit_id=$GITHUB_SHA + commit_msg=$(git show -s --format=%s "$commit_id" | cut -c1-70) + cd benchmarks && python populate_into_db.py "$BRANCH_NAME" "$commit_id" "$commit_msg" + + - name: Report success status + if: ${{ success() }} + run: | + pip install requests && python utils/notify_benchmarking_status.py --status=success + + - name: Report failure status + if: ${{ failure() }} + run: | + pip install requests && python utils/notify_benchmarking_status.py --status=failure \ No newline at end of file diff --git a/.github/workflows/build_docker_images.yml b/.github/workflows/build_docker_images.yml index 82ef885b240e..583853c6d649 100644 --- a/.github/workflows/build_docker_images.yml +++ b/.github/workflows/build_docker_images.yml @@ -20,26 +20,34 @@ env: jobs: test-build-docker-images: - runs-on: [ self-hosted, intel-cpu, 8-cpu, ci ] + runs-on: + group: aws-general-8-plus if: github.event_name == 'pull_request' steps: - name: Set up Docker Buildx uses: docker/setup-buildx-action@v1 - + - name: Check out code uses: actions/checkout@v3 - + - name: Find Changed Dockerfiles id: file_changes uses: jitterbit/get-changed-files@v1 with: - format: 'space-delimited' + format: "space-delimited" token: ${{ secrets.GITHUB_TOKEN }} - + - name: Build Changed Docker Images + env: + CHANGED_FILES: ${{ steps.file_changes.outputs.all }} run: | - CHANGED_FILES="${{ steps.file_changes.outputs.all }}" - for FILE in $CHANGED_FILES; do + echo "$CHANGED_FILES" + for FILE in $CHANGED_FILES; do + # skip anything that isn't still on disk + if [[ ! -f "$FILE" ]]; then + echo "Skipping removed file $FILE" + continue + fi if [[ "$FILE" == docker/*Dockerfile ]]; then DOCKER_PATH="${FILE%/Dockerfile}" DOCKER_TAG=$(basename "$DOCKER_PATH") @@ -50,9 +58,10 @@ jobs: if: steps.file_changes.outputs.all != '' build-and-push-docker-images: - runs-on: [ self-hosted, intel-cpu, 8-cpu, ci ] + runs-on: + group: aws-general-8-plus if: github.event_name != 'pull_request' - + permissions: contents: read packages: write @@ -63,12 +72,10 @@ jobs: image-name: - diffusers-pytorch-cpu - diffusers-pytorch-cuda - - diffusers-pytorch-compile-cuda + - diffusers-pytorch-cuda - diffusers-pytorch-xformers-cuda - - diffusers-flax-cpu - - diffusers-flax-tpu - - diffusers-onnxruntime-cpu - - diffusers-onnxruntime-cuda + - diffusers-pytorch-minimum-cuda + - diffusers-doc-builder steps: - name: Checkout repository @@ -90,24 +97,11 @@ jobs: - name: Post to a Slack channel id: slack - uses: slackapi/slack-github-action@6c661ce58804a1a20f6dc5fbee7f0381b469e001 + uses: huggingface/hf-workflows/.github/actions/post-slack@main with: # Slack channel id, channel name, or user id to post message. 
# See also: https://api.slack.com/methods/chat.postMessage#channels - channel-id: ${{ env.CI_SLACK_CHANNEL }} - # For posting a rich message using Block Kit - payload: | - { - "text": "${{ matrix.image-name }} Docker Image build result: ${{ job.status }}\n${{ github.event.head_commit.url }}", - "blocks": [ - { - "type": "section", - "text": { - "type": "mrkdwn", - "text": "${{ matrix.image-name }} Docker Image build result: ${{ job.status }}\n${{ github.event.head_commit.url }}" - } - } - ] - } - env: - SLACK_BOT_TOKEN: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }} + slack_channel: ${{ env.CI_SLACK_CHANNEL }} + title: "🤗 Results of the ${{ matrix.image-name }} Docker Image build" + status: ${{ job.status }} + slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }} diff --git a/.github/workflows/build_documentation.yml b/.github/workflows/build_documentation.yml index d9054928ed8d..6d4193e3cccc 100644 --- a/.github/workflows/build_documentation.yml +++ b/.github/workflows/build_documentation.yml @@ -21,7 +21,7 @@ jobs: package: diffusers notebook_folder: diffusers_doc languages: en ko zh ja pt - + custom_container: diffusers/diffusers-doc-builder secrets: token: ${{ secrets.HUGGINGFACE_PUSH }} hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }} diff --git a/.github/workflows/build_pr_documentation.yml b/.github/workflows/build_pr_documentation.yml index 8e19d8fafbe3..52e075733163 100644 --- a/.github/workflows/build_pr_documentation.yml +++ b/.github/workflows/build_pr_documentation.yml @@ -20,3 +20,4 @@ jobs: install_libgl1: true package: diffusers languages: en ko zh ja pt + custom_container: diffusers/diffusers-doc-builder diff --git a/.github/workflows/mirror_community_pipeline.yml b/.github/workflows/mirror_community_pipeline.yml new file mode 100644 index 000000000000..9cf573312b34 --- /dev/null +++ b/.github/workflows/mirror_community_pipeline.yml @@ -0,0 +1,102 @@ +name: Mirror Community Pipeline + +on: + # Push changes on the main branch + push: + branches: + - main + paths: + - 'examples/community/**.py' + + # And on tag creation (e.g. `v0.28.1`) + tags: + - '*' + + # Manual trigger with ref input + workflow_dispatch: + inputs: + ref: + description: "Either 'main' or a tag ref" + required: true + default: 'main' + +jobs: + mirror_community_pipeline: + env: + SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_COMMUNITY_MIRROR }} + + runs-on: ubuntu-22.04 + steps: + # Checkout to correct ref + # If workflow dispatch + # If ref is 'main', set: + # CHECKOUT_REF=refs/heads/main + # PATH_IN_REPO=main + # Else it must be a tag. Set: + # CHECKOUT_REF=refs/tags/{tag} + # PATH_IN_REPO={tag} + # If not workflow dispatch + # If ref is 'refs/heads/main' => set 'main' + # Else it must be a tag => set {tag} + - name: Set checkout_ref and path_in_repo + run: | + if [ "${{ github.event_name }}" == "workflow_dispatch" ]; then + if [ -z "${{ github.event.inputs.ref }}" ]; then + echo "Error: Missing ref input" + exit 1 + elif [ "${{ github.event.inputs.ref }}" == "main" ]; then + echo "CHECKOUT_REF=refs/heads/main" >> $GITHUB_ENV + echo "PATH_IN_REPO=main" >> $GITHUB_ENV + else + echo "CHECKOUT_REF=refs/tags/${{ github.event.inputs.ref }}" >> $GITHUB_ENV + echo "PATH_IN_REPO=${{ github.event.inputs.ref }}" >> $GITHUB_ENV + fi + elif [ "${{ github.ref }}" == "refs/heads/main" ]; then + echo "CHECKOUT_REF=${{ github.ref }}" >> $GITHUB_ENV + echo "PATH_IN_REPO=main" >> $GITHUB_ENV + else + # e.g. 
refs/tags/v0.28.1 -> v0.28.1 + echo "CHECKOUT_REF=${{ github.ref }}" >> $GITHUB_ENV + echo "PATH_IN_REPO=$(echo ${{ github.ref }} | sed 's/^refs\/tags\///')" >> $GITHUB_ENV + fi + - name: Print env vars + run: | + echo "CHECKOUT_REF: ${{ env.CHECKOUT_REF }}" + echo "PATH_IN_REPO: ${{ env.PATH_IN_REPO }}" + - uses: actions/checkout@v3 + with: + ref: ${{ env.CHECKOUT_REF }} + + # Setup + install dependencies + - name: Set up Python + uses: actions/setup-python@v4 + with: + python-version: "3.10" + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install --upgrade huggingface_hub + + # Check secret is set + - name: whoami + run: hf auth whoami + env: + HF_TOKEN: ${{ secrets.HF_TOKEN_MIRROR_COMMUNITY_PIPELINES }} + + # Push to HF! (under subfolder based on checkout ref) + # https://huggingface.co/datasets/diffusers/community-pipelines-mirror + - name: Mirror community pipeline to HF + run: hf upload diffusers/community-pipelines-mirror ./examples/community ${PATH_IN_REPO} --repo-type dataset + env: + PATH_IN_REPO: ${{ env.PATH_IN_REPO }} + HF_TOKEN: ${{ secrets.HF_TOKEN_MIRROR_COMMUNITY_PIPELINES }} + + - name: Report success status + if: ${{ success() }} + run: | + pip install requests && python utils/notify_community_pipelines_mirror.py --status=success + + - name: Report failure status + if: ${{ failure() }} + run: | + pip install requests && python utils/notify_community_pipelines_mirror.py --status=failure \ No newline at end of file diff --git a/.github/workflows/nightly_tests.yml b/.github/workflows/nightly_tests.yml index 2f73c66de829..479e5503eed2 100644 --- a/.github/workflows/nightly_tests.yml +++ b/.github/workflows/nightly_tests.yml @@ -7,19 +7,23 @@ on: env: DIFFUSERS_IS_CI: yes - HF_HOME: /mnt/cache + HF_HUB_ENABLE_HF_TRANSFER: 1 OMP_NUM_THREADS: 8 MKL_NUM_THREADS: 8 PYTEST_TIMEOUT: 600 RUN_SLOW: yes RUN_NIGHTLY: yes - PIPELINE_USAGE_CUTOFF: 5000 + PIPELINE_USAGE_CUTOFF: 0 SLACK_API_TOKEN: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }} + CONSOLIDATED_REPORT_PATH: consolidated_test_report.md jobs: setup_torch_cuda_pipeline_matrix: - name: Setup Torch Pipelines Matrix - runs-on: ubuntu-latest + name: Setup Torch Pipelines CUDA Slow Tests Matrix + runs-on: + group: aws-general-8-plus + container: + image: diffusers/diffusers-pytorch-cpu outputs: pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }} steps: @@ -27,13 +31,9 @@ jobs: uses: actions/checkout@v3 with: fetch-depth: 2 - - name: Set up Python - uses: actions/setup-python@v4 - with: - python-version: "3.8" - name: Install dependencies run: | - pip install -e . 
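+          # the [test] extra is assumed to cover the test-time deps that
+          # utils/fetch_torch_cuda_pipeline_test_matrix.py imports below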
+ pip install -e .[test] pip install huggingface_hub - name: Fetch Pipeline Matrix id: fetch_pipeline_matrix @@ -44,22 +44,24 @@ jobs: - name: Pipeline Tests Artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: test-pipelines.json path: reports run_nightly_tests_for_torch_pipelines: - name: Torch Pipelines CUDA Nightly Tests + name: Nightly Torch Pipelines CUDA Tests needs: setup_torch_cuda_pipeline_matrix strategy: fail-fast: false + max-parallel: 8 matrix: module: ${{ fromJson(needs.setup_torch_cuda_pipeline_matrix.outputs.pipeline_test_matrix) }} - runs-on: [single-gpu, nvidia-gpu, t4, ci] + runs-on: + group: aws-g4dn-2xlarge container: image: diffusers/diffusers-pytorch-cuda - options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0 + options: --shm-size "16gb" --ipc host --gpus all steps: - name: Checkout diffusers uses: actions/checkout@v3 @@ -67,61 +69,53 @@ jobs: fetch-depth: 2 - name: NVIDIA-SMI run: nvidia-smi - - name: Install dependencies run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" python -m uv pip install -e [quality,test] - python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git python -m uv pip install pytest-reportlog - - name: Environment run: | python utils/print_env.py - - - name: Nightly PyTorch CUDA checkpoint (pipelines) tests + - name: Pipeline CUDA Test env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms CUBLAS_WORKSPACE_CONFIG: :16:8 run: | python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ -s -v -k "not Flax and not Onnx" \ --make-reports=tests_pipeline_${{ matrix.module }}_cuda \ - --report-log=tests_pipeline_${{ matrix.module }}_cuda.log \ + --report-log=tests_pipeline_${{ matrix.module }}_cuda.log \ tests/pipelines/${{ matrix.module }} - - name: Failure short reports if: ${{ failure() }} run: | cat reports/tests_pipeline_${{ matrix.module }}_cuda_stats.txt cat reports/tests_pipeline_${{ matrix.module }}_cuda_failures_short.txt - - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: pipeline_${{ matrix.module }}_test_reports path: reports - - - name: Generate Report and Notify Channel - if: always() - run: | - pip install slack_sdk tabulate - python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY run_nightly_tests_for_other_torch_modules: - name: Torch Non-Pipelines CUDA Nightly Tests - runs-on: docker-gpu + name: Nightly Torch CUDA Tests + runs-on: + group: aws-g4dn-2xlarge container: image: diffusers/diffusers-pytorch-cuda - options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0 + options: --shm-size "16gb" --ipc host --gpus all defaults: run: shell: bash strategy: + fail-fast: false + max-parallel: 2 matrix: - module: [models, schedulers, others, examples] + module: [models, schedulers, lora, others, single_file, examples] steps: - name: Checkout diffusers uses: actions/checkout@v3 @@ -132,283 +126,490 @@ jobs: run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" python -m uv pip install -e [quality,test] - python -m uv pip install 
accelerate@git+https://github.com/huggingface/accelerate.git + python -m uv pip install peft@git+https://github.com/huggingface/peft.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git python -m uv pip install pytest-reportlog - - name: Environment run: python utils/print_env.py - name: Run nightly PyTorch CUDA tests for non-pipeline modules - if: ${{ matrix.module != 'examples'}} + if: ${{ matrix.module != 'examples'}} env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms CUBLAS_WORKSPACE_CONFIG: :16:8 run: | python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ -s -v -k "not Flax and not Onnx" \ --make-reports=tests_torch_${{ matrix.module }}_cuda \ - --report-log=tests_torch_${{ matrix.module }}_cuda.log \ + --report-log=tests_torch_${{ matrix.module }}_cuda.log \ tests/${{ matrix.module }} - name: Run nightly example tests with Torch if: ${{ matrix.module == 'examples' }} env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms CUBLAS_WORKSPACE_CONFIG: :16:8 run: | - python -m uv pip install peft@git+https://github.com/huggingface/peft.git python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ -s -v --make-reports=examples_torch_cuda \ - --report-log=examples_torch_cuda.log \ + --report-log=examples_torch_cuda.log \ examples/ - name: Failure short reports if: ${{ failure() }} run: | - cat reports/tests_torch_${{ matrix.module }}_cuda_stats.txt + cat reports/tests_torch_${{ matrix.module }}_cuda_stats.txt cat reports/tests_torch_${{ matrix.module }}_cuda_failures_short.txt - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: torch_${{ matrix.module }}_cuda_test_reports path: reports - - name: Generate Report and Notify Channel - if: always() - run: | - pip install slack_sdk tabulate - python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY + run_torch_compile_tests: + name: PyTorch Compile CUDA tests + + runs-on: + group: aws-g4dn-2xlarge - run_lora_nightly_tests: - name: Nightly LoRA Tests with PEFT and TORCH - runs-on: docker-gpu container: image: diffusers/diffusers-pytorch-cuda - options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0 - defaults: - run: - shell: bash + options: --gpus all --shm-size "16gb" --ipc host + steps: - name: Checkout diffusers uses: actions/checkout@v3 with: fetch-depth: 2 + - name: NVIDIA-SMI + run: | + nvidia-smi - name: Install dependencies run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m uv pip install -e [quality,test] - python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git - python -m uv pip install peft@git+https://github.com/huggingface/peft.git - python -m uv pip install pytest-reportlog - + python -m uv pip install -e [quality,test,training] - name: Environment - run: python utils/print_env.py - - - name: Run nightly LoRA tests with PEFT and Torch + run: | + python utils/print_env.py + - name: Run torch compile tests on GPU env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} - 
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms - CUBLAS_WORKSPACE_CONFIG: :16:8 + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + RUN_COMPILE: yes run: | - python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ - -s -v -k "not Flax and not Onnx" \ - --make-reports=tests_torch_lora_cuda \ - --report-log=tests_torch_lora_cuda.log \ - tests/lora - + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "compile" --make-reports=tests_torch_compile_cuda tests/ - name: Failure short reports if: ${{ failure() }} - run: | - cat reports/tests_torch_lora_cuda_stats.txt - cat reports/tests_torch_lora_cuda_failures_short.txt + run: cat reports/tests_torch_compile_cuda_failures_short.txt - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: - name: torch_lora_cuda_test_reports + name: torch_compile_test_reports path: reports - - name: Generate Report and Notify Channel - if: always() - run: | - pip install slack_sdk tabulate - python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY - - run_flax_tpu_tests: - name: Nightly Flax TPU Tests - runs-on: docker-tpu - if: github.event_name == 'schedule' - + run_big_gpu_torch_tests: + name: Torch tests on big GPU + strategy: + fail-fast: false + max-parallel: 2 + runs-on: + group: aws-g6e-xlarge-plus container: - image: diffusers/diffusers-flax-tpu - options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --privileged + image: diffusers/diffusers-pytorch-cuda + options: --shm-size "16gb" --ipc host --gpus all + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + - name: NVIDIA-SMI + run: nvidia-smi + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + python -m uv pip install peft@git+https://github.com/huggingface/peft.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git + python -m uv pip install pytest-reportlog + - name: Environment + run: | + python utils/print_env.py + - name: Selected Torch CUDA Test on big GPU + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms + CUBLAS_WORKSPACE_CONFIG: :16:8 + BIG_GPU_MEMORY: 40 + run: | + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ + -m "big_accelerator" \ + --make-reports=tests_big_gpu_torch_cuda \ + --report-log=tests_big_gpu_torch_cuda.log \ + tests/ + - name: Failure short reports + if: ${{ failure() }} + run: | + cat reports/tests_big_gpu_torch_cuda_stats.txt + cat reports/tests_big_gpu_torch_cuda_failures_short.txt + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: torch_cuda_big_gpu_test_reports + path: reports + + torch_minimum_version_cuda_tests: + name: Torch Minimum Version CUDA Tests + runs-on: + group: aws-g4dn-2xlarge + container: + image: diffusers/diffusers-pytorch-minimum-cuda + options: --shm-size "16gb" --ipc host --gpus all defaults: run: shell: bash steps: - - name: Checkout diffusers - uses: actions/checkout@v3 - with: - fetch-depth: 2 - - - name: Install dependencies - run: | - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m uv pip install -e [quality,test] - python -m uv pip install 
accelerate@git+https://github.com/huggingface/accelerate.git - python -m uv pip install pytest-reportlog + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 - - name: Environment - run: python utils/print_env.py + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + python -m uv pip install peft@git+https://github.com/huggingface/peft.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git - - name: Run nightly Flax TPU tests - env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} - run: | - python -m pytest -n 0 \ - -s -v -k "Flax" \ - --make-reports=tests_flax_tpu \ - --report-log=tests_flax_tpu.log \ - tests/ + - name: Environment + run: | + python utils/print_env.py - - name: Failure short reports - if: ${{ failure() }} - run: | - cat reports/tests_flax_tpu_stats.txt - cat reports/tests_flax_tpu_failures_short.txt + - name: Run PyTorch CUDA tests + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms + CUBLAS_WORKSPACE_CONFIG: :16:8 + run: | + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ + -s -v -k "not Flax and not Onnx" \ + --make-reports=tests_torch_minimum_version_cuda \ + tests/models/test_modeling_common.py \ + tests/pipelines/test_pipelines_common.py \ + tests/pipelines/test_pipeline_utils.py \ + tests/pipelines/test_pipelines.py \ + tests/pipelines/test_pipelines_auto.py \ + tests/schedulers/test_schedulers.py \ + tests/others - - name: Test suite reports artifacts - if: ${{ always() }} - uses: actions/upload-artifact@v2 - with: - name: flax_tpu_test_reports - path: reports + - name: Failure short reports + if: ${{ failure() }} + run: | + cat reports/tests_torch_minimum_version_cuda_stats.txt + cat reports/tests_torch_minimum_version_cuda_failures_short.txt - - name: Generate Report and Notify Channel - if: always() - run: | - pip install slack_sdk tabulate - python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: torch_minimum_version_cuda_test_reports + path: reports - run_nightly_onnx_tests: - name: Nightly ONNXRuntime CUDA tests on Ubuntu - runs-on: docker-gpu + run_nightly_quantization_tests: + name: Torch quantization nightly tests + strategy: + fail-fast: false + max-parallel: 2 + matrix: + config: + - backend: "bitsandbytes" + test_location: "bnb" + additional_deps: ["peft"] + - backend: "gguf" + test_location: "gguf" + additional_deps: ["peft", "kernels"] + - backend: "torchao" + test_location: "torchao" + additional_deps: [] + - backend: "optimum_quanto" + test_location: "quanto" + additional_deps: [] + - backend: "nvidia_modelopt" + test_location: "modelopt" + additional_deps: [] + runs-on: + group: aws-g6e-xlarge-plus container: - image: diffusers/diffusers-onnxruntime-cuda - options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ - - steps: - - name: Checkout diffusers - uses: actions/checkout@v3 - with: - fetch-depth: 2 - - - name: NVIDIA-SMI - run: nvidia-smi - - - name: Install dependencies - run: | - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m uv pip install -e [quality,test] - python -m uv pip install 
accelerate@git+https://github.com/huggingface/accelerate.git - python -m uv pip install pytest-reportlog - - - name: Environment - run: python utils/print_env.py - - - name: Run nightly ONNXRuntime CUDA tests - env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} - run: | - python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ - -s -v -k "Onnx" \ - --make-reports=tests_onnx_cuda \ - --report-log=tests_onnx_cuda.log \ - tests/ - - - name: Failure short reports - if: ${{ failure() }} - run: | - cat reports/tests_onnx_cuda_stats.txt - cat reports/tests_onnx_cuda_failures_short.txt - - - name: Test suite reports artifacts - if: ${{ always() }} - uses: actions/upload-artifact@v2 - with: - name: ${{ matrix.config.report }}_test_reports - path: reports - - - name: Generate Report and Notify Channel - if: always() - run: | - pip install slack_sdk tabulate - python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY - - run_nightly_tests_apple_m1: - name: Nightly PyTorch MPS tests on MacOS - runs-on: [ self-hosted, apple-m1 ] - if: github.event_name == 'schedule' - + image: diffusers/diffusers-pytorch-cuda + options: --shm-size "20gb" --ipc host --gpus all steps: - name: Checkout diffusers uses: actions/checkout@v3 with: fetch-depth: 2 - - - name: Clean checkout - shell: arch -arch arm64 bash {0} + - name: NVIDIA-SMI + run: nvidia-smi + - name: Install dependencies run: | - git clean -fxd - - - name: Setup miniconda - uses: ./.github/actions/setup-miniconda + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + python -m uv pip install -U ${{ matrix.config.backend }} + if [ "${{ join(matrix.config.additional_deps, ' ') }}" != "" ]; then + python -m uv pip install ${{ join(matrix.config.additional_deps, ' ') }} + fi + python -m uv pip install pytest-reportlog + - name: Environment + run: | + python utils/print_env.py + - name: ${{ matrix.config.backend }} quantization tests on GPU + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms + CUBLAS_WORKSPACE_CONFIG: :16:8 + BIG_GPU_MEMORY: 40 + run: | + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ + --make-reports=tests_${{ matrix.config.backend }}_torch_cuda \ + --report-log=tests_${{ matrix.config.backend }}_torch_cuda.log \ + tests/quantization/${{ matrix.config.test_location }} + - name: Failure short reports + if: ${{ failure() }} + run: | + cat reports/tests_${{ matrix.config.backend }}_torch_cuda_stats.txt + cat reports/tests_${{ matrix.config.backend }}_torch_cuda_failures_short.txt + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 with: - python-version: 3.9 - + name: torch_cuda_${{ matrix.config.backend }}_reports + path: reports + + run_nightly_pipeline_level_quantization_tests: + name: Torch quantization nightly tests + strategy: + fail-fast: false + max-parallel: 2 + runs-on: + group: aws-g6e-xlarge-plus + container: + image: diffusers/diffusers-pytorch-cuda + options: --shm-size "20gb" --ipc host --gpus all + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + - name: NVIDIA-SMI + run: nvidia-smi - name: Install dependencies - shell: arch -arch arm64 bash {0} run: | - ${CONDA_RUN} python -m pip install --upgrade pip uv - ${CONDA_RUN} python -m uv pip install -e [quality,test] - ${CONDA_RUN} python -m uv pip install torch torchvision torchaudio 
--extra-index-url https://download.pytorch.org/whl/cpu - ${CONDA_RUN} python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate - ${CONDA_RUN} python -m uv pip install pytest-reportlog - + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + python -m uv pip install -U bitsandbytes optimum_quanto + python -m uv pip install pytest-reportlog - name: Environment - shell: arch -arch arm64 bash {0} run: | - ${CONDA_RUN} python utils/print_env.py - - - name: Run nightly PyTorch tests on M1 (MPS) - shell: arch -arch arm64 bash {0} + python utils/print_env.py + - name: Pipeline-level quantization tests on GPU env: - HF_HOME: /System/Volumes/Data/mnt/cache - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms + CUBLAS_WORKSPACE_CONFIG: :16:8 + BIG_GPU_MEMORY: 40 run: | - ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps \ - --report-log=tests_torch_mps.log \ - tests/ - + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ + --make-reports=tests_pipeline_level_quant_torch_cuda \ + --report-log=tests_pipeline_level_quant_torch_cuda.log \ + tests/quantization/test_pipeline_level_quantization.py - name: Failure short reports if: ${{ failure() }} - run: cat reports/tests_torch_mps_failures_short.txt - + run: | + cat reports/tests_pipeline_level_quant_torch_cuda_stats.txt + cat reports/tests_pipeline_level_quant_torch_cuda_failures_short.txt - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: - name: torch_mps_test_reports + name: torch_cuda_pipeline_level_quant_reports path: reports - - name: Generate Report and Notify Channel - if: always() + generate_consolidated_report: + name: Generate Consolidated Test Report + needs: [ + run_nightly_tests_for_torch_pipelines, + run_nightly_tests_for_other_torch_modules, + run_torch_compile_tests, + run_big_gpu_torch_tests, + run_nightly_quantization_tests, + run_nightly_pipeline_level_quantization_tests, + # run_nightly_onnx_tests, + torch_minimum_version_cuda_tests, + # run_flax_tpu_tests + ] + if: always() + runs-on: + group: aws-general-8-plus + container: + image: diffusers/diffusers-pytorch-cpu + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: Create reports directory + run: mkdir -p combined_reports + + - name: Download all test reports + uses: actions/download-artifact@v4 + with: + path: artifacts + + - name: Prepare reports + run: | + # Move all report files to a single directory for processing + find artifacts -name "*.txt" -exec cp {} combined_reports/ \; + + - name: Install dependencies run: | + pip install -e .[test] pip install slack_sdk tabulate - python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY + + - name: Generate consolidated report + run: | + python utils/consolidated_test_report.py \ + --reports_dir combined_reports \ + --output_file $CONSOLIDATED_REPORT_PATH \ + --slack_channel_name diffusers-ci-nightly + + - name: Show consolidated report + run: | + cat $CONSOLIDATED_REPORT_PATH >> $GITHUB_STEP_SUMMARY + + - name: Upload consolidated report + uses: actions/upload-artifact@v4 + with: + name: consolidated_test_report + path: ${{ env.CONSOLIDATED_REPORT_PATH }} + +# M1 runner currently not well supported +# TODO: (Dhruv) add 
these back when we set up better testing for Apple Silicon
+# run_nightly_tests_apple_m1:
+#   name: Nightly PyTorch MPS tests on MacOS
+#   runs-on: [ self-hosted, apple-m1 ]
+#   if: github.event_name == 'schedule'
+#
+#   steps:
+#     - name: Checkout diffusers
+#       uses: actions/checkout@v3
+#       with:
+#         fetch-depth: 2
+#
+#     - name: Clean checkout
+#       shell: arch -arch arm64 bash {0}
+#       run: |
+#         git clean -fxd
+#     - name: Setup miniconda
+#       uses: ./.github/actions/setup-miniconda
+#       with:
+#         python-version: 3.9
+#
+#     - name: Install dependencies
+#       shell: arch -arch arm64 bash {0}
+#       run: |
+#         ${CONDA_RUN} python -m pip install --upgrade pip uv
+#         ${CONDA_RUN} python -m uv pip install -e [quality,test]
+#         ${CONDA_RUN} python -m uv pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
+#         ${CONDA_RUN} python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate
+#         ${CONDA_RUN} python -m uv pip install pytest-reportlog
+#     - name: Environment
+#       shell: arch -arch arm64 bash {0}
+#       run: |
+#         ${CONDA_RUN} python utils/print_env.py
+#     - name: Run nightly PyTorch tests on M1 (MPS)
+#       shell: arch -arch arm64 bash {0}
+#       env:
+#         HF_HOME: /System/Volumes/Data/mnt/cache
+#         HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
+#       run: |
+#         ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps \
+#           --report-log=tests_torch_mps.log \
+#           tests/
+#     - name: Failure short reports
+#       if: ${{ failure() }}
+#       run: cat reports/tests_torch_mps_failures_short.txt
+#
+#     - name: Test suite reports artifacts
+#       if: ${{ always() }}
+#       uses: actions/upload-artifact@v4
+#       with:
+#         name: torch_mps_test_reports
+#         path: reports
+#
+#     - name: Generate Report and Notify Channel
+#       if: always()
+#       run: |
+#         pip install slack_sdk tabulate
+#         python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
diff --git a/.github/workflows/notify_slack_about_release.yml b/.github/workflows/notify_slack_about_release.yml
index 95f2d0f917af..612ad4e24503 100644
--- a/.github/workflows/notify_slack_about_release.yml
+++ b/.github/workflows/notify_slack_about_release.yml
@@ -7,16 +7,16 @@ on:

 jobs:
   build:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04

     steps:
       - uses: actions/checkout@v3
-
+
       - name: Setup Python
         uses: actions/setup-python@v4
         with:
           python-version: '3.8'
-
+
       - name: Notify Slack about the release
         env:
           SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
diff --git a/.github/workflows/pr_dependency_test.yml b/.github/workflows/pr_dependency_test.yml
index f21f09ef875e..d9350c09ac42 100644
--- a/.github/workflows/pr_dependency_test.yml
+++ b/.github/workflows/pr_dependency_test.yml
@@ -16,7 +16,7 @@ concurrency:

 jobs:
   check_dependencies:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     steps:
       - uses: actions/checkout@v3
       - name: Set up Python
@@ -33,4 +33,3 @@ jobs:
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           pytest tests/others/test_dependencies.py
-
\ No newline at end of file
diff --git a/.github/workflows/pr_flax_dependency_test.yml b/.github/workflows/pr_flax_dependency_test.yml
deleted file mode 100644
index bbad72929917..000000000000
--- a/.github/workflows/pr_flax_dependency_test.yml
+++ /dev/null
@@ -1,38 +0,0 @@
-name: Run Flax dependency tests
-
-on:
-  pull_request:
-    branches:
-      - main
-    paths:
-      - "src/diffusers/**.py"
-  push:
-    branches:
-      - main
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
-  cancel-in-progress: true
-
-jobs:
-  check_flax_dependencies:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python
-        uses: actions/setup-python@v4
-        with:
-          python-version: "3.8"
-      - name: Install dependencies
-        run: |
-          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
-          python -m pip install --upgrade pip uv
-          python -m uv pip install -e .
- python -m uv pip install "jax[cpu]>=0.2.16,!=0.3.2" - python -m uv pip install "flax>=0.4.1" - python -m uv pip install "jaxlib>=0.1.65" - python -m uv pip install pytest - - name: Check for soft dependencies - run: | - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - pytest tests/others/test_dependencies.py diff --git a/.github/workflows/pr_test_peft_backend.yml b/.github/workflows/pr_modular_tests.yml similarity index 53% rename from .github/workflows/pr_test_peft_backend.yml rename to .github/workflows/pr_modular_tests.yml index ca91535f0274..75258771e4dc 100644 --- a/.github/workflows/pr_test_peft_backend.yml +++ b/.github/workflows/pr_modular_tests.yml @@ -1,12 +1,24 @@ -name: Fast tests for PRs - PEFT backend +name: Fast PR tests for Modular on: pull_request: - branches: - - main + branches: [main] paths: - - "src/diffusers/**.py" - - "tests/**.py" + - "src/diffusers/modular_pipelines/**.py" + - "src/diffusers/models/modeling_utils.py" + - "src/diffusers/models/model_loading_utils.py" + - "src/diffusers/pipelines/pipeline_utils.py" + - "src/diffusers/pipeline_loading_utils.py" + - "src/diffusers/loaders/lora_base.py" + - "src/diffusers/loaders/lora_pipeline.py" + - "src/diffusers/loaders/peft.py" + - "tests/modular_pipelines/**.py" + - ".github/**.yml" + - "utils/**.py" + - "setup.py" + push: + branches: + - ci-* concurrency: group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }} @@ -14,19 +26,20 @@ concurrency: env: DIFFUSERS_IS_CI: yes + HF_HUB_ENABLE_HF_TRANSFER: 1 OMP_NUM_THREADS: 4 MKL_NUM_THREADS: 4 PYTEST_TIMEOUT: 60 jobs: check_code_quality: - runs-on: ubuntu-latest + runs-on: ubuntu-22.04 steps: - uses: actions/checkout@v3 - name: Set up Python uses: actions/setup-python@v4 with: - python-version: "3.8" + python-version: "3.10" - name: Install dependencies run: | python -m pip install --upgrade pip @@ -40,13 +53,13 @@ jobs: check_repository_consistency: needs: check_code_quality - runs-on: ubuntu-latest + runs-on: ubuntu-22.04 steps: - uses: actions/checkout@v3 - name: Set up Python uses: actions/setup-python@v4 with: - python-version: "3.8" + python-version: "3.10" - name: Install dependencies run: | python -m pip install --upgrade pip @@ -55,6 +68,7 @@ jobs: run: | python utils/check_copies.py python utils/check_dummies.py + python utils/check_support_list.py make deps_table_check_updated - name: Check if failure if: ${{ failure() }} @@ -66,15 +80,20 @@ jobs: strategy: fail-fast: false matrix: - lib-versions: ["main", "latest"] + config: + - name: Fast PyTorch Modular Pipeline CPU tests + framework: pytorch_pipelines + runner: aws-highmemory-32-plus + image: diffusers/diffusers-pytorch-cpu + report: torch_cpu_modular_pipelines + name: ${{ matrix.config.name }} - name: LoRA - ${{ matrix.lib-versions }} - - runs-on: [ self-hosted, intel-cpu, 8-cpu, ci ] + runs-on: + group: ${{ matrix.config.runner }} container: - image: diffusers/diffusers-pytorch-cpu + image: ${{ matrix.config.image }} options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ defaults: @@ -91,23 +110,32 @@ jobs: run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" python -m uv pip install -e [quality,test] - if [ "${{ matrix.lib-versions }}" == "main" ]; then - python -m uv pip install -U peft@git+https://github.com/huggingface/peft.git - python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git - python -m uv pip install -U 
accelerate@git+https://github.com/huggingface/accelerate.git - else - python -m uv pip install -U peft transformers accelerate - fi + pip uninstall transformers -y && pip uninstall huggingface_hub -y && python -m uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps - name: Environment run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" python utils/print_env.py - - name: Run fast PyTorch LoRA CPU tests with PEFT backend + - name: Run fast PyTorch Pipeline CPU tests + if: ${{ matrix.config.framework == 'pytorch_pipelines' }} run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \ - -s -v \ + python -m pytest -n 8 --max-worker-restart=0 --dist=loadfile \ + -s -v -k "not Flax and not Onnx" \ --make-reports=tests_${{ matrix.config.report }} \ - tests/lora/ + tests/modular_pipelines + + - name: Failure short reports + if: ${{ failure() }} + run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt + + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: pr_${{ matrix.config.framework }}_${{ matrix.config.report }}_test_reports + path: reports + + diff --git a/.github/workflows/pr_style_bot.yml b/.github/workflows/pr_style_bot.yml new file mode 100644 index 000000000000..c60004720783 --- /dev/null +++ b/.github/workflows/pr_style_bot.yml @@ -0,0 +1,17 @@ +name: PR Style Bot + +on: + issue_comment: + types: [created] + +permissions: + contents: write + pull-requests: write + +jobs: + style: + uses: huggingface/huggingface_hub/.github/workflows/style-bot-action.yml@main + with: + python_quality_dependencies: "[quality]" + secrets: + bot_token: ${{ secrets.HF_STYLE_BOT_ACTION }} \ No newline at end of file diff --git a/.github/workflows/pr_test_fetcher.yml b/.github/workflows/pr_test_fetcher.yml index 4dbb118c6092..b032bb842786 100644 --- a/.github/workflows/pr_test_fetcher.yml +++ b/.github/workflows/pr_test_fetcher.yml @@ -15,7 +15,8 @@ concurrency: jobs: setup_pr_tests: name: Setup PR Tests - runs-on: docker-cpu + runs-on: + group: aws-general-8-plus container: image: diffusers/diffusers-pytorch-cpu options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ @@ -73,7 +74,8 @@ jobs: max-parallel: 2 matrix: modules: ${{ fromJson(needs.setup_pr_tests.outputs.matrix) }} - runs-on: docker-cpu + runs-on: + group: aws-general-8-plus container: image: diffusers/diffusers-pytorch-cpu options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ @@ -123,12 +125,13 @@ jobs: config: - name: Hub tests for models, schedulers, and pipelines framework: hub_tests_pytorch - runner: docker-cpu + runner: aws-general-8-plus image: diffusers/diffusers-pytorch-cpu report: torch_hub name: ${{ matrix.config.name }} - runs-on: ${{ matrix.config.runner }} + runs-on: + group: ${{ matrix.config.runner }} container: image: ${{ matrix.config.image }} options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ @@ -168,7 +171,7 @@ jobs: - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: pr_${{ matrix.config.report }}_test_reports path: reports diff --git a/.github/workflows/pr_tests.yml b/.github/workflows/pr_tests.yml index 
aa4afebb9cc1..1543b264b0cc 100644 --- a/.github/workflows/pr_tests.yml +++ b/.github/workflows/pr_tests.yml @@ -2,8 +2,7 @@ name: Fast tests for PRs on: pull_request: - branches: - - main + branches: [main] paths: - "src/diffusers/**.py" - "benchmarks/**.py" @@ -12,6 +11,7 @@ on: - "tests/**.py" - ".github/**.yml" - "utils/**.py" + - "setup.py" push: branches: - ci-* @@ -22,13 +22,14 @@ concurrency: env: DIFFUSERS_IS_CI: yes + HF_HUB_ENABLE_HF_TRANSFER: 1 OMP_NUM_THREADS: 4 MKL_NUM_THREADS: 4 PYTEST_TIMEOUT: 60 jobs: check_code_quality: - runs-on: ubuntu-latest + runs-on: ubuntu-22.04 steps: - uses: actions/checkout@v3 - name: Set up Python @@ -48,7 +49,7 @@ jobs: check_repository_consistency: needs: check_code_quality - runs-on: ubuntu-latest + runs-on: ubuntu-22.04 steps: - uses: actions/checkout@v3 - name: Set up Python @@ -63,6 +64,7 @@ jobs: run: | python utils/check_copies.py python utils/check_dummies.py + python utils/check_support_list.py make deps_table_check_updated - name: Check if failure if: ${{ failure() }} @@ -77,28 +79,24 @@ jobs: config: - name: Fast PyTorch Pipeline CPU tests framework: pytorch_pipelines - runner: [ self-hosted, intel-cpu, 32-cpu, 256-ram, ci ] + runner: aws-highmemory-32-plus image: diffusers/diffusers-pytorch-cpu report: torch_cpu_pipelines - name: Fast PyTorch Models & Schedulers CPU tests framework: pytorch_models - runner: [ self-hosted, intel-cpu, 8-cpu, ci ] + runner: aws-general-8-plus image: diffusers/diffusers-pytorch-cpu report: torch_cpu_models_schedulers - - name: Fast Flax CPU tests - framework: flax - runner: [ self-hosted, intel-cpu, 8-cpu, ci ] - image: diffusers/diffusers-flax-cpu - report: flax_cpu - name: PyTorch Example CPU tests framework: pytorch_examples - runner: [ self-hosted, intel-cpu, 8-cpu, ci ] + runner: aws-general-8-plus image: diffusers/diffusers-pytorch-cpu report: torch_example_cpu name: ${{ matrix.config.name }} - runs-on: ${{ matrix.config.runner }} + runs-on: + group: ${{ matrix.config.runner }} container: image: ${{ matrix.config.image }} @@ -118,7 +116,8 @@ jobs: run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" python -m uv pip install -e [quality,test] - python -m uv pip install accelerate + pip uninstall transformers -y && pip uninstall huggingface_hub -y && python -m uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps - name: Environment run: | @@ -143,21 +142,11 @@ jobs: --make-reports=tests_${{ matrix.config.report }} \ tests/models tests/schedulers tests/others - - name: Run fast Flax TPU tests - if: ${{ matrix.config.framework == 'flax' }} - run: | - apt-get update && apt-get install libsndfile1-dev libgl1 -y - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \ - -s -v -k "Flax" \ - --make-reports=tests_${{ matrix.config.report }} \ - tests - - name: Run example PyTorch CPU tests if: ${{ matrix.config.framework == 'pytorch_examples' }} run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m uv pip install peft + python -m uv pip install peft timm python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \ --make-reports=tests_${{ matrix.config.report }} \ examples @@ -168,9 +157,9 @@ jobs: - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 
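+        # upload-artifact v2 is deprecated on GitHub Actions; v4 is the supported major version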
+ uses: actions/upload-artifact@v4 with: - name: pr_${{ matrix.config.report }}_test_reports + name: pr_${{ matrix.config.framework }}_${{ matrix.config.report }}_test_reports path: reports run_staging_tests: @@ -181,7 +170,8 @@ jobs: config: - name: Hub tests for models, schedulers, and pipelines framework: hub_tests_pytorch - runner: [ self-hosted, intel-cpu, 8-cpu, ci ] + runner: + group: aws-general-8-plus image: diffusers/diffusers-pytorch-cpu report: torch_hub @@ -228,7 +218,72 @@ jobs: - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: pr_${{ matrix.config.report }}_test_reports path: reports + + run_lora_tests: + needs: [check_code_quality, check_repository_consistency] + strategy: + fail-fast: false + + name: LoRA tests with PEFT main + + runs-on: + group: aws-general-8-plus + + container: + image: diffusers/diffusers-pytorch-cpu + options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ + + defaults: + run: + shell: bash + + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + # TODO (sayakpaul, DN6): revisit `--no-deps` + python -m pip install -U peft@git+https://github.com/huggingface/peft.git --no-deps + python -m uv pip install -U tokenizers + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps + pip uninstall transformers -y && pip uninstall huggingface_hub -y && python -m uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git + + - name: Environment + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python utils/print_env.py + + - name: Run fast PyTorch LoRA tests with PEFT + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \ + -s -v \ + --make-reports=tests_peft_main \ + tests/lora/ + python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \ + -s -v \ + --make-reports=tests_models_lora_peft_main \ + tests/models/ -k "lora" + + - name: Failure short reports + if: ${{ failure() }} + run: | + cat reports/tests_peft_main_failures_short.txt + cat reports/tests_models_lora_peft_main_failures_short.txt + + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: pr_main_test_reports + path: reports + diff --git a/.github/workflows/pr_tests_gpu.yml b/.github/workflows/pr_tests_gpu.yml new file mode 100644 index 000000000000..89b6abe20d1e --- /dev/null +++ b/.github/workflows/pr_tests_gpu.yml @@ -0,0 +1,297 @@ +name: Fast GPU Tests on PR + +on: + pull_request: + branches: main + paths: + - "src/diffusers/models/modeling_utils.py" + - "src/diffusers/models/model_loading_utils.py" + - "src/diffusers/pipelines/pipeline_utils.py" + - "src/diffusers/pipeline_loading_utils.py" + - "src/diffusers/loaders/lora_base.py" + - "src/diffusers/loaders/lora_pipeline.py" + - "src/diffusers/loaders/peft.py" + - "tests/pipelines/test_pipelines_common.py" + - "tests/models/test_modeling_common.py" + - "examples/**/*.py" + workflow_dispatch: + +concurrency: + group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }} + cancel-in-progress: true + +env: + DIFFUSERS_IS_CI: yes + 
OMP_NUM_THREADS: 8 + MKL_NUM_THREADS: 8 + HF_HUB_ENABLE_HF_TRANSFER: 1 + PYTEST_TIMEOUT: 600 + PIPELINE_USAGE_CUTOFF: 1000000000 # set high cutoff so that only always-test pipelines run + +jobs: + check_code_quality: + runs-on: ubuntu-22.04 + steps: + - uses: actions/checkout@v3 + - name: Set up Python + uses: actions/setup-python@v4 + with: + python-version: "3.8" + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install .[quality] + - name: Check quality + run: make quality + - name: Check if failure + if: ${{ failure() }} + run: | + echo "Quality check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and run 'make style && make quality'" >> $GITHUB_STEP_SUMMARY + + check_repository_consistency: + needs: check_code_quality + runs-on: ubuntu-22.04 + steps: + - uses: actions/checkout@v3 + - name: Set up Python + uses: actions/setup-python@v4 + with: + python-version: "3.8" + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install .[quality] + - name: Check repo consistency + run: | + python utils/check_copies.py + python utils/check_dummies.py + python utils/check_support_list.py + make deps_table_check_updated + - name: Check if failure + if: ${{ failure() }} + run: | + echo "Repo consistency check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and run 'make fix-copies'" >> $GITHUB_STEP_SUMMARY + + setup_torch_cuda_pipeline_matrix: + needs: [check_code_quality, check_repository_consistency] + name: Setup Torch Pipelines CUDA Slow Tests Matrix + runs-on: + group: aws-general-8-plus + container: + image: diffusers/diffusers-pytorch-cpu + outputs: + pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }} + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + - name: Environment + run: | + python utils/print_env.py + - name: Fetch Pipeline Matrix + id: fetch_pipeline_matrix + run: | + matrix=$(python utils/fetch_torch_cuda_pipeline_test_matrix.py) + echo $matrix + echo "pipeline_test_matrix=$matrix" >> $GITHUB_OUTPUT + - name: Pipeline Tests Artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: test-pipelines.json + path: reports + + torch_pipelines_cuda_tests: + name: Torch Pipelines CUDA Tests + needs: setup_torch_cuda_pipeline_matrix + strategy: + fail-fast: false + max-parallel: 8 + matrix: + module: ${{ fromJson(needs.setup_torch_cuda_pipeline_matrix.outputs.pipeline_test_matrix) }} + runs-on: + group: aws-g4dn-2xlarge + container: + image: diffusers/diffusers-pytorch-cuda + options: --shm-size "16gb" --ipc host --gpus all + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: NVIDIA-SMI + run: | + nvidia-smi + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git + pip uninstall transformers -y && pip uninstall huggingface_hub -y && python -m uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git + + - name: Environment + run: | + python 
utils/print_env.py + - name: Extract tests + id: extract_tests + run: | + pattern=$(python utils/extract_tests_from_mixin.py --type pipeline) + echo "$pattern" > /tmp/test_pattern.txt + echo "pattern_file=/tmp/test_pattern.txt" >> $GITHUB_OUTPUT + + - name: PyTorch CUDA checkpoint tests on Ubuntu + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms + CUBLAS_WORKSPACE_CONFIG: :16:8 + run: | + if [ "${{ matrix.module }}" = "ip_adapters" ]; then + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ + -s -v -k "not Flax and not Onnx" \ + --make-reports=tests_pipeline_${{ matrix.module }}_cuda \ + tests/pipelines/${{ matrix.module }} + else + pattern=$(cat ${{ steps.extract_tests.outputs.pattern_file }}) + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ + -s -v -k "not Flax and not Onnx and $pattern" \ + --make-reports=tests_pipeline_${{ matrix.module }}_cuda \ + tests/pipelines/${{ matrix.module }} + fi + + - name: Failure short reports + if: ${{ failure() }} + run: | + cat reports/tests_pipeline_${{ matrix.module }}_cuda_stats.txt + cat reports/tests_pipeline_${{ matrix.module }}_cuda_failures_short.txt + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: pipeline_${{ matrix.module }}_test_reports + path: reports + + torch_cuda_tests: + name: Torch CUDA Tests + needs: [check_code_quality, check_repository_consistency] + runs-on: + group: aws-g4dn-2xlarge + container: + image: diffusers/diffusers-pytorch-cuda + options: --shm-size "16gb" --ipc host --gpus all + defaults: + run: + shell: bash + strategy: + fail-fast: false + max-parallel: 4 + matrix: + module: [models, schedulers, lora, others] + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + python -m uv pip install peft@git+https://github.com/huggingface/peft.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git + pip uninstall transformers -y && pip uninstall huggingface_hub -y && python -m uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git + + - name: Environment + run: | + python utils/print_env.py + + - name: Extract tests + id: extract_tests + run: | + pattern=$(python utils/extract_tests_from_mixin.py --type ${{ matrix.module }}) + echo "$pattern" > /tmp/test_pattern.txt + echo "pattern_file=/tmp/test_pattern.txt" >> $GITHUB_OUTPUT + + - name: Run PyTorch CUDA tests + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms + CUBLAS_WORKSPACE_CONFIG: :16:8 + run: | + pattern=$(cat ${{ steps.extract_tests.outputs.pattern_file }}) + if [ -z "$pattern" ]; then + python -m pytest -n 1 -sv --max-worker-restart=0 --dist=loadfile -k "not Flax and not Onnx" tests/${{ matrix.module }} \ + --make-reports=tests_torch_cuda_${{ matrix.module }} + else + python -m pytest -n 1 -sv --max-worker-restart=0 --dist=loadfile -k "not Flax and not Onnx and $pattern" tests/${{ matrix.module }} \ + --make-reports=tests_torch_cuda_${{ matrix.module }} + fi + + - name: Failure short reports + if: ${{ failure() }} + run: | + cat 
reports/tests_torch_cuda_${{ matrix.module }}_stats.txt + cat reports/tests_torch_cuda_${{ matrix.module }}_failures_short.txt + + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: torch_cuda_test_reports_${{ matrix.module }} + path: reports + + run_examples_tests: + name: Examples PyTorch CUDA tests on Ubuntu + needs: [check_code_quality, check_repository_consistency] + runs-on: + group: aws-g4dn-2xlarge + + container: + image: diffusers/diffusers-pytorch-cuda + options: --gpus all --shm-size "16gb" --ipc host + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: NVIDIA-SMI + run: | + nvidia-smi + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + pip uninstall transformers -y && pip uninstall huggingface_hub -y && python -m uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git + python -m uv pip install -e [quality,test,training] + + - name: Environment + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python utils/print_env.py + + - name: Run example tests on GPU + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install timm + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=examples_torch_cuda examples/ + + - name: Failure short reports + if: ${{ failure() }} + run: | + cat reports/examples_torch_cuda_stats.txt + cat reports/examples_torch_cuda_failures_short.txt + + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: examples_test_reports + path: reports + diff --git a/.github/workflows/pr_torch_dependency_test.yml b/.github/workflows/pr_torch_dependency_test.yml index 16a7724fe744..c39d5eca2d9a 100644 --- a/.github/workflows/pr_torch_dependency_test.yml +++ b/.github/workflows/pr_torch_dependency_test.yml @@ -16,7 +16,7 @@ concurrency: jobs: check_torch_dependencies: - runs-on: ubuntu-latest + runs-on: ubuntu-22.04 steps: - uses: actions/checkout@v3 - name: Set up Python diff --git a/.github/workflows/push_tests.yml b/.github/workflows/push_tests.yml index 36f011407901..6896e0145cbb 100644 --- a/.github/workflows/push_tests.yml +++ b/.github/workflows/push_tests.yml @@ -1,6 +1,7 @@ -name: Slow Tests on main +name: Fast GPU Tests on main on: + workflow_dispatch: push: branches: - main @@ -11,17 +12,19 @@ on: env: DIFFUSERS_IS_CI: yes - HF_HOME: /mnt/cache OMP_NUM_THREADS: 8 MKL_NUM_THREADS: 8 + HF_HUB_ENABLE_HF_TRANSFER: 1 PYTEST_TIMEOUT: 600 - RUN_SLOW: yes PIPELINE_USAGE_CUTOFF: 50000 jobs: setup_torch_cuda_pipeline_matrix: name: Setup Torch Pipelines CUDA Slow Tests Matrix - runs-on: ubuntu-latest + runs-on: + group: aws-general-8-plus + container: + image: diffusers/diffusers-pytorch-cpu outputs: pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }} steps: @@ -29,14 +32,13 @@ jobs: uses: actions/checkout@v3 with: fetch-depth: 2 - - name: Set up Python - uses: actions/setup-python@v4 - with: - python-version: "3.8" - name: Install dependencies run: | - pip install -e . 
- pip install huggingface_hub + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + - name: Environment + run: | + python utils/print_env.py - name: Fetch Pipeline Matrix id: fetch_pipeline_matrix run: | @@ -45,22 +47,24 @@ jobs: echo "pipeline_test_matrix=$matrix" >> $GITHUB_OUTPUT - name: Pipeline Tests Artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: test-pipelines.json path: reports torch_pipelines_cuda_tests: - name: Torch Pipelines CUDA Slow Tests + name: Torch Pipelines CUDA Tests needs: setup_torch_cuda_pipeline_matrix strategy: fail-fast: false + max-parallel: 8 matrix: module: ${{ fromJson(needs.setup_torch_cuda_pipeline_matrix.outputs.pipeline_test_matrix) }} - runs-on: [single-gpu, nvidia-gpu, t4, ci] + runs-on: + group: aws-g4dn-2xlarge container: image: diffusers/diffusers-pytorch-cuda - options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0 + options: --shm-size "16gb" --ipc host --gpus all steps: - name: Checkout diffusers uses: actions/checkout@v3 @@ -71,16 +75,15 @@ jobs: nvidia-smi - name: Install dependencies run: | - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" python -m uv pip install -e [quality,test] - python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git - name: Environment run: | python utils/print_env.py - - name: Slow PyTorch CUDA checkpoint tests on Ubuntu + - name: PyTorch CUDA checkpoint tests on Ubuntu env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms CUBLAS_WORKSPACE_CONFIG: :16:8 run: | @@ -93,26 +96,28 @@ jobs: run: | cat reports/tests_pipeline_${{ matrix.module }}_cuda_stats.txt cat reports/tests_pipeline_${{ matrix.module }}_cuda_failures_short.txt - - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: pipeline_${{ matrix.module }}_test_reports path: reports torch_cuda_tests: name: Torch CUDA Tests - runs-on: docker-gpu + runs-on: + group: aws-g4dn-2xlarge container: image: diffusers/diffusers-pytorch-cuda - options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0 + options: --shm-size "16gb" --ipc host --gpus all defaults: run: shell: bash strategy: + fail-fast: false + max-parallel: 2 matrix: - module: [models, schedulers, lora, others] + module: [models, schedulers, lora, others, single_file] steps: - name: Checkout diffusers uses: actions/checkout@v3 @@ -121,194 +126,48 @@ jobs: - name: Install dependencies run: | - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" python -m uv pip install -e [quality,test] - python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git + python -m uv pip install peft@git+https://github.com/huggingface/peft.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git - name: Environment run: | python utils/print_env.py - - name: Run slow PyTorch CUDA tests + - name: Run PyTorch CUDA tests env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} + HF_TOKEN: ${{ 
secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms CUBLAS_WORKSPACE_CONFIG: :16:8 run: | python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ -s -v -k "not Flax and not Onnx" \ - --make-reports=tests_torch_cuda \ + --make-reports=tests_torch_cuda_${{ matrix.module }} \ tests/${{ matrix.module }} - name: Failure short reports if: ${{ failure() }} run: | - cat reports/tests_torch_cuda_stats.txt - cat reports/tests_torch_cuda_failures_short.txt + cat reports/tests_torch_cuda_${{ matrix.module }}_stats.txt + cat reports/tests_torch_cuda_${{ matrix.module }}_failures_short.txt - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: - name: torch_cuda_test_reports - path: reports - - peft_cuda_tests: - name: PEFT CUDA Tests - runs-on: docker-gpu - container: - image: diffusers/diffusers-pytorch-cuda - options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0 - defaults: - run: - shell: bash - steps: - - name: Checkout diffusers - uses: actions/checkout@v3 - with: - fetch-depth: 2 - - - name: Install dependencies - run: | - - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m uv pip install -e [quality,test] - python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git - python -m uv pip install peft@git+https://github.com/huggingface/peft.git - - - name: Environment - run: | - python utils/print_env.py - - - name: Run slow PEFT CUDA tests - env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} - # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms - CUBLAS_WORKSPACE_CONFIG: :16:8 - run: | - python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ - -s -v -k "not Flax and not Onnx and not PEFTLoRALoading" \ - --make-reports=tests_peft_cuda \ - tests/lora/ - - - name: Failure short reports - if: ${{ failure() }} - run: | - cat reports/tests_peft_cuda_stats.txt - cat reports/tests_peft_cuda_failures_short.txt - - - name: Test suite reports artifacts - if: ${{ always() }} - uses: actions/upload-artifact@v2 - with: - name: torch_peft_test_reports - path: reports - - flax_tpu_tests: - name: Flax TPU Tests - runs-on: docker-tpu - container: - image: diffusers/diffusers-flax-tpu - options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --privileged - defaults: - run: - shell: bash - steps: - - name: Checkout diffusers - uses: actions/checkout@v3 - with: - fetch-depth: 2 - - - name: Install dependencies - run: | - - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m uv pip install -e [quality,test] - python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git - - - name: Environment - run: | - python utils/print_env.py - - - name: Run slow Flax TPU tests - env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} - run: | - python -m pytest -n 0 \ - -s -v -k "Flax" \ - --make-reports=tests_flax_tpu \ - tests/ - - - name: Failure short reports - if: ${{ failure() }} - run: | - cat reports/tests_flax_tpu_stats.txt - cat reports/tests_flax_tpu_failures_short.txt - - - name: Test suite reports artifacts - if: ${{ always() }} - uses: actions/upload-artifact@v2 - with: - name: flax_tpu_test_reports - path: reports - - onnx_cuda_tests: - name: ONNX CUDA Tests - runs-on: docker-gpu - container: - image: 
diffusers/diffusers-onnxruntime-cuda - options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0 - defaults: - run: - shell: bash - steps: - - name: Checkout diffusers - uses: actions/checkout@v3 - with: - fetch-depth: 2 - - - name: Install dependencies - run: | - - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m uv pip install -e [quality,test] - python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git - - - name: Environment - run: | - python utils/print_env.py - - - name: Run slow ONNXRuntime CUDA tests - env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} - run: | - python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ - -s -v -k "Onnx" \ - --make-reports=tests_onnx_cuda \ - tests/ - - - name: Failure short reports - if: ${{ failure() }} - run: | - cat reports/tests_onnx_cuda_stats.txt - cat reports/tests_onnx_cuda_failures_short.txt - - - name: Test suite reports artifacts - if: ${{ always() }} - uses: actions/upload-artifact@v2 - with: - name: onnx_cuda_test_reports + name: torch_cuda_test_reports_${{ matrix.module }} path: reports run_torch_compile_tests: name: PyTorch Compile CUDA tests - runs-on: docker-gpu + runs-on: + group: aws-g4dn-2xlarge container: - image: diffusers/diffusers-pytorch-compile-cuda - options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ + image: diffusers/diffusers-pytorch-cuda + options: --gpus all --shm-size "16gb" --ipc host steps: - name: Checkout diffusers @@ -328,7 +187,8 @@ jobs: python utils/print_env.py - name: Run example tests on GPU env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + RUN_COMPILE: yes run: | python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "compile" --make-reports=tests_torch_compile_cuda tests/ - name: Failure short reports @@ -337,7 +197,7 @@ jobs: - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: torch_compile_test_reports path: reports @@ -345,11 +205,12 @@ jobs: run_xformers_tests: name: PyTorch xformers CUDA tests - runs-on: docker-gpu + runs-on: + group: aws-g4dn-2xlarge container: image: diffusers/diffusers-pytorch-xformers-cuda - options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ + options: --gpus all --shm-size "16gb" --ipc host steps: - name: Checkout diffusers @@ -369,7 +230,7 @@ jobs: python utils/print_env.py - name: Run example tests on GPU env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} run: | python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "xformers" --make-reports=tests_torch_xformers_cuda tests/ - name: Failure short reports @@ -378,7 +239,7 @@ jobs: - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: torch_xformers_test_reports path: reports @@ -386,12 +247,12 @@ jobs: run_examples_tests: name: Examples PyTorch CUDA tests on Ubuntu - runs-on: docker-gpu + runs-on: + group: aws-g4dn-2xlarge container: image: diffusers/diffusers-pytorch-cuda - options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ - + options: --gpus all --shm-size "16gb" --ipc host steps: - name: Checkout diffusers uses: actions/checkout@v3 @@ -401,7 +262,6 @@ jobs: - name: NVIDIA-SMI run: | nvidia-smi - - 
name: Install dependencies run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" @@ -414,9 +274,10 @@ jobs: - name: Run example tests on GPU env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install timm python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=examples_torch_cuda examples/ - name: Failure short reports @@ -427,7 +288,7 @@ jobs: - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: examples_test_reports - path: reports \ No newline at end of file + path: reports diff --git a/.github/workflows/push_tests_fast.yml b/.github/workflows/push_tests_fast.yml index 7c50da7b5c34..e274cb021892 100644 --- a/.github/workflows/push_tests_fast.yml +++ b/.github/workflows/push_tests_fast.yml @@ -18,6 +18,7 @@ env: HF_HOME: /mnt/cache OMP_NUM_THREADS: 8 MKL_NUM_THREADS: 8 + HF_HUB_ENABLE_HF_TRANSFER: 1 PYTEST_TIMEOUT: 600 RUN_SLOW: no @@ -29,28 +30,19 @@ jobs: config: - name: Fast PyTorch CPU tests on Ubuntu framework: pytorch - runner: [ self-hosted, intel-cpu, 8-cpu, ci ] + runner: aws-general-8-plus image: diffusers/diffusers-pytorch-cpu report: torch_cpu - - name: Fast Flax CPU tests on Ubuntu - framework: flax - runner: [ self-hosted, intel-cpu, 8-cpu, ci ] - image: diffusers/diffusers-flax-cpu - report: flax_cpu - - name: Fast ONNXRuntime CPU tests on Ubuntu - framework: onnxruntime - runner: [ self-hosted, intel-cpu, 8-cpu, ci ] - image: diffusers/diffusers-onnxruntime-cpu - report: onnx_cpu - name: PyTorch Example CPU tests on Ubuntu framework: pytorch_examples - runner: [ self-hosted, intel-cpu, 8-cpu, ci ] + runner: aws-general-8-plus image: diffusers/diffusers-pytorch-cpu report: torch_example_cpu name: ${{ matrix.config.name }} - runs-on: ${{ matrix.config.runner }} + runs-on: + group: ${{ matrix.config.runner }} container: image: ${{ matrix.config.image }} @@ -85,29 +77,11 @@ jobs: --make-reports=tests_${{ matrix.config.report }} \ tests/ - - name: Run fast Flax TPU tests - if: ${{ matrix.config.framework == 'flax' }} - run: | - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \ - -s -v -k "Flax" \ - --make-reports=tests_${{ matrix.config.report }} \ - tests/ - - - name: Run fast ONNXRuntime CPU tests - if: ${{ matrix.config.framework == 'onnxruntime' }} - run: | - python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \ - -s -v -k "Onnx" \ - --make-reports=tests_${{ matrix.config.report }} \ - tests/ - - name: Run example PyTorch CPU tests if: ${{ matrix.config.framework == 'pytorch_examples' }} run: | python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" - python -m uv pip install peft + python -m uv pip install peft timm python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \ --make-reports=tests_${{ matrix.config.report }} \ examples @@ -118,7 +92,7 @@ jobs: - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: pr_${{ matrix.config.report }}_test_reports path: reports diff --git a/.github/workflows/push_tests_mps.yml b/.github/workflows/push_tests_mps.yml index 3a14f856346b..eb6c0da22541 100644 --- a/.github/workflows/push_tests_mps.yml +++ 
b/.github/workflows/push_tests_mps.yml @@ -1,18 +1,14 @@ name: Fast mps tests on main on: - push: - branches: - - main - paths: - - "src/diffusers/**.py" - - "tests/**.py" + workflow_dispatch: env: DIFFUSERS_IS_CI: yes HF_HOME: /mnt/cache OMP_NUM_THREADS: 8 MKL_NUM_THREADS: 8 + HF_HUB_ENABLE_HF_TRANSFER: 1 PYTEST_TIMEOUT: 600 RUN_SLOW: no @@ -23,7 +19,7 @@ concurrency: jobs: run_fast_tests_apple_m1: name: Fast PyTorch MPS tests on MacOS - runs-on: [ self-hosted, apple-m1 ] + runs-on: macos-13-xlarge steps: - name: Checkout diffusers @@ -45,7 +41,7 @@ jobs: shell: arch -arch arm64 bash {0} run: | ${CONDA_RUN} python -m pip install --upgrade pip uv - ${CONDA_RUN} python -m uv pip install -e [quality,test] + ${CONDA_RUN} python -m uv pip install -e ".[quality,test]" ${CONDA_RUN} python -m uv pip install torch torchvision torchaudio ${CONDA_RUN} python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git ${CONDA_RUN} python -m uv pip install transformers --upgrade @@ -59,7 +55,7 @@ jobs: shell: arch -arch arm64 bash {0} env: HF_HOME: /System/Volumes/Data/mnt/cache - HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }} + HF_TOKEN: ${{ secrets.HF_TOKEN }} run: | ${CONDA_RUN} python -m pytest -n 0 -s -v --make-reports=tests_torch_mps tests/ @@ -69,7 +65,7 @@ jobs: - name: Test suite reports artifacts if: ${{ always() }} - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v4 with: name: pr_torch_mps_test_reports path: reports diff --git a/.github/workflows/pypi_publish.yaml b/.github/workflows/pypi_publish.yaml index 54e9afe6d9b7..dc36b6b024c5 100644 --- a/.github/workflows/pypi_publish.yaml +++ b/.github/workflows/pypi_publish.yaml @@ -10,7 +10,7 @@ on: jobs: find-and-checkout-latest-branch: - runs-on: ubuntu-latest + runs-on: ubuntu-22.04 outputs: latest_branch: ${{ steps.set_latest_branch.outputs.latest_branch }} steps: @@ -29,46 +29,46 @@ jobs: LATEST_BRANCH=$(python utils/fetch_latest_release_branch.py) echo "Latest branch: $LATEST_BRANCH" echo "latest_branch=$LATEST_BRANCH" >> $GITHUB_ENV - + - name: Set latest branch output id: set_latest_branch run: echo "::set-output name=latest_branch::${{ env.latest_branch }}" release: needs: find-and-checkout-latest-branch - runs-on: ubuntu-latest + runs-on: ubuntu-22.04 steps: - name: Checkout Repo uses: actions/checkout@v3 with: ref: ${{ needs.find-and-checkout-latest-branch.outputs.latest_branch }} - + - name: Setup Python uses: actions/setup-python@v4 with: python-version: "3.8" - + - name: Install dependencies run: | python -m pip install --upgrade pip pip install -U setuptools wheel twine pip install -U torch --index-url https://download.pytorch.org/whl/cpu pip install -U transformers - + - name: Build the dist files run: python setup.py bdist_wheel && python setup.py sdist - + - name: Publish to the test PyPI env: TWINE_USERNAME: ${{ secrets.TEST_PYPI_USERNAME }} TWINE_PASSWORD: ${{ secrets.TEST_PYPI_PASSWORD }} - run: twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/ + run: twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/ - name: Test installing diffusers and importing run: | pip install diffusers && pip uninstall diffusers -y - pip install -i https://testpypi.python.org/pypi diffusers + pip install -i https://test.pypi.org/simple/ diffusers python -c "from diffusers import __version__; print(__version__)" python -c "from diffusers import DiffusionPipeline; pipe = 
DiffusionPipeline.from_pretrained('fusing/unet-ldm-dummy-update'); pipe()" python -c "from diffusers import DiffusionPipeline; pipe = DiffusionPipeline.from_pretrained('hf-internal-testing/tiny-stable-diffusion-pipe', safety_checker=None); pipe('ah suh du')" diff --git a/.github/workflows/release_tests_fast.yml b/.github/workflows/release_tests_fast.yml new file mode 100644 index 000000000000..81a34f7a464d --- /dev/null +++ b/.github/workflows/release_tests_fast.yml @@ -0,0 +1,351 @@ +# Duplicate workflow to push_tests.yml that is meant to run on release/patch branches as a final check +# Creating a duplicate workflow here is simpler than adding complex path/branch parsing logic to push_tests.yml +# Needs to be updated if push_tests.yml updated +name: (Release) Fast GPU Tests on main + +on: + push: + branches: + - "v*.*.*-release" + - "v*.*.*-patch" + +env: + DIFFUSERS_IS_CI: yes + OMP_NUM_THREADS: 8 + MKL_NUM_THREADS: 8 + PYTEST_TIMEOUT: 600 + PIPELINE_USAGE_CUTOFF: 50000 + +jobs: + setup_torch_cuda_pipeline_matrix: + name: Setup Torch Pipelines CUDA Slow Tests Matrix + runs-on: + group: aws-general-8-plus + container: + image: diffusers/diffusers-pytorch-cpu + outputs: + pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }} + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + - name: Environment + run: | + python utils/print_env.py + - name: Fetch Pipeline Matrix + id: fetch_pipeline_matrix + run: | + matrix=$(python utils/fetch_torch_cuda_pipeline_test_matrix.py) + echo $matrix + echo "pipeline_test_matrix=$matrix" >> $GITHUB_OUTPUT + - name: Pipeline Tests Artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: test-pipelines.json + path: reports + + torch_pipelines_cuda_tests: + name: Torch Pipelines CUDA Tests + needs: setup_torch_cuda_pipeline_matrix + strategy: + fail-fast: false + max-parallel: 8 + matrix: + module: ${{ fromJson(needs.setup_torch_cuda_pipeline_matrix.outputs.pipeline_test_matrix) }} + runs-on: + group: aws-g4dn-2xlarge + container: + image: diffusers/diffusers-pytorch-cuda + options: --shm-size "16gb" --ipc host --gpus all + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + - name: NVIDIA-SMI + run: | + nvidia-smi + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git + - name: Environment + run: | + python utils/print_env.py + - name: Slow PyTorch CUDA checkpoint tests on Ubuntu + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms + CUBLAS_WORKSPACE_CONFIG: :16:8 + run: | + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ + -s -v -k "not Flax and not Onnx" \ + --make-reports=tests_pipeline_${{ matrix.module }}_cuda \ + tests/pipelines/${{ matrix.module }} + - name: Failure short reports + if: ${{ failure() }} + run: | + cat reports/tests_pipeline_${{ matrix.module }}_cuda_stats.txt + cat reports/tests_pipeline_${{ matrix.module }}_cuda_failures_short.txt + - name: Test suite reports artifacts + if: ${{ always() }} + uses: 
actions/upload-artifact@v4 + with: + name: pipeline_${{ matrix.module }}_test_reports + path: reports + + torch_cuda_tests: + name: Torch CUDA Tests + runs-on: + group: aws-g4dn-2xlarge + container: + image: diffusers/diffusers-pytorch-cuda + options: --shm-size "16gb" --ipc host --gpus all + defaults: + run: + shell: bash + strategy: + fail-fast: false + max-parallel: 2 + matrix: + module: [models, schedulers, lora, others, single_file] + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + python -m uv pip install peft@git+https://github.com/huggingface/peft.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git + + - name: Environment + run: | + python utils/print_env.py + + - name: Run PyTorch CUDA tests + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms + CUBLAS_WORKSPACE_CONFIG: :16:8 + run: | + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ + -s -v -k "not Flax and not Onnx" \ + --make-reports=tests_torch_${{ matrix.module }}_cuda \ + tests/${{ matrix.module }} + + - name: Failure short reports + if: ${{ failure() }} + run: | + cat reports/tests_torch_${{ matrix.module }}_cuda_stats.txt + cat reports/tests_torch_${{ matrix.module }}_cuda_failures_short.txt + + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: torch_cuda_${{ matrix.module }}_test_reports + path: reports + + torch_minimum_version_cuda_tests: + name: Torch Minimum Version CUDA Tests + runs-on: + group: aws-g4dn-2xlarge + container: + image: diffusers/diffusers-pytorch-minimum-cuda + options: --shm-size "16gb" --ipc host --gpus all + defaults: + run: + shell: bash + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + python -m uv pip install peft@git+https://github.com/huggingface/peft.git + pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git + + - name: Environment + run: | + python utils/print_env.py + + - name: Run PyTorch CUDA tests + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms + CUBLAS_WORKSPACE_CONFIG: :16:8 + run: | + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \ + -s -v -k "not Flax and not Onnx" \ + --make-reports=tests_torch_minimum_cuda \ + tests/models/test_modeling_common.py \ + tests/pipelines/test_pipelines_common.py \ + tests/pipelines/test_pipeline_utils.py \ + tests/pipelines/test_pipelines.py \ + tests/pipelines/test_pipelines_auto.py \ + tests/schedulers/test_schedulers.py \ + tests/others + + - name: Failure short reports + if: ${{ failure() }} + run: | + cat reports/tests_torch_minimum_version_cuda_stats.txt + cat reports/tests_torch_minimum_version_cuda_failures_short.txt + + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: torch_minimum_version_cuda_test_reports + path: 
reports + + run_torch_compile_tests: + name: PyTorch Compile CUDA tests + + runs-on: + group: aws-g4dn-2xlarge + + container: + image: diffusers/diffusers-pytorch-cuda + options: --gpus all --shm-size "16gb" --ipc host + + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: NVIDIA-SMI + run: | + nvidia-smi + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test,training] + - name: Environment + run: | + python utils/print_env.py + - name: Run torch compile tests on GPU + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + RUN_COMPILE: yes + run: | + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "compile" --make-reports=tests_torch_compile_cuda tests/ + - name: Failure short reports + if: ${{ failure() }} + run: cat reports/tests_torch_compile_cuda_failures_short.txt + + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: torch_compile_test_reports + path: reports + + run_xformers_tests: + name: PyTorch xformers CUDA tests + + runs-on: + group: aws-g4dn-2xlarge + + container: + image: diffusers/diffusers-pytorch-xformers-cuda + options: --gpus all --shm-size "16gb" --ipc host + + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: NVIDIA-SMI + run: | + nvidia-smi + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test,training] + - name: Environment + run: | + python utils/print_env.py + - name: Run example tests on GPU + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + run: | + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "xformers" --make-reports=tests_torch_xformers_cuda tests/ + - name: Failure short reports + if: ${{ failure() }} + run: cat reports/tests_torch_xformers_cuda_failures_short.txt + + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: torch_xformers_test_reports + path: reports + + run_examples_tests: + name: Examples PyTorch CUDA tests on Ubuntu + + runs-on: + group: aws-g4dn-2xlarge + + container: + image: diffusers/diffusers-pytorch-cuda + options: --gpus all --shm-size "16gb" --ipc host + + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: NVIDIA-SMI + run: | + nvidia-smi + + - name: Install dependencies + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test,training] + + - name: Environment + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python utils/print_env.py + + - name: Run example tests on GPU + env: + HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }} + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install timm + python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=examples_torch_cuda examples/ + + - name: Failure short reports + if: ${{ failure() }} + run: | + cat reports/examples_torch_cuda_stats.txt + cat reports/examples_torch_cuda_failures_short.txt + + - name: Test suite reports artifacts + if: ${{ always() }} + uses: actions/upload-artifact@v4 + with: + name: examples_test_reports + path: reports diff --git a/.github/workflows/run_tests_from_a_pr.yml 
b/.github/workflows/run_tests_from_a_pr.yml new file mode 100644 index 000000000000..c8eee8dbbc33 --- /dev/null +++ b/.github/workflows/run_tests_from_a_pr.yml @@ -0,0 +1,74 @@ +name: Check running SLOW tests from a PR (only GPU) + +on: + workflow_dispatch: + inputs: + docker_image: + default: 'diffusers/diffusers-pytorch-cuda' + description: 'Name of the Docker image' + required: true + pr_number: + description: 'PR number to test on' + required: true + test: + description: 'Tests to run (e.g.: `tests/models`).' + required: true + +env: + DIFFUSERS_IS_CI: yes + IS_GITHUB_CI: "1" + HF_HOME: /mnt/cache + OMP_NUM_THREADS: 8 + MKL_NUM_THREADS: 8 + PYTEST_TIMEOUT: 600 + RUN_SLOW: yes + +jobs: + run_tests: + name: "Run a test on our runner from a PR" + runs-on: + group: aws-g4dn-2xlarge + container: + image: ${{ github.event.inputs.docker_image }} + options: --gpus all --privileged --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/ + + steps: + - name: Validate test files input + id: validate_test_files + env: + PY_TEST: ${{ github.event.inputs.test }} + run: | + if [[ ! "$PY_TEST" =~ ^tests/ ]]; then + echo "Error: The input string must start with 'tests/'." + exit 1 + fi + + if [[ ! "$PY_TEST" =~ ^tests/(models|pipelines|lora) ]]; then + echo "Error: The input string must contain either 'models', 'pipelines', or 'lora' after 'tests/'." + exit 1 + fi + + if [[ "$PY_TEST" == *";"* ]]; then + echo "Error: The input string must not contain ';'." + exit 1 + fi + echo "$PY_TEST" + + shell: bash -e {0} + + - name: Checkout PR branch + uses: actions/checkout@v4 + with: + ref: refs/pull/${{ inputs.pr_number }}/head + + - name: Install pytest + run: | + python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH" + python -m uv pip install -e [quality,test] + python -m uv pip install peft + + - name: Run tests + env: + PY_TEST: ${{ github.event.inputs.test }} + run: | + pytest "$PY_TEST" diff --git a/.github/workflows/ssh-pr-runner.yml b/.github/workflows/ssh-pr-runner.yml new file mode 100644 index 000000000000..49fa9c0ad24d --- /dev/null +++ b/.github/workflows/ssh-pr-runner.yml @@ -0,0 +1,40 @@ +name: SSH into PR runners + +on: + workflow_dispatch: + inputs: + docker_image: + description: 'Name of the Docker image' + required: true + +env: + IS_GITHUB_CI: "1" + HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }} + HF_HOME: /mnt/cache + DIFFUSERS_IS_CI: yes + OMP_NUM_THREADS: 8 + MKL_NUM_THREADS: 8 + RUN_SLOW: yes + +jobs: + ssh_runner: + name: "SSH" + runs-on: + group: aws-highmemory-32-plus + container: + image: ${{ github.event.inputs.docker_image }} + options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --privileged + + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: Tailscale # In order to be able to SSH when a test fails + uses: huggingface/tailscale-action@main + with: + authkey: ${{ secrets.TAILSCALE_SSH_AUTHKEY }} + slackChannel: ${{ secrets.SLACK_CIFEEDBACK_CHANNEL }} + slackToken: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }} + waitForSSH: true diff --git a/.github/workflows/ssh-runner.yml b/.github/workflows/ssh-runner.yml new file mode 100644 index 000000000000..917eb5b1b31a --- /dev/null +++ b/.github/workflows/ssh-runner.yml @@ -0,0 +1,52 @@ +name: SSH into GPU runners + +on: + workflow_dispatch: + inputs: + runner_type: + description: 'Type of runner to test (aws-g6-4xlarge-plus: a10, aws-g4dn-2xlarge: t4, aws-g6e-xlarge-plus: L40)' + type: choice + required: true + options: + - 
aws-g6-4xlarge-plus + - aws-g4dn-2xlarge + - aws-g6e-xlarge-plus + docker_image: + description: 'Name of the Docker image' + required: true + +env: + IS_GITHUB_CI: "1" + HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }} + HF_HOME: /mnt/cache + DIFFUSERS_IS_CI: yes + OMP_NUM_THREADS: 8 + MKL_NUM_THREADS: 8 + RUN_SLOW: yes + +jobs: + ssh_runner: + name: "SSH" + runs-on: + group: "${{ github.event.inputs.runner_type }}" + container: + image: ${{ github.event.inputs.docker_image }} + options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus all --privileged + + steps: + - name: Checkout diffusers + uses: actions/checkout@v3 + with: + fetch-depth: 2 + + - name: NVIDIA-SMI + run: | + nvidia-smi + + - name: Tailscale # In order to be able to SSH when a test fails + uses: huggingface/tailscale-action@main + with: + authkey: ${{ secrets.TAILSCALE_SSH_AUTHKEY }} + slackChannel: ${{ secrets.SLACK_CIFEEDBACK_CHANNEL }} + slackToken: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }} + waitForSSH: true diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml index ff609ee76946..27450ed4c7f2 100644 --- a/.github/workflows/stale.yml +++ b/.github/workflows/stale.yml @@ -8,7 +8,10 @@ jobs: close_stale_issues: name: Close Stale Issues if: github.repository == 'huggingface/diffusers' - runs-on: ubuntu-latest + runs-on: ubuntu-22.04 + permissions: + issues: write + pull-requests: write env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} steps: diff --git a/.github/workflows/trufflehog.yml b/.github/workflows/trufflehog.yml new file mode 100644 index 000000000000..4743dc352455 --- /dev/null +++ b/.github/workflows/trufflehog.yml @@ -0,0 +1,18 @@ +on: + push: + +name: Secret Leaks + +jobs: + trufflehog: + runs-on: ubuntu-22.04 + steps: + - name: Checkout code + uses: actions/checkout@v4 + with: + fetch-depth: 0 + - name: Secret Scanning + uses: trufflesecurity/trufflehog@main + with: + extra_args: --results=verified,unknown + diff --git a/.github/workflows/typos.yml b/.github/workflows/typos.yml index fbd051b4da0d..6d2f2fc8dd9a 100644 --- a/.github/workflows/typos.yml +++ b/.github/workflows/typos.yml @@ -5,7 +5,7 @@ on: jobs: build: - runs-on: ubuntu-latest + runs-on: ubuntu-22.04 steps: - uses: actions/checkout@v3 diff --git a/.github/workflows/update_metadata.yml b/.github/workflows/update_metadata.yml index 33d162ef8d1f..92aea0369ba8 100644 --- a/.github/workflows/update_metadata.yml +++ b/.github/workflows/update_metadata.yml @@ -25,6 +25,6 @@ jobs: - name: Update metadata env: - HUGGING_FACE_HUB_TOKEN: ${{ secrets.DIFFUSERS_BOT_TOKEN }} + HF_TOKEN: ${{ secrets.SAYAK_HF_TOKEN }} run: | python utils/update_metadata.py --commit_sha ${{ github.sha }} diff --git a/.gitignore b/.gitignore index 9d74fe840449..15617d5fdc74 100644 --- a/.gitignore +++ b/.gitignore @@ -175,4 +175,4 @@ tags .ruff_cache # wandb -wandb +wandb \ No newline at end of file diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 887e4dd43c45..ec18df882641 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,4 +1,4 @@ - + +# Outpainting + +Outpainting extends an image beyond its original boundaries, allowing you to add, replace, or modify visual elements in an image while preserving the original image. Like [inpainting](../using-diffusers/inpaint), you want to fill the white area (in this case, the area outside of the original image) with new visual elements while keeping the original image (represented by a mask of black pixels). 
There are a couple of ways to outpaint, such as with a [ControlNet](https://hf.co/blog/OzzyGT/outpainting-controlnet) or with [Differential Diffusion](https://hf.co/blog/OzzyGT/outpainting-differential-diffusion). + +This guide will show you how to outpaint with an inpainting model, ControlNet, and a ZoeDepth estimator. + +Before you begin, make sure you have the [controlnet_aux](https://github.com/huggingface/controlnet_aux) library installed so you can use the ZoeDepth estimator. + +```py +!pip install -q controlnet_aux +``` + +## Image preparation + +Start by picking an image to outpaint with and remove the background with a Space like [BRIA-RMBG-1.4](https://hf.co/spaces/briaai/BRIA-RMBG-1.4). + + + +For example, remove the background from this image of a pair of shoes. + +
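+
+If you'd rather script this step than use the Space UI, the checkpoint behind that Space can also be run locally. The snippet below is a minimal sketch, assuming the [RMBG-1.4](https://hf.co/briaai/RMBG-1.4) checkpoint can be loaded through a `transformers` image-segmentation pipeline with remote code enabled (as its model card describes); the file names are placeholders.
+
+```py
+from transformers import pipeline
+
+# assumption: RMBG-1.4 ships a custom image-segmentation pipeline,
+# so trust_remote_code=True is required to load it
+remover = pipeline("image-segmentation", model="briaai/RMBG-1.4", trust_remote_code=True)
+
+# "shoes.png" is a placeholder path for the image you want to outpaint;
+# the call returns a PIL image with the background masked out
+cutout = remover("shoes.png")
+cutout.save("shoes_no_background.png")
+```
+
+Either way, you should end up with the subject isolated on a transparent or white background, ready to be placed on a larger canvas for outpainting.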
+| Project Name | Description |
+|---|---|
+| dream-textures | Stable Diffusion built-in to Blender |
+| HiDiffusion | Increases the resolution and speed of your diffusion model by only adding a single line of code |
+| IC-Light | IC-Light is a project to manipulate the illumination of images |
+| InstantID | InstantID: Zero-shot Identity-Preserving Generation in Seconds |
+| IOPaint | Image inpainting tool powered by SOTA AI models. Remove any unwanted object, defect, or person from your pictures, or erase and replace (powered by Stable Diffusion) anything in your pictures. |
+| Kohya | Gradio GUI for Kohya's Stable Diffusion trainers |
+| MagicAnimate | MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model |
+| OOTDiffusion | Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on |
+| SD.Next | SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models |
+| stable-dreamfusion | Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion |
+| StoryDiffusion | StoryDiffusion can create a magic story by generating consistent images and videos. |
+| StreamDiffusion | A Pipeline-Level Solution for Real-Time Interactive Generation |
+| Stable Diffusion Server | A server configured for Inpainting/Generation/img2img with one Stable Diffusion model |
+| Model Search | Search models on Civitai and Hugging Face |
+| Skrample | Fully modular scheduler functions with first-class diffusers integration. |