
[stable33] Handle duplicate key on trash unique constraint during concurrent deletes#4498

Open
backportbot[bot] wants to merge 2 commits into stable33 from backport/4496/stable33

Conversation


@backportbot backportbot bot commented Mar 10, 2026

Backport of #4496

Warning: this backport's changes differ from the original and might be incomplete ⚠️

Todo

  • Review and resolve any conflicts
  • Amend HEAD commit to remove the line stating to skip CI

Learn more about backports at https://docs.nextcloud.com/server/stable/go.php?to=developer-backports.

@backportbot backportbot bot requested review from i2h3 and provokateurin March 10, 2026 10:03
@backportbot backportbot bot added bug 3. to review Items that need to be reviewed needs review Needs confirmation this is still happening or relevant (or possible duplicate TBD) labels Mar 10, 2026
@backportbot backportbot bot added this to the Nextcloud 33 milestone Mar 10, 2026
@i2h3 i2h3 marked this pull request as ready for review March 10, 2026 11:19
@i2h3 i2h3 enabled auto-merge March 10, 2026 11:20
@provokateurin
Member

@i2h3 You need to remove [skip ci] from the commit message of the last commit.

i2h3 and others added 2 commits March 11, 2026 09:27
…t deletes

The unique constraint on oc_group_folders_trash (folder_id, name,
deleted_time) uses second-granularity timestamps. When multiple files
with the same base name inside the same group folder are deleted within
the same second (e.g. by the desktop client issuing bulk DELETEs), the
INSERT in TrashManager::addTrashItem() fails with a unique constraint
violation. Because the file has already been physically moved to trash
storage before the INSERT, the failed DB record leaves the file
orphaned — resulting in data loss.

Insert the trash record before moving the file, using a for-loop that
retries with an incremented deleted_time on constraint collision. This
way the on-disk trash name is constructed from the confirmed timestamp
and no renames are needed. If the subsequent file move fails, the DB
record is cleaned up.

This follows the existing pattern in VersionsBackend.php for handling
the same class of constraint violation.

Fixes: #4014

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Iva Horn <iva.horn@nextcloud.com>
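The insert-before-move retry strategy described in the commit message above can be sketched as follows. This is an illustrative Python stand-in, not the actual PHP code in TrashManager::addTrashItem(); all names here (TrashDb, add_trash_item, move_file, MAX_RETRIES) are hypothetical, and the in-memory set stands in for the (folder_id, name, deleted_time) unique constraint on oc_group_folders_trash.

```python
class UniqueConstraintViolation(Exception):
    """Raised when a (folder_id, name, deleted_time) key already exists."""
    pass


class TrashDb:
    """In-memory stand-in for oc_group_folders_trash and its
    unique constraint on (folder_id, name, deleted_time)."""

    def __init__(self):
        self.rows = set()

    def insert(self, folder_id, name, deleted_time):
        key = (folder_id, name, deleted_time)
        if key in self.rows:
            raise UniqueConstraintViolation(key)
        self.rows.add(key)

    def delete(self, folder_id, name, deleted_time):
        self.rows.discard((folder_id, name, deleted_time))


MAX_RETRIES = 10  # illustrative bound, not taken from the source


def add_trash_item(db, folder_id, name, deleted_time, move_file):
    # Reserve the DB record BEFORE touching the file, retrying with an
    # incremented deleted_time when concurrent deletes of files with the
    # same base name land in the same second.
    for attempt in range(MAX_RETRIES):
        ts = deleted_time + attempt
        try:
            db.insert(folder_id, name, ts)
            break
        except UniqueConstraintViolation:
            continue  # second-granularity collision: bump the timestamp
    else:
        raise RuntimeError("could not reserve a trash slot")

    # The on-disk trash name is built from the confirmed timestamp,
    # so no rename is needed after the move.
    trash_name = f"{name}.d{ts}"
    try:
        move_file(trash_name)
    except Exception:
        # Move failed: clean up the DB record so no orphan remains,
        # then propagate the error.
        db.delete(folder_id, name, ts)
        raise
    return ts
```

Reserving the row first inverts the original ordering (move, then insert), which is what closed the window where a constraint violation left an already-moved file without a trash record.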
- Wrap moveFromStorage() in try/catch to clean up the DB record when the move throws an exception (not just when it returns false)
- Fix test: use assertCount on the full trash list instead of filtering by getName(), which returns the trash filename with the .d suffix
- Make test deterministic by waiting for a new-second boundary before both moveToTrash() calls so they always land in the same second

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Signed-off-by: Iva Horn <iva.horn@nextcloud.com>
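The "wait for a new-second boundary" trick from the last bullet above can be sketched like this (a hypothetical Python helper, not the actual PHPUnit test code): blocking until the integer second rolls over leaves nearly a full second for the two moveToTrash() calls to run, so both deletions reliably get the same second-granularity timestamp.

```python
import time


def wait_for_new_second():
    """Block until a fresh wall-clock second begins, then return it.

    Called just before the two trash operations, this maximizes the
    chance that both complete within the same second and therefore
    exercise the unique-constraint collision path deterministically.
    """
    start = int(time.time())
    while int(time.time()) == start:
        time.sleep(0.01)  # poll until the integer second changes
    return int(time.time())
```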
@i2h3 i2h3 force-pushed the backport/4496/stable33 branch from ff98973 to 6f0a5dc Compare March 11, 2026 08:27