feat: Limit ExpireTrash job to 30 minutes
#51600
Conversation
Force-pushed from ea30adf to 310945d
come-nc left a comment
Just wondering if we should use DateTime instead of int 🤷
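For context, here is a minimal sketch of the two options the comment weighs: tracking the elapsed time with a plain int Unix timestamp versus a `\DateTime` object. The snippet is illustrative only and is not taken from the PR's diff.

```php
<?php
// Illustrative only: two ways of expressing a 30-minute budget.
const MAX_RUNTIME = 30 * 60; // seconds

// Option 1: int timestamp
$startedAt = time();
$intBudgetExceeded = (time() - $startedAt) > MAX_RUNTIME;

// Option 2: DateTime / DateInterval
$startedAtDt = new \DateTimeImmutable();
$dtBudgetExceeded = new \DateTimeImmutable() > $startedAtDt->add(new \DateInterval('PT30M'));

var_dump($intBudgetExceeded, $dtBudgetExceeded); // both false right after start
```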
Instead of adding a limit to …
Force-pushed from 310945d to 23a5330
Good idea and easy to implement :). Just pushed that version in the latest commit; I will remove the other change after review. Note that this probably needs to be backported.
Force-pushed from 23a5330 to 7c5449b
Looks good, you can also re-implement …
Force-pushed from 7c5449b to f5217f4
Force-pushed from f7ec0e9 to 700f297
Force-pushed from 700f297 to b7cd6fb
Force-pushed from b7cd6fb to e28515f
…lel run - Follow-up of #51600

The original PR introduced the possibility to continue an `ExpireTrash` job by saving the offset, to avoid starting over the whole user list when the job crashed or was killed. But on big instances, one process is not enough to go through all the users in a timely manner. Supporting parallel runs allows covering more ground faster, and this PR introduces that possibility. We now store the offset right away so that another parallel job can pick up the task from that point, and we arbitrarily cut the user list into chunks of 10 so as not to drastically overflow the 30-minute time limit.

Signed-off-by: Louis Chemineau <[email protected]>
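For orientation, here is a minimal, self-contained sketch of the approach the commit message describes: the offset is claimed (stored) before a chunk of 10 users is processed, so a second job running in parallel continues from the next chunk, and the loop stops once the 30-minute budget is used up. All class and method names below are hypothetical; the real job lives in `files_trashbin` and uses Nextcloud's background-job and config APIs.

```php
<?php
declare(strict_types=1);

// Hypothetical persistence for the shared offset (the real job would use
// app config or similar); names are assumptions, not the actual API.
interface OffsetStore {
    public function getOffset(): int;
    public function setOffset(int $offset): void;
}

class ExpireTrashSketch {
    private const MAX_RUNTIME = 30 * 60; // 30-minute budget, in seconds
    private const CHUNK_SIZE = 10;       // users claimed per iteration

    public function __construct(
        private OffsetStore $store,
    ) {
    }

    /**
     * @param list<string> $users ordered list of user ids
     * @param callable(string): void $expire expires one user's trash
     */
    public function run(array $users, callable $expire): void {
        $start = time();

        while (time() - $start < self::MAX_RUNTIME) {
            $offset = $this->store->getOffset();
            if ($offset >= count($users)) {
                $this->store->setOffset(0); // full pass done, start over next run
                return;
            }

            // Claim the next chunk *before* processing it, so a parallel job
            // immediately continues from a later offset instead of redoing
            // the same users. (A real implementation would need an atomic
            // claim to avoid races; this sketch glosses over that.)
            $this->store->setOffset($offset + self::CHUNK_SIZE);

            foreach (array_slice($users, $offset, self::CHUNK_SIZE) as $user) {
                $expire($user);
            }
        }
    }
}
```

Because each iteration only claims 10 users, a crashed or time-limited run loses at most one chunk of work, and additional parallel jobs simply pick up from the next stored offset.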
- `getSeenUsers` to iterate over users
- `callForSeenUsers` to use `getSeenUsers`
- `ExpireTrash` job to 30 minutes
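The titles above mention adding `getSeenUsers` to iterate over users and changing `callForSeenUsers` to use it. A plausible shape of that refactor, with assumed signatures (not the real `OCP\IUserManager` API), could look like this:

```php
<?php
declare(strict_types=1);

/**
 * Assumed shape only: a generator that yields "seen" users starting from an
 * offset, which is what makes the job resumable.
 *
 * @param list<string> $seenUserIds stand-in for the DB-backed list of seen users
 * @return \Generator<int, string>  yields user ids, starting at $offset
 */
function getSeenUsers(array $seenUserIds, int $offset = 0): \Generator {
    foreach (array_slice($seenUserIds, $offset) as $userId) {
        yield $userId;
    }
}

/**
 * Callback-style iteration re-implemented on top of the generator.
 * Returning false from the callback stops the iteration early, which is what
 * would let the trash job bail out when its time budget is exhausted.
 */
function callForSeenUsers(array $seenUserIds, callable $callback): void {
    foreach (getSeenUsers($seenUserIds) as $userId) {
        if ($callback($userId) === false) {
            break;
        }
    }
}
```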