
Conversation

@wking (Member) commented Feb 11, 2021

a9e075a (pkg/cvo/cvo: Guard Operator.Run goroutine handling from early cancels, 2021-01-28, #508) made us more robust to situations where we are canceled after acquiring the leader lock but before we get into Operator.Run's UntilWithContext. However, there was still a bug from cc1921d (pkg/start: Release leader lease on graceful shutdown, 2020-08-03, #424) where we could deadlock on shutdown if we had not yet acquired the leader lock [1]. postMainContext is used for metrics, informers, and the leader-election loop. We used to call postMainCancel only after reaping the main goroutine, and obviously that only works if we have launched the main goroutine. This commit adds a new launchedMain flag to track that. If launchedMain is true, we get the old handling. If launchedMain is still false when runContext is done, we now call postMainCancel without waiting to reap a nonexistent main goroutine.

There's also a new postMainCancel when the shutdown timer expires. I don't expect us to ever need that, but it protects us from future bugs like this one.
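Here is a minimal, self-contained sketch of that flow. It is not the actual pkg/start code: runContext, postMainCancel, and launchedMain follow the prose, while mainDone and shutdownTimer are hypothetical stand-ins, and the early cancel is simulated with a timer:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// Hypothetical stand-ins for the contexts named above; the real
	// wiring in pkg/start is more involved than this.
	runContext, runCancel := context.WithCancel(context.Background())
	_, postMainCancel := context.WithCancel(context.Background())

	launchedMain := false           // set true only once the leader lock is won
	mainDone := make(chan struct{}) // closed when the main goroutine exits

	// Simulate the bug's trigger: we are canceled before ever acquiring
	// the leader lock, so launchedMain stays false.
	go func() {
		time.Sleep(10 * time.Millisecond)
		runCancel()
	}()

	shutdownTimer := time.NewTimer(5 * time.Second)

	<-runContext.Done()
	if launchedMain {
		<-mainDone // old handling: reap the main goroutine first
		postMainCancel()
	} else {
		// New handling: there is no main goroutine to reap, so cancel
		// the post-main work (metrics, informers, leader election) now
		// instead of waiting forever on mainDone.
		postMainCancel()
	}

	// Backstop described above: if the shutdown timer has already
	// fired, cancel post-main work regardless of the bookkeeping.
	if !shutdownTimer.Stop() {
		postMainCancel()
	}
	fmt.Println("shutdown completed without deadlock")
}
```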

I've added launchedMain without guarding it behind a lock, and it is touched by both the main Options.run goroutine and the leader-election callback. So there is a window for the following race:

  1. Options.run goroutine: runContext canceled, so runContext.Done() matches
  2. Leader-election goroutine: Leader lock acquired
  3. Options.run goroutine: !launchedMain, so we call postMainCancel()
  4. Leader-election goroutine: launchedMain set true
  5. Leader-election goroutine: launches the main goroutine via CVO.Run(runContext, ...)

I'm trusting Operator.Run to respect runContext there and not do anything significant, so the fact that we are already tearing down all the post-main stuff won't cause problems. Previous fixes like a9e075a will help with that. But there could still be bugs in Operator.Run. A lock around launchedMain that avoided calling Operator.Run when runContext was already done would protect against that, but it seems like overkill in an already complicated goroutine tangle. Without the lock, we just have to field and fix any future Operator.Run runContext issues as we find them.
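For concreteness, here is a hypothetical sketch of that rejected lock-based alternative. Every name below is a stand-in for illustration, not the actual pkg/start code:

```go
package main

import (
	"context"
	"sync"
)

// Hypothetical alternative, not what this PR implements: guard
// launchedMain with a mutex and refuse to launch the main goroutine
// once runContext is already done.
var (
	mu           sync.Mutex
	launchedMain bool
)

// onStartedLeading stands in for the leader-election callback. Checking
// runContext under the lock closes the race listed above: steps 2, 4,
// and 5 can no longer interleave between steps 1 and 3.
func onStartedLeading(runContext context.Context, run func(context.Context)) {
	mu.Lock()
	defer mu.Unlock()
	if runContext.Err() != nil {
		return // already shutting down; never call Operator.Run
	}
	launchedMain = true
	go run(runContext)
}

// mainWasLaunched is what the shutdown path would consult instead of a
// bare, unsynchronized read of launchedMain.
func mainWasLaunched() bool {
	mu.Lock()
	defer mu.Unlock()
	return launchedMain
}

func main() {}
```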

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1927944
@openshift-ci-robot (Contributor)

@wking: This pull request references Bugzilla bug 1927944, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validations were run on this bug:
  • bug is open, matching expected state (open)
  • bug target release (4.8.0) matches configured target release for branch (4.8.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)

In response to this:

Bug 1927944: pkg/start: Fix shutdown deadlock when dying before getting a leader lock

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot added the bugzilla/severity-high and bugzilla/valid-bug labels Feb 11, 2021
@openshift-ci-robot added the approved label Feb 11, 2021
@wking force-pushed the post-main-when-main-never-launched branch from 2cf4082 to 7ae934a on February 11, 2021 at 22:50
@jottofar (Contributor)

I wouldn't consider adding a lock around launchedMain as adding much complexity, especially if we think it's at all likely that going without one may introduce a future bug. On the other hand, I know not having the lock could introduce a different logic flow, but shouldn't controllerCtx.CVO.Run support exiting "cleanly" whenever runContext is cancelled?

@jottofar (Contributor)

/lgtm

Discussed my above comment with Trevor on Slack, and we agreed that if controllerCtx.CVO.Run cannot handle runContext cancellation, it is most likely a bug unrelated to launchedMain locking and something we want to fix.
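For illustration, the cancellation behavior being asked about might look like the following sketch, built on wait.UntilWithContext from k8s.io/apimachinery (which the commit message says Operator.Run uses). The worker body here is a placeholder, not the actual CVO sync logic:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// run stands in for Operator.Run. wait.UntilWithContext re-invokes the
// worker every period until ctx is canceled, so canceling runContext is
// enough to make the loop return cleanly.
func run(ctx context.Context) {
	wait.UntilWithContext(ctx, func(ctx context.Context) {
		// Real sync work would go here; long iterations should also
		// check ctx.Err() so they can bail out early.
		fmt.Println("sync iteration")
	}, time.Second)
}

func main() {
	runContext, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	run(runContext) // returns promptly once runContext is canceled
	fmt.Println("run exited cleanly after cancellation")
}
```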

@openshift-ci-robot added the lgtm label Feb 15, 2021
@openshift-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jottofar, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot merged commit e89a171 into openshift:master Feb 16, 2021
@openshift-ci-robot (Contributor)

@wking: All pull requests linked via external trackers have merged:

Bugzilla bug 1927944 has been moved to the MODIFIED state.


In response to this:

Bug 1927944: pkg/start: Fix shutdown deadlock when dying before getting a leader lock

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


Labels

approved: Indicates a PR has been approved by an approver from all required OWNERS files.
bugzilla/severity-high: Referenced Bugzilla bug's severity is high for the branch this PR is targeting.
bugzilla/valid-bug: Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting.
lgtm: Indicates that a PR is ready to be merged.
