Conversation

@simonjbeaumont
Contributor

In HA, during the recovery process for failed hosts, there were data races caused by conflicts between the liveness information of a returning host and its live status as determined by the xha daemon.
These commits fix the problems found when Xapi_hooks.host_post_declare_dead, which is called for every dead host, took several minutes to complete, leaving a large window during which the dead hosts could come back into the pool.

Jerome Maloberti added 4 commits October 17, 2013 14:07
When a slave host that rebooted returns to the pool, PBD.plug will fail
if the slave is not marked as alive. It is, in any case, an operation
that is performed on the master.

Signed-off-by: Jerome Maloberti <[email protected]>
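
Not xapi's actual code, but a minimal OCaml sketch of the idea in this commit: the liveness precondition on PBD.plug only hurts a returning slave, since the plug is carried out on the master anyway. All types and names here (plug_old, plug_new, Host_not_live) are hypothetical.

```ocaml
(* Hypothetical stand-ins for xapi's Host and PBD records. *)
type host = { name : string; mutable live : bool }
type pbd = { host : host; device_config : (string * string) list }

exception Host_not_live of string

(* Old behaviour (sketched): refuse to plug while the returning slave is
   still marked dead, which breaks the slave's re-join. *)
let plug_old (pbd : pbd) =
  if not pbd.host.live then raise (Host_not_live pbd.host.name);
  Printf.printf "plugging PBD on %s\n" pbd.host.name

(* New behaviour (sketched): no liveness precondition; the master performs
   the plug regardless of the slave's transient live flag. *)
let plug_new (pbd : pbd) =
  Printf.printf "plugging PBD on %s\n" pbd.host.name

let () =
  let returning_slave = { name = "slave1"; live = false } in
  let pbd = { host = returning_slave; device_config = [] } in
  (try plug_old pbd
   with Host_not_live h -> Printf.printf "old plug refused: %s not live\n" h);
  plug_new pbd
```
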
In HA, the host's live value can differ from the liveset determined by
xha, allowing a user to start a VM on a host that has just come back
into the pool but whose recovery process has not yet finished.
This commit fixes the problem by forbidding the live value from being
changed outside of the HA recovery process; the HA liveset becomes the
sole source of a host's liveness state in HA.

Signed-off-by: Jerome Maloberti <[email protected]>
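
A minimal OCaml sketch of the guard this commit describes, with hypothetical names (set_host_live, via_ha_recovery) that do not match xapi's real API: when HA is enabled, only the HA recovery path is allowed to drive the live flag.

```ocaml
type host = { uuid : string; mutable live : bool }

let ha_enabled = ref true

(* [via_ha_recovery] marks calls made by the HA recovery process itself. *)
let set_host_live ?(via_ha_recovery = false) (h : host) (value : bool) =
  if !ha_enabled && not via_ha_recovery then
    (* Ignore the request: only the xha liveset may drive this field. *)
    Printf.printf "refusing to set %s.live outside HA recovery\n" h.uuid
  else
    h.live <- value

let () =
  let h = { uuid = "host-a"; live = false } in
  set_host_live h true;                       (* refused: HA owns liveness *)
  set_host_live ~via_ha_recovery:true h true; (* allowed from HA recovery  *)
  Printf.printf "%s live=%b\n" h.uuid h.live
```
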
Previously, after a host failure in HA, the function
Xapi_ha_vm_failover.compute_restart_plan, which chooses which hosts
should restart the failed VMs, would pick hosts that are live and
enabled. In some cases a host may be live but not in the HA
live_set, for example if it returned to the pool before the HA
recovery process finished.
This situation is bad because new VMs would be started on the live
host, which may later be marked as dead once the HA recovery
process finishes.
This commit adds the live_set parameter to compute_restart_plan
and, by transitivity, to all functions that need it.
Some functions are called during HA recovery, where the live_set
is available; others are called at startup, in which case the
live_set is built from all live and enabled hosts.

Signed-off-by: Jerome Maloberti <[email protected]>
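
A minimal OCaml sketch of the candidate-selection change, using hypothetical types and function names rather than xapi's real compute_restart_plan signature: restart candidates are taken from the xha live_set instead of from any host whose database live flag happens to be true, with a synthesised live_set at startup.

```ocaml
module StringSet = Set.Make (String)

type host = { uuid : string; live : bool; enabled : bool }

(* Old selection (sketched): a host that returned before HA recovery
   finished looks live and enabled, yet may still be declared dead. *)
let candidates_old hosts =
  List.filter (fun h -> h.live && h.enabled) hosts

(* New selection (sketched): the caller passes the authoritative live_set. *)
let candidates_new ~live_set hosts =
  List.filter (fun h -> StringSet.mem h.uuid live_set && h.enabled) hosts

(* At startup there is no xha liveset available, so one is synthesised
   from the live && enabled hosts, preserving the old behaviour there. *)
let startup_live_set hosts =
  hosts
  |> List.filter (fun h -> h.live && h.enabled)
  |> List.map (fun h -> h.uuid)
  |> StringSet.of_list

let () =
  let hosts =
    [ { uuid = "h1"; live = true; enabled = true }   (* genuinely alive *)
    ; { uuid = "h2"; live = true; enabled = true } ] (* back too early  *)
  in
  let live_set = StringSet.singleton "h1" in
  ignore (startup_live_set hosts);
  Printf.printf "old: %d candidates, new: %d candidates\n"
    (List.length (candidates_old hosts))
    (List.length (candidates_new ~live_set hosts))
```
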
When some hosts are considered dead in HA, restart_auto_run_vms used to
process them in this way:
 - for each dead host
   - list all resident VMs
   - Host.set_live=false
   - call Xapi_hooks.host_post_declare_dead, which can take a very long time
   - set all resident VMs to `Halted (including the Control Domain)
This process conflicted with db_sync if a host had the bad taste
of coming back to life while restart_auto_run_vms was stuck in
host_post_declare_dead.
This commit reorders the actions to put the shortest first (see the
sketch after the commit message):
 - for each dead host
   - set all resident VMs, excluding the Control Domain, to `Halted
   - Host.set_live=false
 - for each dead host
   - call Xapi_hooks.host_post_declare_dead

Signed-off-by: Jerome Maloberti <[email protected]>

Conflicts:
	ocaml/xapi/xapi_ha_vm_failover.ml

Resolved-by: Si Beaumont <[email protected]>
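
A minimal OCaml sketch of the reordering described above, with hypothetical types and a stand-in for the slow hook: all the quick database-style updates run in a first pass over the dead hosts, and only then does the potentially minutes-long host_post_declare_dead hook run, shrinking the window in which a host coming back to life can race with this code.

```ocaml
type power_state = Running | Halted
type vm = { vm_name : string; is_control_domain : bool; mutable state : power_state }
type host = { host_name : string; mutable live : bool; resident_vms : vm list }

(* Stand-in for Xapi_hooks.host_post_declare_dead, which can take minutes. *)
let host_post_declare_dead h =
  Printf.printf "running post-declare-dead hook for %s\n" h.host_name

let process_dead_hosts dead_hosts =
  (* Pass 1: cheap, fast state updates for every dead host first. *)
  List.iter
    (fun h ->
      List.iter
        (fun vm -> if not vm.is_control_domain then vm.state <- Halted)
        h.resident_vms;
      h.live <- false)
    dead_hosts;
  (* Pass 2: only now run the slow hook for each dead host. *)
  List.iter host_post_declare_dead dead_hosts

let () =
  let vms = [ { vm_name = "dom0"; is_control_domain = true; state = Running }
            ; { vm_name = "guest1"; is_control_domain = false; state = Running } ] in
  let h = { host_name = "host-b"; live = true; resident_vms = vms } in
  process_dead_hosts [ h ]
```
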
@simonjbeaumont
Contributor Author

Originally committed in #1387.

@ghost assigned simonjbeaumont Oct 22, 2013
simonjbeaumont added a commit that referenced this pull request Oct 22, 2013
[HFX-954] Fix data races in HA when slaves return in the pool.
@simonjbeaumont merged commit 4ba168e into xapi-project:sanibel-lcm Oct 22, 2013