
Conversation

@ksala ksala commented Jan 15, 2018

This adds support for setting a nodeSelector on the Postgresql kind.
We use this in a GKE cluster which has two node pools, default and preemptible, to have the Postgres pods scheduled only on the normal nodes.
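
For illustration, a minimal Go sketch of how such a per-cluster nodeSelector could be propagated onto the generated StatefulSet's pod template. PostgresSpec and podSpecWithSelector are trimmed-down, hypothetical stand-ins, not the operator's actual types:

    package sketch

    import v1 "k8s.io/api/core/v1"

    // PostgresSpec is a trimmed-down stand-in for the manifest spec; only the new field is shown.
    type PostgresSpec struct {
        NodeSelector map[string]string `json:"nodeSelector,omitempty"`
    }

    // podSpecWithSelector copies the per-cluster selector onto the pod spec so the scheduler
    // only places the Postgres pods on matching nodes, e.g. the default (non-preemptible)
    // GKE node pool: {"cloud.google.com/gke-nodepool": "default"}.
    func podSpecWithSelector(base v1.PodSpec, spec PostgresSpec) v1.PodSpec {
        if len(spec.NodeSelector) > 0 {
            base.NodeSelector = spec.NodeSelector
        }
        return base
    }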

@coveralls

Coverage Status

Coverage decreased (-0.002%) to 10.999% when pulling fdc1223 on ksala:master into 8c9033d on zalando-incubator:master.

Contributor

@alexeyklyukin alexeyklyukin left a comment


The code at cluster.go:compareStatefulSetWith needs to be extended to replace the StatefulSet when the nodeSelector changes and to report the differences in the logs. That change should also require a rolling upgrade, since existing pods might have been scheduled onto nodes that no longer match the new nodeSelector.
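
As a rough sketch of the requested change (compareResult, replace and rollingUpdate are illustrative names; the real compareStatefulSetWith has its own result type and logging):

    package sketch

    import (
        "fmt"
        "reflect"

        appsv1 "k8s.io/api/apps/v1"
    )

    // compareResult is a hypothetical stand-in for the operator's comparison result.
    type compareResult struct {
        replace       bool
        rollingUpdate bool
        reasons       []string
    }

    // nodeSelectorDiff flags a StatefulSet replacement plus a rolling upgrade whenever the
    // pod template's nodeSelector changes, and records the reason so it can be logged.
    func nodeSelectorDiff(current, desired *appsv1.StatefulSet) compareResult {
        var res compareResult
        cur := current.Spec.Template.Spec.NodeSelector
        des := desired.Spec.Template.Spec.NodeSelector
        if !reflect.DeepEqual(cur, des) {
            res.replace = true
            res.rollingUpdate = true
            res.reasons = append(res.reasons,
                fmt.Sprintf("new statefulset's nodeSelector does not match the current one: %v -> %v", cur, des))
        }
        return res
    }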

    ClusterName  string            `json:"-"`
    Databases    map[string]string `json:"databases,omitempty"`
    Tolerations  []v1.Toleration   `json:"tolerations,omitempty"`
    NodeSelector map[string]string `json:"NodeSelector,omitempty"`

Please use camelCase for the parameter name.

@cehoffman

I think having per-cluster node affinity is useful. Did you know this can already be done for all Postgres clusters via the operator config key node_readiness_label, which takes a value of the form <label>:<value>,<2nd label>:<2nd value>? For example, in our configuration we only run on nodes with Portworx:

  node_readiness_label: "px/enabled:true"
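
For reference, a hypothetical sketch of how a comma-separated <label>:<value> string like the one above could be parsed into a label map (labelsFromString is an illustrative name, not the operator's actual config code):

    package sketch

    import (
        "fmt"
        "strings"
    )

    // labelsFromString turns "px/enabled:true,node-role:worker" into {"px/enabled": "true", ...}.
    func labelsFromString(s string) (map[string]string, error) {
        labels := map[string]string{}
        for _, pair := range strings.Split(s, ",") {
            kv := strings.SplitN(pair, ":", 2)
            if len(kv) != 2 {
                return nil, fmt.Errorf("incorrect label:value format: %q", pair)
            }
            labels[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
        }
        return labels, nil
    }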

@Jan-M
Member

Jan-M commented Feb 5, 2018

Yes, this does sound like a reasonable use case. But maybe our naming of the variable is confusing then, or there should be a second one that is explicit for this.

I also see this as a possible per-cluster config option, as a way to add additional labels.

@alexeyklyukin
Contributor

@ksala I think what you are doing here can indeed be achieved with the node_readiness_label parameter, as described in the procedure for doing cluster upgrades. In addition to making sure your pods are scheduled on nodes with a certain label, this feature also migrates master pods for you when the node label changes to indicate that the node is no longer part of the 'ready' set.
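
In other words (an illustrative sketch only; nodeIsReady is a hypothetical name, not the operator's API), a node counts as 'ready' while all configured readiness labels match, and master pods would be moved away once it stops matching:

    package sketch

    import v1 "k8s.io/api/core/v1"

    // nodeIsReady reports whether a node still carries all configured readiness labels.
    // When a node stops matching, master pods would be migrated to nodes that still do.
    func nodeIsReady(node *v1.Node, readinessLabels map[string]string) bool {
        for k, v := range readinessLabels {
            if node.Labels[k] != v {
                return false
            }
        }
        return true
    }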

Do you see any use case outside of choosing the ready/not-ready nodes that can be covered by setting node labels per-cluster?

@sdudoladov
Member

sdudoladov commented Apr 3, 2018

what you are doing here can indeed be achieved with the node_readiness_label parameter

Given that, and also the fact that the nodeSelector will be deprecated in favor of node affinity (see "Affinity and anti-affinity"), I suggest we close this PR.
