Merged
4 changes: 3 additions & 1 deletion manifests/0000_50_olm_00-namespace.yaml
@@ -3,6 +3,8 @@ kind: Namespace
 metadata:
   name: openshift-operator-lifecycle-manager
   labels:
+    pod-security.kubernetes.io/enforce: restricted
+    pod-security.kubernetes.io/enforce-version: "v1.24"
     openshift.io/scc: "anyuid"
@camilamacedo86 (Contributor), Aug 18, 2022:
/hold

We labeled it to enforce baseline because the pods require more permissions than restricted (the default enforcement in OCP 4.12). Can we be sure that all pods can run as restricted now? If so, why do we need to enforce restricted? Is it to test and ensure that we will not break anything?

Contributor:

All pods running in the openshift-operator-lifecycle-manager namespace run under the restricted-v2 SCC, so PSA enforcement=restricted should be fine.

$ oc project
Using project "openshift-operator-lifecycle-manager" on server "https://api.bparees.devcluster.openshift.com:6443".

$ oc get pods -o yaml | grep scc
      openshift.io/scc: restricted-v2
      openshift.io/scc: restricted-v2
      openshift.io/scc: restricted-v2
      openshift.io/scc: restricted-v2
      openshift.io/scc: restricted-v2
      openshift.io/scc: restricted-v2
      openshift.io/scc: restricted-v2
      openshift.io/scc: restricted-v2
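For context (not part of this PR), a pod admits under the `restricted` PSA profile only when its security context sets roughly the fields below; the field names come from the upstream Pod Security Standards, while the pod name and image here are hypothetical:

```yaml
# Sketch of a pod spec that passes PSA "restricted" admission
# (illustrative only; not taken from this repository).
apiVersion: v1
kind: Pod
metadata:
  name: example                       # hypothetical name
  namespace: openshift-operator-lifecycle-manager
spec:
  securityContext:
    runAsNonRoot: true                # required: no root user
    seccompProfile:
      type: RuntimeDefault            # required: a seccomp profile
  containers:
  - name: app
    image: example.com/app:latest     # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false # required
      capabilities:
        drop: ["ALL"]                 # required: drop all capabilities
```

Pods admitted by the restricted-v2 SCC already satisfy these constraints, which is why the enforcement label is safe here.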

@camilamacedo86 (Contributor), Aug 18, 2022:

Cool, so we merge the changes 🚀

Contributor (PR author):
/hold cancel

     openshift.io/cluster-monitoring: "true"
   annotations:
@@ -16,7 +18,7 @@ kind: Namespace
 metadata:
   name: openshift-operators
   labels:
-    pod-security.kubernetes.io/enforce: baseline
+    pod-security.kubernetes.io/enforce: privileged
     pod-security.kubernetes.io/enforce-version: "v1.24"
Contributor:

Until we have the OLM logic in place to label "openshift-* namespaces that have operators installed" for label syncing, we should probably leave this explicit setting in place (or even set it to privileged), so that operators installed in this namespace don't get rejected by PSA.

cc @perdasilva

@camilamacedo86 (Contributor), Aug 18, 2022:

I see two options:
a) Label it as privileged (we do not know what will be required by ANY operator installed in this namespace).
b) OR only add the label-sync label security.openshift.io/scc.podSecurityLabelSync=true (which is what we will add once we have the controller).

Contributor (PR author):

That is a good point. I've modified the PR to go with option (a) and explicitly label it as privileged, so that when https://issues.redhat.com/browse/OLM-2695 is ready we have to make a conscious effort to remove any hard-coded labels in this namespace.
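Put together from the diff above, the openshift-operators document in the manifest ends up looking roughly like this (a sketch, not the literal file); the commented line shows the option (b) alternative that is planned once the label-sync controller exists:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators
  labels:
    # Hard-coded until OLM-2695 lands, so operator pods installed here
    # are never rejected by PSA admission.
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: "v1.24"
    # Planned replacement: opt into the label-sync controller instead.
    # security.openshift.io/scc.podSecurityLabelSync: "true"
    openshift.io/scc: "anyuid"
```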

     openshift.io/scc: "anyuid"
   annotations:
3 changes: 2 additions & 1 deletion scripts/generate_crds_manifests.sh
@@ -402,4 +402,5 @@ add_ibm_managed_cloud_annotations "${ROOT_DIR}/manifests"
 find "${ROOT_DIR}/manifests" -type f -exec $SED -i "/^#/d" {} \;
 find "${ROOT_DIR}/manifests" -type f -exec $SED -i "1{/---/d}" {} \;
 
-${YQ} delete --inplace -d'0' manifests/0000_50_olm_00-namespace.yaml 'metadata.labels."pod-security.kubernetes.io/enforce*"'
+# (anik120): uncomment this once https://issues.redhat.com/browse/OLM-2695 is Done.
+#${YQ} delete --inplace -d'1' manifests/0000_50_olm_00-namespace.yaml 'metadata.labels."pod-security.kubernetes.io/enforce*"'
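For clarity, the `-d'1'` flag in the commented-out yq (v2-style) command selects a document by index in the multi-document manifest: index 0 is the openshift-operator-lifecycle-manager namespace and index 1 is openshift-operators, so only the latter's enforce labels would be stripped. Schematically (document names taken from the diff above):

```yaml
# manifests/0000_50_olm_00-namespace.yaml (schematic layout)
# --- document index 0: not touched by the -d'1' command ---
kind: Namespace
metadata:
  name: openshift-operator-lifecycle-manager
---
# --- document index 1: target of the -d'1' delete ---
kind: Namespace
metadata:
  name: openshift-operators
```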
5 changes: 4 additions & 1 deletion values.yaml
@@ -1,10 +1,13 @@
 installType: ocp
 rbacApiVersion: rbac.authorization.k8s.io
 namespace: openshift-operator-lifecycle-manager
+namespace_psa:
+  enforceLevel: restricted
+  enforceVersion: '"v1.24"'
 catalog_namespace: openshift-marketplace
 operator_namespace: openshift-operators
 operator_namespace_psa:
-  enforceLevel: baseline
+  enforceLevel: privileged
   enforceVersion: '"v1.24"'
 imagestream: true
 writeStatusName: operator-lifecycle-manager
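Assuming these values feed a Helm-style namespace template (an assumption; the template itself is not shown in this diff), the mapping would look something like the fragment below. Note the nested quoting in `enforceVersion`: the outer single quotes are YAML syntax, so the value substituted into the template is the literal string `"v1.24"`, keeping the rendered label quoted rather than parsed as an unquoted token.

```yaml
# Hypothetical template fragment; key names mirror values.yaml above.
metadata:
  name: {{ .Values.namespace }}
  labels:
    pod-security.kubernetes.io/enforce: {{ .Values.namespace_psa.enforceLevel }}
    pod-security.kubernetes.io/enforce-version: {{ .Values.namespace_psa.enforceVersion }}
```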