
Conversation


@hongkailiu hongkailiu commented Aug 13, 2025

The bug OCPBUGS-57585 is in the VERIFIED state and the fix is included in the recent nightly build.

Will rebase after #30014 gets in.

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 13, 2025
@openshift-ci openshift-ci bot requested review from jaypoulz and rexagod August 13, 2025 20:36
@hongkailiu
Member Author

Some testing results:

launch 4.20.0-0.nightly-2025-08-12-153542 aws,single-node

$ oc get clusterversion version
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.20.0-0.nightly-2025-08-12-153542   True        False         51m     Cluster version is 4.20.0-0.nightly-2025-08-12-153542

$ git --no-pager log --pretty=oneline -1
7df7a08f31f9f4eadff3186be34a2da81f4b9e47 (HEAD -> fix-OCPBUGS-57585, hongkailiu/fix-OCPBUGS-57585) Stop ignoring the targets from openshift-cluster-version

$ make WHAT=cmd/openshift-tests

$ MONITORING_AUTH_TEST_NAMESPACE=openshift-cluster-version KUBECONFIG=/Users/hongkliu/.kube/config AZURE_AUTH_LOCATION=/tmp/osServicePrincipal.json ./openshift-tests run-test "[sig-instrumentation][Late] Platform Prometheus targets should not be accessible without auth [Serial] [Suite:openshift/conformance/serial]"
  I0813 18:13:11.095219   97204 i18n.go:119] Couldn't find the LC_ALL, LC_MESSAGES or LANG environment variables, defaulting to en_US
  I0813 18:13:11.095382   97204 i18n.go:157] Setting language to default
openshift-tests v4.1.0-9662-g7df7a08
  I0813 18:13:12.657520   97204 test_setup.go:94] Extended test version v4.1.0-9662-g7df7a08
  I0813 18:13:12.657652   97204 test_context.go:558] Tolerating taints "node-role.kubernetes.io/control-plane" when considering if nodes are ready
  I0813 18:13:12.753833 97204 framework.go:2317] microshift-version configmap not found
  I0813 18:13:12.753895   97204 binary.go:111] Loaded test configuration: &framework.TestContextType{KubeConfig:"/Users/hongkliu/.kube/config", KubeContext:"", KubeAPIContentType:"application/vnd.kubernetes.protobuf", KubeletRootDir:"/var/lib/kubelet", KubeletConfigDropinDir:"", CertDir:"", Host:"https://api.ci-ln-lf956j2-76ef8.aws-2.ci.openshift.org:6443", BearerToken:"IBqhvOcI6VXhlh_Y", RepoRoot:"../../", ListImages:false, listTests:false, listLabels:false, ListConformanceTests:false, Provider:"aws", Tooling:"", timeouts:framework.TimeoutContext{Poll:2000000000, PodStart:300000000000, PodStartShort:120000000000, PodStartSlow:900000000000, PodDelete:300000000000, ClaimProvision:300000000000, DataSourceProvision:300000000000, ClaimProvisionShort:60000000000, ClaimBound:180000000000, PVReclaim:180000000000, PVBound:180000000000, PVCreate:180000000000, PVDelete:300000000000, PVDeleteSlow:1200000000000, SnapshotCreate:300000000000, SnapshotDelete:300000000000, SnapshotControllerMetrics:300000000000, SystemPodsStartup:600000000000, NodeSchedulable:1800000000000, SystemDaemonsetStartup:300000000000, NodeNotReady:180000000000}, CloudConfig:framework.CloudConfig{APIEndpoint:"", ProjectID:"", Zone:"us-west-1c", Zones:[]string{"us-west-1c"}, Region:"us-west-1", MultiZone:false, MultiMaster:false, Cluster:"", MasterName:"", NodeInstanceGroup:"", NumNodes:0, ClusterIPRange:"", ClusterTag:"", Network:"", ConfigFile:"", NodeTag:"", MasterTag:"", Provider:(*aws.Provider)(0x10dd1a848)}, KubectlPath:"kubectl", OutputDir:"/tmp", ReportDir:"", ReportPrefix:"", ReportCompleteGinkgo:false, ReportCompleteJUnit:false, Prefix:"e2e", MinStartupPods:-1, EtcdUpgradeStorage:"", EtcdUpgradeVersion:"", GCEUpgradeScript:"", ContainerRuntimeEndpoint:"unix:///run/containerd/containerd.sock", ContainerRuntimeProcessName:"containerd", ContainerRuntimePidFile:"/run/containerd/containerd.pid", SystemdServices:"containerd*", DumpSystemdJournal:false, ImageServiceEndpoint:"", MasterOSDistro:"custom", 
NodeOSDistro:"custom", NodeOSArch:"amd64", VerifyServiceAccount:true, DeleteNamespace:true, DeleteNamespaceOnFailure:true, AllowedNotReadyNodes:-1, CleanStart:false, GatherKubeSystemResourceUsageData:"false", GatherLogsSizes:false, GatherMetricsAfterTest:"false", GatherSuiteMetricsAfterTest:false, MaxNodesToGather:0, IncludeClusterAutoscalerMetrics:false, OutputPrintType:"json", CreateTestingNS:(framework.CreateTestingNSFn)(0x105b8c8d0), DumpLogsOnFailure:true, DisableLogDump:false, LogexporterGCSPath:"", NodeTestContextType:framework.NodeTestContextType{NodeE2E:false, NodeName:"", NodeConformance:false, PrepullImages:false, ImageDescription:"", RuntimeConfig:map[string]string(nil), SystemSpecName:"", RestartKubelet:false, ExtraEnvs:map[string]string(nil), StandaloneMode:false, CriProxyEnabled:false}, ClusterDNSDomain:"cluster.local", NodeKiller:framework.NodeKillerConfig{Enabled:false, FailureRatio:0.01, Interval:60000000000, JitterFactor:60, SimulatedDowntime:600000000000, NodeKillerStopCtx:context.Context(nil), NodeKillerStop:(func())(nil)}, IPFamily:"ipv4", NonblockingTaints:"node-role.kubernetes.io/control-plane", ProgressReportURL:"", SriovdpConfigMapFile:"", SpecSummaryOutput:"", DockerConfigFile:"", E2EDockerConfigFile:"", KubeTestRepoList:"", SnapshotControllerPodName:"", SnapshotControllerHTTPPort:0, RequireDevices:false, EnabledVolumeDrivers:[]string(nil)}
  Running Suite:  - /Users/hongkliu/repo/openshift/origin
  =======================================================
  Random Seed: 1755123191 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [sig-instrumentation][Late] Platform Prometheus targets should not be accessible without auth [Serial]
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:96
    STEP: Creating a kubernetes client @ 08/13/25 18:13:12.763
  I0813 18:13:12.764250   97204 discovery.go:214] Invalidating discovery information
  I0813 18:13:13.951821 97204 client.go:286] configPath is now "/var/folders/xp/g5y5gl3525g7c3fvfc75lkk40000gn/T/configfile1752850284"
  I0813 18:13:13.951865 97204 client.go:361] The user is now "e2e-test-prometheus-qcjfz-user"
  I0813 18:13:13.951878 97204 client.go:363] Creating project "e2e-test-prometheus-qcjfz"
  I0813 18:13:14.113297 97204 client.go:371] Waiting on permissions in project "e2e-test-prometheus-qcjfz" ...
  I0813 18:13:14.505107 97204 client.go:400] DeploymentConfig capability is enabled, adding 'deployer' SA to the list of default SAs
  I0813 18:13:14.603059 97204 client.go:415] Waiting for ServiceAccount "default" to be provisioned...
  I0813 18:13:14.898568 97204 client.go:415] Waiting for ServiceAccount "builder" to be provisioned...
  I0813 18:13:15.195252 97204 client.go:415] Waiting for ServiceAccount "deployer" to be provisioned...
  I0813 18:13:15.492422 97204 client.go:425] Waiting for RoleBinding "system:image-pullers" to be provisioned...
  I0813 18:13:15.684801 97204 client.go:425] Waiting for RoleBinding "system:image-builders" to be provisioned...
  I0813 18:13:15.877899 97204 client.go:425] Waiting for RoleBinding "system:deployers" to be provisioned...
  I0813 18:13:16.418622 97204 client.go:458] Project "e2e-test-prometheus-qcjfz" has been fully provisioned.
    STEP: checking that targets reject the requests with 401 or 403 @ 08/13/25 18:13:16.725
  I0813 18:13:16.728791 97204 resource.go:361] Creating new exec pod
  I0813 18:13:22.070833 97204 prometheus.go:122] Checking via pod exec status code from the scaple url https://10.0.124.3:9099/metrics for pod openshift-cluster-version/cluster-version-operator-6dc4dd9689-zzmt8 without authorization (skip=false)
  I0813 18:13:22.071031 97204 builder.go:121] Running '/Users/hongkliu/bin/kubectl --server=https://api.ci-ln-lf956j2-76ef8.aws-2.ci.openshift.org:6443 --kubeconfig=/Users/hongkliu/.kube/config --namespace=e2e-test-prometheus-qcjfz exec execpod-targets-authorization -- /bin/sh -x -c curl -k -s -o /dev/null -w '%{http_code}' "https://10.0.124.3:9099/metrics"'
  I0813 18:13:23.602583 97204 builder.go:146] stderr: "+ curl -k -s -o /dev/null -w '%{http_code}' https://10.0.124.3:9099/metrics\n"
  I0813 18:13:23.602749 97204 builder.go:147] stdout: "401"
  I0813 18:13:23.602793 97204 prometheus.go:125] The scaple url https://10.0.124.3:9099/metrics for pod openshift-cluster-version/cluster-version-operator-6dc4dd9689-zzmt8 without authorization returned 401, <nil> (skip=false)
  I0813 18:13:23.806981 97204 client.go:674] Deleted {user.openshift.io/v1, Resource=users  e2e-test-prometheus-qcjfz-user}, err: <nil>
  I0813 18:13:23.906779 97204 client.go:674] Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-prometheus-qcjfz}, err: <nil>
  I0813 18:13:24.006578 97204 client.go:674] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~JqbOm5T-gDzGlIicN4pA8CJhQG7omeNRmOgYsNTDYjs}, err: <nil>
    STEP: Destroying namespace "e2e-test-prometheus-qcjfz" for this suite. @ 08/13/25 18:13:24.007
  • [11.351 seconds]
  ------------------------------

  Ran 1 of 1 Specs in 11.352 seconds
  SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[
  {
    "name": "[sig-instrumentation][Late] Platform Prometheus targets should not be accessible without auth [Serial] [Suite:openshift/conformance/serial]",
    "lifecycle": "blocking",
    "duration": 11351,
    "startTime": "2025-08-13 22:13:12.754477 UTC",
    "endTime": "2025-08-13 22:13:24.106237 UTC",
    "result": "passed",
    "output": "  STEP: Creating a kubernetes client @ 08/13/25 18:13:12.763\nI0813 18:13:13.951821 97204 client.go:286] configPath is now \"/var/folders/xp/g5y5gl3525g7c3fvfc75lkk40000gn/T/configfile1752850284\"\nI0813 18:13:13.951865 97204 client.go:361] The user is now \"e2e-test-prometheus-qcjfz-user\"\nI0813 18:13:13.951878 97204 client.go:363] Creating project \"e2e-test-prometheus-qcjfz\"\nI0813 18:13:14.113297 97204 client.go:371] Waiting on permissions in project \"e2e-test-prometheus-qcjfz\" ...\nI0813 18:13:14.505107 97204 client.go:400] DeploymentConfig capability is enabled, adding 'deployer' SA to the list of default SAs\nI0813 18:13:14.603059 97204 client.go:415] Waiting for ServiceAccount \"default\" to be provisioned...\nI0813 18:13:14.898568 97204 client.go:415] Waiting for ServiceAccount \"builder\" to be provisioned...\nI0813 18:13:15.195252 97204 client.go:415] Waiting for ServiceAccount \"deployer\" to be provisioned...\nI0813 18:13:15.492422 97204 client.go:425] Waiting for RoleBinding \"system:image-pullers\" to be provisioned...\nI0813 18:13:15.684801 97204 client.go:425] Waiting for RoleBinding \"system:image-builders\" to be provisioned...\nI0813 18:13:15.877899 97204 client.go:425] Waiting for RoleBinding \"system:deployers\" to be provisioned...\nI0813 18:13:16.418622 97204 client.go:458] Project \"e2e-test-prometheus-qcjfz\" has been fully provisioned.\n  STEP: checking that targets reject the requests with 401 or 403 @ 08/13/25 18:13:16.725\nI0813 18:13:16.728791 97204 resource.go:361] Creating new exec pod\nI0813 18:13:22.070833 97204 prometheus.go:122] Checking via pod exec status code from the scaple url https://10.0.124.3:9099/metrics for pod openshift-cluster-version/cluster-version-operator-6dc4dd9689-zzmt8 without authorization (skip=false)\nI0813 18:13:22.071031 97204 builder.go:121] Running '/Users/hongkliu/bin/kubectl --server=https://api.ci-ln-lf956j2-76ef8.aws-2.ci.openshift.org:6443 
--kubeconfig=/Users/hongkliu/.kube/config --namespace=e2e-test-prometheus-qcjfz exec execpod-targets-authorization -- /bin/sh -x -c curl -k -s -o /dev/null -w '%{http_code}' \"https://10.0.124.3:9099/metrics\"'\nI0813 18:13:23.602583 97204 builder.go:146] stderr: \"+ curl -k -s -o /dev/null -w '%{http_code}' https://10.0.124.3:9099/metrics\\n\"\nI0813 18:13:23.602749 97204 builder.go:147] stdout: \"401\"\nI0813 18:13:23.602793 97204 prometheus.go:125] The scaple url https://10.0.124.3:9099/metrics for pod openshift-cluster-version/cluster-version-operator-6dc4dd9689-zzmt8 without authorization returned 401, \u003cnil\u003e (skip=false)\nI0813 18:13:23.806981 97204 client.go:674] Deleted {user.openshift.io/v1, Resource=users  e2e-test-prometheus-qcjfz-user}, err: \u003cnil\u003e\nI0813 18:13:23.906779 97204 client.go:674] Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-prometheus-qcjfz}, err: \u003cnil\u003e\nI0813 18:13:24.006578 97204 client.go:674] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~JqbOm5T-gDzGlIicN4pA8CJhQG7omeNRmOgYsNTDYjs}, err: \u003cnil\u003e\n  STEP: Destroying namespace \"e2e-test-prometheus-qcjfz\" for this suite. @ 08/13/25 18:13:24.007\n"
  }
]
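The pass condition in the run above boils down to treating an HTTP 401 or 403 response as a correct rejection of an unauthenticated scrape attempt. A minimal sketch of that check in Python (the function name is illustrative, not the test's actual code; the input is the stdout of `curl -k -s -o /dev/null -w '%{http_code}' <target>` as seen in the log):

```python
def rejects_unauthenticated(raw_status: str) -> bool:
    """Return True if a metrics endpoint correctly rejected an
    unauthenticated request, i.e. answered 401 (Unauthorized) or
    403 (Forbidden). raw_status is curl's %{http_code} output."""
    try:
        code = int(raw_status.strip())
    except ValueError:
        # Non-numeric output means curl did not get a clean status code.
        return False
    return code in (401, 403)

# The CVO target in the log above answered "401", so it passes;
# a 200 would mean metrics were served without auth and fail the test.
print(rejects_unauthenticated("401"))  # True
print(rejects_unauthenticated("200"))  # False
```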


openshift-trt bot commented Aug 14, 2025

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New tests seen in this PR at sha: 7df7a08

  • "[sig-instrumentation][Late] Platform Prometheus targets should not be accessible without auth [Serial] [Suite:openshift/conformance/serial]" [Total: 9, Pass: 9, Fail: 0, Flake: 0]


openshift-trt bot commented Aug 15, 2025

Job Failure Risk Analysis for sha: 255dd87

Job Name | Failure Risk
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node-upgrade | IncompleteTests

@hongkailiu
Member Author

/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 15, 2025
@hongkailiu hongkailiu changed the title Stop ignoring the targets from openshift-cluster-version NO-JIRA: Stop ignoring the targets from openshift-cluster-version Aug 15, 2025
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Aug 15, 2025
@openshift-ci-robot

@hongkailiu: This pull request explicitly references no jira issue.

In response to this:

The bug OCPBUGS-57585 is in the VERIFIED state and the fix is included in the recent nightly build.

Will rebase after #30014 gets in.

/hold

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@xueqzhan
Contributor

/approve

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 15, 2025
@petr-muller
Member

/lgtm

ugh origin is one of these repos

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 15, 2025

openshift-ci bot commented Aug 15, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hongkailiu, petr-muller, xueqzhan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@hongkailiu
Member Author

/hold

Need to do some checking.

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 15, 2025
@hongkailiu
Member Author

/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 18, 2025
@openshift-ci-robot

/retest-required

Remaining retests: 0 against base HEAD 1bac1f6 and 2 for PR HEAD 255dd87 in total

1 similar comment
@openshift-ci-robot

/retest-required

Remaining retests: 0 against base HEAD 1bac1f6 and 2 for PR HEAD 255dd87 in total

@openshift-ci-robot

/retest-required

Remaining retests: 0 against base HEAD c5c90f8 and 1 for PR HEAD 255dd87 in total

@openshift-ci-robot

/retest-required

Remaining retests: 0 against base HEAD c5c90f8 and 2 for PR HEAD 255dd87 in total


openshift-ci bot commented Aug 19, 2025

@hongkailiu: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/e2e-aws-disruptive | 255dd87 | link | false | /test e2e-aws-disruptive
ci/prow/e2e-hypershift-conformance | 255dd87 | link | false | /test e2e-hypershift-conformance
ci/prow/okd-scos-e2e-aws-ovn | 255dd87 | link | false | /test okd-scos-e2e-aws-ovn
ci/prow/e2e-gcp-ovn-techpreview-serial-2of2 | 255dd87 | link | false | /test e2e-gcp-ovn-techpreview-serial-2of2
ci/prow/e2e-aws-ovn-single-node-upgrade | 255dd87 | link | false | /test e2e-aws-ovn-single-node-upgrade
ci/prow/e2e-aws-ovn | 255dd87 | link | false | /test e2e-aws-ovn
ci/prow/e2e-aws-ovn-single-node | 255dd87 | link | false | /test e2e-aws-ovn-single-node
ci/prow/e2e-aws-proxy | 255dd87 | link | false | /test e2e-aws-proxy

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-ci-robot

/retest-required

Remaining retests: 0 against base HEAD 2e1dac3 and 1 for PR HEAD 255dd87 in total

@sdodson sdodson merged commit b442041 into openshift:main Aug 19, 2025
37 of 47 checks passed
@openshift-ci-robot

/retest-required

Remaining retests: 0 against base HEAD b442041 and 2 for PR HEAD 255dd87 in total

@openshift-bot
Contributor

[ART PR BUILD NOTIFIER]

Distgit: openshift-enterprise-tests
This PR has been included in build openshift-enterprise-tests-container-v4.20.0-202508192316.p0.gb442041.assembly.stream.el9.
All builds following this will include this PR.
