[INFO] [19:08:41-0700] Running tests against existing cluster...
[INFO] [19:08:41-0700] Running parallel tests N=
I0709 19:08:41.741853 10764 test.go:86] Extended test version v3.10.0-alpha.0+e63afaa-1228-dirty

Running Suite: Extended
=======================
Random Seed: 1531188522 - Will randomize all specs
Will run 447 specs

Running in parallel across 5 nodes

I0709 19:08:43.830656 11717 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
Jul 9 19:08:43.830: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:08:43.832: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Jul 9 19:08:44.246: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 9 19:08:44.634: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 9 19:08:44.634: INFO: expected 7 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Jul 9 19:08:44.692: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Jul 9 19:08:44.692: INFO: Dumping network health container logs from all nodes...
Jul 9 19:08:44.761: INFO: e2e test version: v1.10.0+b81c8f8
Jul 9 19:08:44.840: INFO: kube-apiserver version: v1.11.0+d4cacc0
I0709 19:08:44.840549 11717 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
SSS
------------------------------
I0709 19:08:44.845674 11716 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
I0709 19:08:44.850342 11714 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
S
------------------------------
I0709 19:08:44.859308 11713 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
I0709 19:08:44.859324 11748 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
SSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] HostPath
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:08:44.852: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:08:47.067: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jul 9 19:08:47.860: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jul 9 19:08:48.099: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-hostpath-mrzt2
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test hostPath mode
Jul 9 19:08:48.437: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-mrzt2" to be "success or failure"
Jul 9 19:08:48.465: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 27.784025ms
Jul 9 19:08:50.532: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.095115151s
STEP: Saw pod success
Jul 9 19:08:50.532: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul 9 19:08:50.602: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jul 9 19:08:50.751: INFO: Waiting for pod pod-host-path-test to disappear
Jul 9 19:08:50.819: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:08:50.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-mrzt2" for this suite.
Jul 9 19:08:57.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:01.300: INFO: namespace: e2e-tests-hostpath-mrzt2, resource: bindings, ignored listing per whitelist
Jul 9 19:09:02.072: INFO: namespace e2e-tests-hostpath-mrzt2 deletion completed in 11.171547155s

• [SLOW TEST:17.221 seconds]
[sig-storage] HostPath
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:08:44.861: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:08:48.672: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jul 9 19:08:49.441: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jul 9 19:08:49.693: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-hz5j2
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:08:51.143: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276" in namespace "e2e-tests-projected-hz5j2" to be "success or failure"
Jul 9 19:08:51.191: INFO: Pod "downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 48.433563ms
Jul 9 19:08:53.252: INFO: Pod "downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109326818s
Jul 9 19:08:55.331: INFO: Pod "downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.188430871s
STEP: Saw pod success
Jul 9 19:08:55.331: INFO: Pod "downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:08:55.414: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:08:55.588: INFO: Waiting for pod downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276 to disappear
Jul 9 19:08:55.656: INFO: Pod downwardapi-volume-2fa0cddb-83e6-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:08:55.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hz5j2" for this suite.
Jul 9 19:09:01.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:04.923: INFO: namespace: e2e-tests-projected-hz5j2, resource: bindings, ignored listing per whitelist
Jul 9 19:09:07.026: INFO: namespace e2e-tests-projected-hz5j2 deletion completed in 11.313794635s

• [SLOW TEST:22.165 seconds]
[sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:86
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:08:44.863: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:08:47.936: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jul 9 19:08:48.568: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jul 9 19:08:48.738: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-z59rs
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:86
Jul 9 19:08:49.815: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secret-namespace-txhdr
STEP: Creating secret with name secret-test-2f0ac750-83e6-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:08:51.010: INFO: Waiting up to 5m0s for pod "pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276" in namespace "e2e-tests-secrets-z59rs" to be "success or failure"
Jul 9 19:08:51.046: INFO: Pod "pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 36.426674ms
Jul 9 19:08:53.109: INFO: Pod "pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099545743s
Jul 9 19:08:55.156: INFO: Pod "pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146331244s
STEP: Saw pod success
Jul 9 19:08:55.156: INFO: Pod "pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:08:55.217: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276 container secret-volume-test:
STEP: delete the pod
Jul 9 19:08:55.370: INFO: Waiting for pod pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:08:55.447: INFO: Pod pod-secrets-3037ca1c-83e6-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:08:55.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-z59rs" for this suite.
Jul 9 19:09:01.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:04.411: INFO: namespace: e2e-tests-secrets-z59rs, resource: bindings, ignored listing per whitelist
Jul 9 19:09:06.079: INFO: namespace e2e-tests-secrets-z59rs deletion completed in 10.554564859s
STEP: Destroying namespace "e2e-tests-secret-namespace-txhdr" for this suite.
Jul 9 19:09:12.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:15.304: INFO: namespace: e2e-tests-secret-namespace-txhdr, resource: bindings, ignored listing per whitelist
Jul 9 19:09:15.666: INFO: namespace e2e-tests-secret-namespace-txhdr deletion completed in 9.58619834s

• [SLOW TEST:30.803 seconds]
[sig-storage] Secrets
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:86
------------------------------
S
------------------------------
[sig-storage] Projected should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:02.074: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:04.463: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-fpcz7
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-map-38a11c87-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:09:05.132: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-fpcz7" to be "success or failure"
Jul 9 19:09:05.162: INFO: Pod "pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.038412ms
Jul 9 19:09:07.190: INFO: Pod "pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058319067s
STEP: Saw pod success
Jul 9 19:09:07.190: INFO: Pod "pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:09:07.220: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276 container projected-secret-volume-test:
STEP: delete the pod
Jul 9 19:09:07.293: INFO: Waiting for pod pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:09:07.320: INFO: Pod pod-projected-secrets-38a63b92-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:07.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fpcz7" for this suite.
Jul 9 19:09:13.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:16.255: INFO: namespace: e2e-tests-projected-fpcz7, resource: bindings, ignored listing per whitelist
Jul 9 19:09:16.782: INFO: namespace e2e-tests-projected-fpcz7 deletion completed in 9.431912033s

• [SLOW TEST:14.708 seconds]
[sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-api-machinery] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] Downward API
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:15.667: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:17.263: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-94pzx
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward api env vars
Jul 9 19:09:18.065: INFO: Waiting up to 5m0s for pod "downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276" in namespace "e2e-tests-downward-api-94pzx" to be "success or failure"
Jul 9 19:09:18.094: INFO: Pod "downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.033879ms
Jul 9 19:09:20.138: INFO: Pod "downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072997211s
Jul 9 19:09:22.168: INFO: Pod "downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103381227s
STEP: Saw pod success
Jul 9 19:09:22.168: INFO: Pod "downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:09:22.206: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276 container dapi-container:
STEP: delete the pod
Jul 9 19:09:22.277: INFO: Waiting for pod downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:09:22.308: INFO: Pod downward-api-405b7c12-83e6-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] Downward API
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:22.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-94pzx" for this suite.
Jul 9 19:09:28.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:30.064: INFO: namespace: e2e-tests-downward-api-94pzx, resource: bindings, ignored listing per whitelist
Jul 9 19:09:31.863: INFO: namespace e2e-tests-downward-api-94pzx deletion completed in 9.523567764s

• [SLOW TEST:16.195 seconds]
[sig-api-machinery] Downward API
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:37
  should provide default limits.cpu/memory from node allocatable [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:187
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:07.027: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:08.916: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-wcrw4
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:187
Jul 9 19:09:09.671: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating configMap with name configmap-test-upd-3b60bc1b-83e6-11e8-992b-28d244b00276
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:13.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wcrw4" for this suite.
Jul 9 19:09:36.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:37.910: INFO: namespace: e2e-tests-configmap-wcrw4, resource: bindings, ignored listing per whitelist
Jul 9 19:09:40.347: INFO: namespace e2e-tests-configmap-wcrw4 deletion completed in 26.324645331s

• [SLOW TEST:33.320 seconds]
[sig-storage] ConfigMap
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:187
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified files with FSGroup ownership should support (root,0644,tmpfs) [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:57
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:31.864: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:33.500: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-q75bf
STEP: Waiting for a default service account to be provisioned in namespace
[It] files with FSGroup ownership should support (root,0644,tmpfs) [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:57
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 9 19:09:34.110: INFO: Waiting up to 5m0s for pod "pod-49ebef96-83e6-11e8-bd2e-28d244b00276" in namespace "e2e-tests-emptydir-q75bf" to be "success or failure"
Jul 9 19:09:34.141: INFO: Pod "pod-49ebef96-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.003318ms
Jul 9 19:09:36.172: INFO: Pod "pod-49ebef96-83e6-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062287969s
STEP: Saw pod success
Jul 9 19:09:36.172: INFO: Pod "pod-49ebef96-83e6-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:09:36.206: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-49ebef96-83e6-11e8-bd2e-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:09:36.273: INFO: Waiting for pod pod-49ebef96-83e6-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:09:36.303: INFO: Pod pod-49ebef96-83e6-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:36.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q75bf" for this suite.
Jul 9 19:09:42.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:44.883: INFO: namespace: e2e-tests-emptydir-q75bf, resource: bindings, ignored listing per whitelist
Jul 9 19:09:45.978: INFO: namespace e2e-tests-emptydir-q75bf deletion completed in 9.628539544s

• [SLOW TEST:14.114 seconds]
[sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  when FSGroup is specified
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
    files with FSGroup ownership should support (root,0644,tmpfs) [Suite:openshift/conformance/parallel] [Suite:k8s]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:57
------------------------------
[k8s.io] Pods should be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:16.783: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:18.406: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-vvlc8
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127
[It] should be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 9 19:09:21.782: INFO: Successfully updated pod "pod-update-40eff97e-83e6-11e8-8401-28d244b00276"
STEP: verifying the updated pod is in kubernetes
Jul 9 19:09:21.840: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:21.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vvlc8" for this suite.
Jul 9 19:09:44.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:09:46.796: INFO: namespace: e2e-tests-pods-vvlc8, resource: bindings, ignored listing per whitelist
Jul 9 19:09:47.464: INFO: namespace e2e-tests-pods-vvlc8 deletion completed in 25.509105364s

• [SLOW TEST:30.681 seconds]
[k8s.io] Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[Feature:Builds][Conformance] oc new-app should fail with a --name longer than 58 characters [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:66
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance] oc new-app
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:40.350: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance] oc new-app
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:09:42.725: INFO: configPath is now "/tmp/e2e-test-new-app-dk5fm-user.kubeconfig"
Jul 9 19:09:42.725: INFO: The user is now "e2e-test-new-app-dk5fm-user"
Jul 9 19:09:42.725: INFO: Creating project "e2e-test-new-app-dk5fm"
Jul 9 19:09:42.839: INFO: Waiting on permissions in project "e2e-test-new-app-dk5fm" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:26
Jul 9 19:09:42.921: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:30
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:09:43.049: INFO: Running scan #0
Jul 9 19:09:43.049: INFO: Checking language ruby
Jul 9 19:09:43.100: INFO: Checking tag 2.0
Jul 9 19:09:43.100: INFO: Checking tag 2.2
Jul 9 19:09:43.100: INFO: Checking tag 2.3
Jul 9 19:09:43.100: INFO: Checking tag 2.4
Jul 9 19:09:43.100: INFO: Checking tag 2.5
Jul 9 19:09:43.100: INFO: Checking tag 
latest Jul 9 19:09:43.100: INFO: Checking language nodejs Jul 9 19:09:43.138: INFO: Checking tag 0.10 Jul 9 19:09:43.138: INFO: Checking tag 4 Jul 9 19:09:43.138: INFO: Checking tag 6 Jul 9 19:09:43.138: INFO: Checking tag 8 Jul 9 19:09:43.138: INFO: Checking tag latest Jul 9 19:09:43.138: INFO: Checking language perl Jul 9 19:09:43.171: INFO: Checking tag 5.16 Jul 9 19:09:43.171: INFO: Checking tag 5.20 Jul 9 19:09:43.171: INFO: Checking tag 5.24 Jul 9 19:09:43.171: INFO: Checking tag latest Jul 9 19:09:43.171: INFO: Checking language php Jul 9 19:09:43.204: INFO: Checking tag 5.6 Jul 9 19:09:43.204: INFO: Checking tag 7.0 Jul 9 19:09:43.204: INFO: Checking tag 7.1 Jul 9 19:09:43.204: INFO: Checking tag latest Jul 9 19:09:43.204: INFO: Checking tag 5.5 Jul 9 19:09:43.204: INFO: Checking language python Jul 9 19:09:43.238: INFO: Checking tag latest Jul 9 19:09:43.238: INFO: Checking tag 2.7 Jul 9 19:09:43.238: INFO: Checking tag 3.3 Jul 9 19:09:43.238: INFO: Checking tag 3.4 Jul 9 19:09:43.238: INFO: Checking tag 3.5 Jul 9 19:09:43.238: INFO: Checking tag 3.6 Jul 9 19:09:43.238: INFO: Checking language wildfly Jul 9 19:09:43.272: INFO: Checking tag latest Jul 9 19:09:43.272: INFO: Checking tag 10.0 Jul 9 19:09:43.272: INFO: Checking tag 10.1 Jul 9 19:09:43.272: INFO: Checking tag 11.0 Jul 9 19:09:43.272: INFO: Checking tag 12.0 Jul 9 19:09:43.272: INFO: Checking tag 8.1 Jul 9 19:09:43.272: INFO: Checking tag 9.0 Jul 9 19:09:43.272: INFO: Checking language mysql Jul 9 19:09:43.303: INFO: Checking tag 5.5 Jul 9 19:09:43.303: INFO: Checking tag 5.6 Jul 9 19:09:43.303: INFO: Checking tag 5.7 Jul 9 19:09:43.303: INFO: Checking tag latest Jul 9 19:09:43.303: INFO: Checking language postgresql Jul 9 19:09:43.341: INFO: Checking tag 9.5 Jul 9 19:09:43.341: INFO: Checking tag 9.6 Jul 9 19:09:43.341: INFO: Checking tag latest Jul 9 19:09:43.341: INFO: Checking tag 9.2 Jul 9 19:09:43.341: INFO: Checking tag 9.4 Jul 9 19:09:43.341: INFO: Checking language mongodb Jul 9 
19:09:43.382: INFO: Checking tag 3.4 Jul 9 19:09:43.382: INFO: Checking tag latest Jul 9 19:09:43.382: INFO: Checking tag 2.4 Jul 9 19:09:43.382: INFO: Checking tag 2.6 Jul 9 19:09:43.382: INFO: Checking tag 3.2 Jul 9 19:09:43.382: INFO: Checking language jenkins Jul 9 19:09:43.417: INFO: Checking tag 1 Jul 9 19:09:43.417: INFO: Checking tag 2 Jul 9 19:09:43.417: INFO: Checking tag latest Jul 9 19:09:43.417: INFO: Success! [It] should fail with a --name longer than 58 characters [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:66 STEP: calling oc new-app Jul 9 19:09:43.417: INFO: Running 'oc new-app --config=/tmp/e2e-test-new-app-dk5fm-user.kubeconfig --namespace=e2e-test-new-app-dk5fm https://github.com/openshift/nodejs-ex --name a2345678901234567890123456789012345678901234567890123456789' Jul 9 19:09:46.048: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc new-app --config=/tmp/e2e-test-new-app-dk5fm-user.kubeconfig --namespace=e2e-test-new-app-dk5fm https://github.com/openshift/nodejs-ex --name a2345678901234567890123456789012345678901234567890123456789] [] error: invalid name: a2345678901234567890123456789012345678901234567890123456789. Must be an a lower case alphanumeric (a-z, and 0-9) string with a maximum length of 58 characters, where the first character is a letter (a-z), and the '-' character is allowed anywhere except the first or last character. error: invalid name: a2345678901234567890123456789012345678901234567890123456789. Must be an a lower case alphanumeric (a-z, and 0-9) string with a maximum length of 58 characters, where the first character is a letter (a-z), and the '-' character is allowed anywhere except the first or last character. 
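Editorial note: the rejection above comes from `oc new-app` validating the requested name. The rule quoted in the error message (lowercase alphanumerics, first character a letter, `-` allowed anywhere except the first or last character, at most 58 characters) can be sketched as a regex check. This is an illustrative re-implementation of the stated rule, not oc's actual validation code; the helper name is mine:

```python
import re

# Illustrative regex for the rule quoted in the error message (not oc's code):
# first char a-z, interior chars a-z/0-9/'-', last char a-z/0-9.
_NAME_RE = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$")

def is_valid_app_name(name: str) -> bool:
    """Check the name rule stated by oc new-app: <= 58 chars and matches the pattern."""
    return len(name) <= 58 and bool(_NAME_RE.match(name))

# The 59-character name from the test run above:
bad = "a2345678901234567890123456789012345678901234567890123456789"
print(len(bad), is_valid_app_name(bad))   # → 59 False
print(is_valid_app_name("nodejs-ex"))     # → True
```

Trimming the name to 58 characters (or fewer) makes it pass the same check, which is exactly what the conformance test asserts the CLI enforces.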
[] 0xc42105f740 exit status 1 true [0xc4200dc888 0xc4200dc8f8 0xc4200dc8f8] [0xc4200dc888 0xc4200dc8f8] [0xc4200dc890 0xc4200dc8f0] [0x916090 0x916190] 0xc4214ecd80 }: error: invalid name: a2345678901234567890123456789012345678901234567890123456789. Must be an a lower case alphanumeric (a-z, and 0-9) string with a maximum length of 58 characters, where the first character is a letter (a-z), and the '-' character is allowed anywhere except the first or last character.
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:40
[AfterEach] [Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:09:46.239: INFO: namespace : e2e-test-new-app-dk5fm api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:52.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:11.979 seconds]
[Feature:Builds][Conformance] oc new-app
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:16
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:24
    should fail with a --name longer than 58 characters [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:66
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:54
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:47.465: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:48.936: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-llm69
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:54
STEP: Creating configMap with name configmap-test-volume-532239eb-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:09:49.596: INFO: Waiting up to 5m0s for pod "pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-configmap-llm69" to be "success or failure"
Jul 9 19:09:49.630: INFO: Pod "pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.792324ms
Jul 9 19:09:51.659: INFO: Pod "pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062907306s
STEP: Saw pod success
Jul 9 19:09:51.659: INFO: Pod "pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:09:51.698: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276 container configmap-volume-test:
STEP: delete the pod
Jul 9 19:09:51.762: INFO: Waiting for pod pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:09:51.799: INFO: Pod pod-configmaps-5326fb5a-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:51.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-llm69" for this suite.
Jul 9 19:09:57.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:00.793: INFO: namespace: e2e-tests-configmap-llm69, resource: bindings, ignored listing per whitelist
Jul 9 19:10:01.310: INFO: namespace e2e-tests-configmap-llm69 deletion completed in 9.47277623s
• [SLOW TEST:13.845 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:54
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy'
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:444
Jul 9 19:10:01.576: INFO: Could not check network plugin name: exit status 1. Assuming a non-OpenShift plugin
Jul 9 19:10:01.576: INFO: Not using one of the specified plugins
[AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy'
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
[AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy'
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:01.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.265 seconds]
[Area:Networking] multicast
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21
  when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy'
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:442
    should allow multicast traffic in namespaces where it is enabled [Suite:openshift/conformance/parallel] [BeforeEach]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:45
    Jul 9 19:10:01.576: Not using one of the specified plugins
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:166
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:08:44.845: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:08:47.331: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jul 9 19:08:48.100: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jul 9 19:08:48.285: INFO: Found ClusterRoles; assuming RBAC is enabled.
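Editorial note: the InitContainer conformance test running here asserts that, on a pod with `restartPolicy: Always`, a failing init container (the pod dump below shows `init1` running `/bin/false`, `init2` running `/bin/true`, and app container `run1` using the pause image) is retried with backoff while the remaining init containers and all app containers stay blocked. A toy simulation of that ordering rule, with names and structure that are mine rather than the e2e framework's:

```python
from typing import Callable, List

def run_pod_once(init_containers: List[Callable[[], bool]],
                 app_containers: List[str]) -> List[str]:
    """Simulate one pass of pod startup: init containers run in order;
    if any fails, later init containers and all app containers must not
    start (on a RestartAlways pod the kubelet would retry the failed
    init container with backoff). Returns the app containers started."""
    started: List[str] = []
    for run in init_containers:
        if not run():
            return started  # blocked: nothing after the failed init runs
    started.extend(app_containers)
    return started

# Mirror the logged pod: init1 = /bin/false, init2 = /bin/true, run1 = pause.
init1 = lambda: False
init2 = lambda: True
print(run_pod_once([init1, init2], ["run1"]))  # → []
```

With `init1` replaced by a succeeding command, `run1` would start, which is the inverse case other specs in this suite cover.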
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-init-container-vc5jv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:40
[It] should not start app containers if init containers fail on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:166
STEP: creating the pod
Jul 9 19:08:48.568: INFO: PodSpec: initContainers in spec.initContainers
Jul 9 19:09:43.506: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2ec7c3dc-83e6-11e8-8fe2-28d244b00276", GenerateName:"", Namespace:"e2e-tests-init-container-vc5jv", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-vc5jv/pods/pod-init-2ec7c3dc-83e6-11e8-8fe2-28d244b00276", UID:"2edb82bb-83e6-11e8-84c6-0af96768d57e", ResourceVersion:"69577", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666785328, loc:(*time.Location)(0x6b11480)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"536063916"}, Annotations:map[string]string{"openshift.io/scc":"anyuid"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2dvtq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil),
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc4211d0e00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"busybox", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2dvtq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(0xc4211d0f80), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"busybox", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2dvtq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(0xc4211d1000), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause-amd64:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:31457280, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"31457280", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:31457280, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"31457280", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2dvtq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc4211d0e80), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc4214a4a48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-10-0-130-54.us-west-2.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc4211d0ec0), ImagePullSecrets:[]v1.LocalObjectReference{v1.LocalObjectReference{Name:"default-dockercfg-7gp5s"}}, Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/memory-pressure", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785328, loc:(*time.Location)(0x6b11480)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785328, loc:(*time.Location)(0x6b11480)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785328, loc:(*time.Location)(0x6b11480)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.0.130.54", PodIP:"10.2.2.191", StartTime:(*v1.Time)(0xc4217b30e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc42045c700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc42045c770)}, Ready:false, RestartCount:3, Image:"busybox:latest", ImageID:"docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47", ContainerID:"docker://aad60e5897b9c2cfbf55c49ff779c85feaef0893f60edb22c46ab15ad8fae41a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc4217b3120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"busybox", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc4217b3100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause-amd64:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:09:43.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-vc5jv" for this suite.
Jul 9 19:10:05.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:08.083: INFO: namespace: e2e-tests-init-container-vc5jv, resource: bindings, ignored listing per whitelist
Jul 9 19:10:09.484: INFO: namespace e2e-tests-init-container-vc5jv deletion completed in 25.913101336s
• [SLOW TEST:84.639 seconds]
[k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should not start app containers if init containers fail on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:166
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:01.581: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:10:03.428: INFO: configPath is now "/tmp/e2e-test-router-stress-p27nt-user.kubeconfig"
Jul 9 19:10:03.428: INFO: The user is now "e2e-test-router-stress-p27nt-user"
Jul 9 19:10:03.428: INFO: Creating project "e2e-test-router-stress-p27nt"
Jul 9 19:10:03.577: INFO: Waiting on permissions in project "e2e-test-router-stress-p27nt" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:45
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:32
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:10:03.719: INFO: namespace : e2e-test-router-stress-p27nt api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:09.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [8.212 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:21
  The HAProxy router [BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:68
    should respond with 503 to unrecognized hosts [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:69
    no router installed on the cluster
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:48
------------------------------
S
------------------------------
[sig-api-machinery] ConfigMap should be consumable via the environment [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:09.795: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:10:11.209: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-m2zlr
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap e2e-tests-configmap-m2zlr/configmap-test-606abb83-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:10:11.880: INFO: Waiting up to 5m0s for pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-configmap-m2zlr" to be "success or failure"
Jul 9 19:10:11.909: INFO: Pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.02209ms
Jul 9 19:10:13.938: INFO: Pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058136545s
Jul 9 19:10:15.968: INFO: Pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087813715s
Jul 9 19:10:17.997: INFO: Pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117318537s
STEP: Saw pod success
Jul 9 19:10:17.997: INFO: Pod "pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:10:18.025: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276 container env-test:
STEP: delete the pod
Jul 9 19:10:18.098: INFO: Waiting for pod pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:10:18.127: INFO: Pod pod-configmaps-606f137b-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:18.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-m2zlr" for this suite.
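Editorial note: the repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed: ...` lines above are a poll-until-terminal-phase loop. A minimal sketch of that pattern with a pluggable `get_phase` so it runs without a cluster; the function and parameter names are illustrative, not the e2e framework's API:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reports a terminal phase
    ("Succeeded" or "Failed") or the timeout elapses; clock and sleep
    are injectable for testing."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulate a pod that is Pending twice, then Succeeded (as in the log above).
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases),
                              timeout=10, interval=0,
                              sleep=lambda s: None))  # → Succeeded
```

The real framework also distinguishes "success or failure" from success-only conditions; this sketch only shows the terminal-phase polling shape.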
Jul 9 19:10:24.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:10:26.021: INFO: namespace: e2e-tests-configmap-m2zlr, resource: bindings, ignored listing per whitelist Jul 9 19:10:27.719: INFO: namespace e2e-tests-configmap-m2zlr deletion completed in 9.559471834s • [SLOW TEST:17.924 seconds] [sig-api-machinery] ConfigMap /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:29 should be consumable via the environment [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-network] Networking /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:09:52.331: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:09:54.259: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pod-network-test-2hdld STEP: Waiting for a default service account 
to be provisioned in namespace [It] should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2hdld STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 9 19:09:55.154: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 9 19:10:11.775: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | timeout -t 2 nc -w 1 -u 10.2.2.207 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-2hdld PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 9 19:10:11.775: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig Jul 9 19:10:13.217: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:10:13.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-2hdld" for this suite. 
Jul 9 19:10:35.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:10:39.766: INFO: namespace: e2e-tests-pod-network-test-2hdld, resource: bindings, ignored listing per whitelist Jul 9 19:10:39.884: INFO: namespace e2e-tests-pod-network-test-2hdld deletion completed in 26.620108146s • [SLOW TEST:47.553 seconds] [sig-network] Networking /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ SSSS ------------------------------ [sig-storage] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:422 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:10:27.721: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:10:29.267: INFO: About to run a 
Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-h97lg STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858 [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:422 STEP: Creating configMap with name projected-configmap-test-volume-6b2667d6-83e6-11e8-8401-28d244b00276 STEP: Creating a pod to test consume configMaps Jul 9 19:10:29.896: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-h97lg" to be "success or failure" Jul 9 19:10:29.927: INFO: Pod "pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.894207ms Jul 9 19:10:31.958: INFO: Pod "pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.06175169s STEP: Saw pod success Jul 9 19:10:31.958: INFO: Pod "pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure" Jul 9 19:10:31.984: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276 container projected-configmap-volume-test: STEP: delete the pod Jul 9 19:10:32.059: INFO: Waiting for pod pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276 to disappear Jul 9 19:10:32.089: INFO: Pod pod-projected-configmaps-6b2b5fb4-83e6-11e8-8401-28d244b00276 no longer exists [AfterEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:10:32.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h97lg" for this suite. Jul 9 19:10:38.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:10:41.402: INFO: namespace: e2e-tests-projected-h97lg, resource: bindings, ignored listing per whitelist Jul 9 19:10:41.732: INFO: namespace e2e-tests-projected-h97lg deletion completed in 9.608230757s • [SLOW TEST:14.011 seconds] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:422 ------------------------------ [sig-storage] Projected updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:08:44.846: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:08:48.929: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jul 9 19:08:49.775: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jul 9 19:08:50.089: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-b9w9s
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:08:50.558: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating projection with configMap that has name projected-configmap-test-upd-2ffc6e90-83e6-11e8-881a-28d244b00276
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-2ffc6e90-83e6-11e8-881a-28d244b00276
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:21.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b9w9s" for this suite.
Jul 9 19:10:43.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:46.903: INFO: namespace: e2e-tests-projected-b9w9s, resource: bindings, ignored listing per whitelist
Jul 9 19:10:48.827: INFO: namespace e2e-tests-projected-b9w9s deletion completed in 27.040637991s
• [SLOW TEST:123.981 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] Projected should be consumable in multiple volumes in the same pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:39.890: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:10:42.001: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-dw7b2
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable in multiple volumes in the same pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name projected-configmap-test-volume-72d724e5-83e6-11e8-992b-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:10:42.805: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276" in namespace "e2e-tests-projected-dw7b2" to be "success or failure"
Jul 9 19:10:42.846: INFO: Pod "pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 40.524707ms
Jul 9 19:10:44.883: INFO: Pod "pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077334435s
STEP: Saw pod success
Jul 9 19:10:44.883: INFO: Pod "pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:10:44.933: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276 container projected-configmap-volume-test:
STEP: delete the pod
Jul 9 19:10:45.039: INFO: Waiting for pod pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276 to disappear
Jul 9 19:10:45.077: INFO: Pod pod-projected-configmaps-72dcf3bd-83e6-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:45.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dw7b2" for this suite.
Jul 9 19:10:51.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:55.279: INFO: namespace: e2e-tests-projected-dw7b2, resource: bindings, ignored listing per whitelist
Jul 9 19:10:55.603: INFO: namespace e2e-tests-projected-dw7b2 deletion completed in 10.483606757s
• [SLOW TEST:15.713 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable in multiple volumes in the same pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:422
------------------------------
[sig-storage] Downward API volume should set mode on item file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:41.733: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:10:43.292: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-t27sb
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should set mode on item file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:10:43.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-downward-api-t27sb" to be "success or failure"
Jul 9 19:10:43.955: INFO: Pod "downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.758628ms
Jul 9 19:10:45.984: INFO: Pod "downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060640477s
STEP: Saw pod success
Jul 9 19:10:45.984: INFO: Pod "downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:10:46.012: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:10:46.078: INFO: Waiting for pod downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:10:46.106: INFO: Pod downwardapi-volume-73875f2c-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:46.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t27sb" for this suite.
Jul 9 19:10:52.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:10:55.554: INFO: namespace: e2e-tests-downward-api-t27sb, resource: bindings, ignored listing per whitelist
Jul 9 19:10:55.783: INFO: namespace e2e-tests-downward-api-t27sb deletion completed in 9.633243258s
• [SLOW TEST:14.050 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should set mode on item file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:10:55.784: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:55.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:55.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Area:Networking] services
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10
when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
should allow connections to services in the default namespace from a pod in another namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:52
Jul 9 19:10:55.784: This plugin does not isolate namespaces by default.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
SS
------------------------------
[Feature:Builds][pruning] prune builds based on settings in the buildconfig [Conformance] buildconfigs should have a default history limit set when created via the group api [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:294
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:55.604: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:10:57.827: INFO: configPath is now "/tmp/e2e-test-build-pruning-hptxt-user.kubeconfig"
Jul 9 19:10:57.827: INFO: The user is now "e2e-test-build-pruning-hptxt-user"
Jul 9 19:10:57.827: INFO: Creating project "e2e-test-build-pruning-hptxt"
Jul 9 19:10:57.977: INFO: Waiting on permissions in project "e2e-test-build-pruning-hptxt" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:37
Jul 9 19:10:58.038: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:41
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:10:58.171: INFO: Running scan #0
Jul 9 19:10:58.171: INFO: Checking language ruby
Jul 9 19:10:58.202: INFO: Checking tag 2.0
Jul 9 19:10:58.202: INFO: Checking tag 2.2
Jul 9 19:10:58.202: INFO: Checking tag 2.3
Jul 9 19:10:58.202: INFO: Checking tag 2.4
Jul 9 19:10:58.202: INFO: Checking tag 2.5
Jul 9 19:10:58.202: INFO: Checking tag latest
Jul 9 19:10:58.202: INFO: Checking language nodejs
Jul 9 19:10:58.241: INFO: Checking tag 0.10
Jul 9 19:10:58.241: INFO: Checking tag 4
Jul 9 19:10:58.241: INFO: Checking tag 6
Jul 9 19:10:58.241: INFO: Checking tag 8
Jul 9 19:10:58.241: INFO: Checking tag latest
Jul 9 19:10:58.241: INFO: Checking language perl
Jul 9 19:10:58.280: INFO: Checking tag 5.16
Jul 9 19:10:58.280: INFO: Checking tag 5.20
Jul 9 19:10:58.280: INFO: Checking tag 5.24
Jul 9 19:10:58.280: INFO: Checking tag latest
Jul 9 19:10:58.280: INFO: Checking language php
Jul 9 19:10:58.318: INFO: Checking tag latest
Jul 9 19:10:58.318: INFO: Checking tag 5.5
Jul 9 19:10:58.318: INFO: Checking tag 5.6
Jul 9 19:10:58.318: INFO: Checking tag 7.0
Jul 9 19:10:58.318: INFO: Checking tag 7.1
Jul 9 19:10:58.318: INFO: Checking language python
Jul 9 19:10:58.375: INFO: Checking tag 2.7
Jul 9 19:10:58.375: INFO: Checking tag 3.3
Jul 9 19:10:58.375: INFO: Checking tag 3.4
Jul 9 19:10:58.375: INFO: Checking tag 3.5
Jul 9 19:10:58.375: INFO: Checking tag 3.6
Jul 9 19:10:58.375: INFO: Checking tag latest
Jul 9 19:10:58.375: INFO: Checking language wildfly
Jul 9 19:10:58.405: INFO: Checking tag 11.0
Jul 9 19:10:58.405: INFO: Checking tag 12.0
Jul 9 19:10:58.405: INFO: Checking tag 8.1
Jul 9 19:10:58.405: INFO: Checking tag 9.0
Jul 9 19:10:58.405: INFO: Checking tag latest
Jul 9 19:10:58.405: INFO: Checking tag 10.0
Jul 9 19:10:58.405: INFO: Checking tag 10.1
Jul 9 19:10:58.405: INFO: Checking language mysql
Jul 9 19:10:58.444: INFO: Checking tag 5.5
Jul 9 19:10:58.444: INFO: Checking tag 5.6
Jul 9 19:10:58.444: INFO: Checking tag 5.7
Jul 9 19:10:58.444: INFO: Checking tag latest
Jul 9 19:10:58.444: INFO: Checking language postgresql
Jul 9 19:10:58.476: INFO: Checking tag 9.5
Jul 9 19:10:58.476: INFO: Checking tag 9.6
Jul 9 19:10:58.476: INFO: Checking tag latest
Jul 9 19:10:58.476: INFO: Checking tag 9.2
Jul 9 19:10:58.476: INFO: Checking tag 9.4
Jul 9 19:10:58.476: INFO: Checking language mongodb
Jul 9 19:10:58.508: INFO: Checking tag 2.4
Jul 9 19:10:58.508: INFO: Checking tag 2.6
Jul 9 19:10:58.508: INFO: Checking tag 3.2
Jul 9 19:10:58.508: INFO: Checking tag 3.4
Jul 9 19:10:58.508: INFO: Checking tag latest
Jul 9 19:10:58.508: INFO: Checking language jenkins
Jul 9 19:10:58.547: INFO: Checking tag 1
Jul 9 19:10:58.547: INFO: Checking tag 2
Jul 9 19:10:58.547: INFO: Checking tag latest
Jul 9 19:10:58.547: INFO: Success!
STEP: creating test image stream
Jul 9 19:10:58.547: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-hptxt-user.kubeconfig --namespace=e2e-test-build-pruning-hptxt -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/build-pruning/imagestream.yaml'
imagestream.image.openshift.io "myphp" created
[It] [Conformance] buildconfigs should have a default history limit set when created via the group api [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:294
STEP: creating a build config with the group api
Jul 9 19:10:58.824: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-hptxt-user.kubeconfig --namespace=e2e-test-build-pruning-hptxt -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/build-pruning/default-group-build-config.yaml'
buildconfig.build.openshift.io "myphp" created
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:56
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:10:59.264: INFO: namespace : e2e-test-build-pruning-hptxt api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:05.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:9.748 seconds]
[Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:21
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:35
[Conformance] buildconfigs should have a default history limit set when created via the group api [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:294
------------------------------
[Feature:DeploymentConfig] deploymentconfigs should adhere to Three Laws of Controllers [Conformance] [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1137
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:09.487: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:10:11.378: INFO: configPath is now "/tmp/e2e-test-cli-deployment-kj6d8-user.kubeconfig"
Jul 9 19:10:11.378: INFO: The user is now "e2e-test-cli-deployment-kj6d8-user"
Jul 9 19:10:11.378: INFO: Creating project "e2e-test-cli-deployment-kj6d8"
Jul 9 19:10:11.495: INFO: Waiting on permissions in project "e2e-test-cli-deployment-kj6d8" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should adhere to Three Laws of Controllers [Conformance] [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1137
STEP: should create ControllerRef in RCs it creates
Jul 9 19:10:24.708: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete.
STEP: releasing RCs that no longer match its selector
STEP: adopting RCs that match its selector and have no ControllerRef
STEP: deleting owned RCs when deleted
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1132
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:11:00.476: INFO: namespace : e2e-test-cli-deployment-kj6d8 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:06.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:57.069 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1130
should adhere to Three Laws of Controllers [Conformance] [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1137
------------------------------
S
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:09:45.979: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:09:47.706: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-container-probe-s5hdx
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[AfterEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:10:48.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-s5hdx" for this suite.
Jul 9 19:11:10.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:13.589: INFO: namespace: e2e-tests-container-probe-s5hdx, resource: bindings, ignored listing per whitelist
Jul 9 19:11:13.871: INFO: namespace e2e-tests-container-probe-s5hdx deletion completed in 25.444058596s
• [SLOW TEST:87.892 seconds]
[k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
with readiness probe that fails should never be ready and never restart [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:06.559: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:08.279: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-snx9g
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide container's memory limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:11:08.898: INFO: Waiting up to 5m0s for pod "downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276" in namespace "e2e-tests-downward-api-snx9g" to be "success or failure"
Jul 9 19:11:08.932: INFO: Pod "downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.604579ms
Jul 9 19:11:10.971: INFO: Pod "downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073107651s
STEP: Saw pod success
Jul 9 19:11:10.971: INFO: Pod "downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:11.002: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:11:11.087: INFO: Waiting for pod downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:11:11.118: INFO: Pod downwardapi-volume-826a7dd9-83e6-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:11.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-snx9g" for this suite.
Jul 9 19:11:17.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:19.183: INFO: namespace: e2e-tests-downward-api-snx9g, resource: bindings, ignored listing per whitelist
Jul 9 19:11:21.143: INFO: namespace e2e-tests-downward-api-snx9g deletion completed in 9.977340991s
• [SLOW TEST:14.583 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide container's memory limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:05.353: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:07.324: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-var-expansion-7r8ws
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test substitution in container's command
Jul 9 19:11:08.163: INFO: Waiting up to 5m0s for pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276" in namespace "e2e-tests-var-expansion-7r8ws" to be "success or failure"
Jul 9 19:11:08.214: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 50.78496ms
Jul 9 19:11:10.317: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153374508s
Jul 9 19:11:12.360: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196448037s
Jul 9 19:11:14.398: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234583462s
Jul 9 19:11:16.495: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331559739s
Jul 9 19:11:18.535: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.371487095s
STEP: Saw pod success
Jul 9 19:11:18.535: INFO: Pod "var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:18.572: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276 container dapi-container:
STEP: delete the pod
Jul 9 19:11:18.662: INFO: Waiting for pod var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276 to disappear
Jul 9 19:11:18.698: INFO: Pod var-expansion-81f9e7f4-83e6-11e8-992b-28d244b00276 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:18.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-7r8ws" for this suite.
Jul 9 19:11:24.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:28.792: INFO: namespace: e2e-tests-var-expansion-7r8ws, resource: bindings, ignored listing per whitelist
Jul 9 19:11:29.280: INFO: namespace e2e-tests-var-expansion-7r8ws deletion completed in 10.540215906s
• [SLOW TEST:23.927 seconds]
[k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should allow substituting values in a container's command [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSS
------------------------------
[Conformance][Area:Networking][Feature:Router] The HAProxy router should override the route host with a custom value [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:109
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:10:55.787: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:10:57.402: INFO: configPath is now "/tmp/e2e-test-router-scoped-smdsm-user.kubeconfig"
Jul 9 19:10:57.402: INFO: The user is now "e2e-test-router-scoped-smdsm-user"
Jul 9 19:10:57.402: INFO: Creating project "e2e-test-router-scoped-smdsm"
Jul 9 19:10:57.610: INFO: Waiting on permissions in project "e2e-test-router-scoped-smdsm" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:48
Jul 9 19:10:57.705: INFO: Running 'oc new-app --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-smdsm -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router'
--> Deploying template "e2e-test-router-scoped-smdsm/" for "/tmp/fixture-testdata-dir180677416/test/extended/testdata/scoped-router.yaml" to project e2e-test-router-scoped-smdsm

     * With parameters:
        * IMAGE=openshift/origin-haproxy-router
        * SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"]

--> Creating resources ...
    pod "router-scoped" created
    pod "router-override" created
    pod "router-override-domains" created
    rolebinding "system-router" created
    route "route-1" created
    route "route-2" created
    route "route-override-domain-1" created
    route "route-override-domain-2" created
    service "endpoints" created
    pod "endpoint-1" created
--> Success
    Access your application via route 'first.example.com'
    Access your application via route 'second.example.com'
    Access your application via route 'y.a.null.ptr'
    Access your application via route 'main.void.str'
    Run 'oc status' to view your app.
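Aside: the [It] step that follows waits on the router by running a retry loop inside the exec pod — curl an endpoint, succeed on 200, keep waiting on 503, fail fast on anything else. A standalone sketch of that pattern, with the curl swapped for an arbitrary command so it runs without a cluster (`poll_status` and the stub are illustrative, not part of the suite):

```shell
#!/bin/sh
# Sketch of the test's readiness loop: retry a command that prints an HTTP
# status code. 200 = ready, 503 = not converged yet (keep waiting),
# anything else = hard failure. The real loop curls the router's /healthz.
poll_status() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    code=$("$@") || { echo "error $?" 1>&2; i=$((i+1)); sleep 1; continue; }
    echo "$code"
    [ "$code" -eq 200 ] && return 0   # endpoint is ready
    [ "$code" -ne 503 ] && return 1   # unexpected status: give up
    i=$((i+1))
    sleep 1
  done
  return 1                            # timed out while seeing only 503s
}

# Stub standing in for the curl: pretend the endpoint is already healthy.
poll_status 3 echo 200
```

Tolerating 503 but rejecting other codes distinguishes "HAProxy has not admitted the route yet" from a genuinely broken router, which is why the suite can wait up to 180 iterations without masking real failures.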
[It] should override the route host with a custom value [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:109
Jul 9 19:10:58.736: INFO: Creating new exec pod
STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir180677416/test/extended/testdata/scoped-router.yaml"
STEP: waiting for the healthz endpoint to respond
Jul 9 19:11:07.875: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-smdsm execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
  code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.2.2.224' "http://10.2.2.224:1936/healthz" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -eq 200 ]]; then
      exit 0
    fi
    if [[ $code -ne 503 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
  sleep 1
done
'
Jul 9 19:11:08.612: INFO: stderr: ""
STEP: waiting for the valid route to respond
Jul 9 19:11:08.613: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-smdsm execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
  code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: route-1-e2e-test-router-scoped-smdsm.myapps.mycompany.com' "http://10.2.2.224/Letter" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -eq 200 ]]; then
      exit 0
    fi
    if [[ $code -ne 503 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
  sleep 1
done
'
Jul 9 19:11:15.466: INFO: stderr: ""
STEP: checking that the stored domain name does not match a route
Jul 9 19:11:15.466: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-smdsm execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: first.example.com' "http://10.2.2.224/Letter"'
Jul 9 19:11:16.104: INFO: stderr: ""
STEP: checking that route-1-e2e-test-router-scoped-smdsm.myapps.mycompany.com matches a route
Jul 9 19:11:16.104: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-smdsm execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-1-e2e-test-router-scoped-smdsm.myapps.mycompany.com' "http://10.2.2.224/Letter"'
Jul 9 19:11:16.822: INFO: stderr: ""
STEP: checking that route-2-e2e-test-router-scoped-smdsm.myapps.mycompany.com matches a route
Jul 9 19:11:16.822: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-smdsm execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-2-e2e-test-router-scoped-smdsm.myapps.mycompany.com' "http://10.2.2.224/Letter"'
Jul 9 19:11:17.497: INFO: stderr: ""
STEP: checking that the router reported the correct ingress and override
Jul 9 19:11:17.550: INFO: Selected: &route.RouteIngress{Host:"route-1-e2e-test-router-scoped-smdsm.myapps.mycompany.com", RouterName:"test-override", Conditions:[]route.RouteIngressCondition{route.RouteIngressCondition{Type:"Admitted", Status:"True", Reason:"", Message:"", LastTransitionTime:(*v1.Time)(0xc420ac0020)}}, WildcardPolicy:"None", RouterCanonicalHostname:""}, All: []route.RouteIngress{route.RouteIngress{Host:"first.example.com", RouterName:"router", Conditions:[]route.RouteIngressCondition{route.RouteIngressCondition{Type:"Admitted", Status:"True", Reason:"", Message:"", LastTransitionTime:(*v1.Time)(0xc421923c00)}}, WildcardPolicy:"None", RouterCanonicalHostname:""}, route.RouteIngress{Host:"first.example.com", RouterName:"test-override-domains", Conditions:[]route.RouteIngressCondition{route.RouteIngressCondition{Type:"Admitted", Status:"True", Reason:"", Message:"", LastTransitionTime:(*v1.Time)(0xc421923d60)}}, WildcardPolicy:"None", RouterCanonicalHostname:""}, route.RouteIngress{Host:"first.example.com", RouterName:"test-scoped", Conditions:[]route.RouteIngressCondition{route.RouteIngressCondition{Type:"Admitted", Status:"True", Reason:"", Message:"", LastTransitionTime:(*v1.Time)(0xc421923ec0)}}, WildcardPolicy:"None", RouterCanonicalHostname:""}, route.RouteIngress{Host:"route-1-e2e-test-router-scoped-smdsm.myapps.mycompany.com", RouterName:"test-override", Conditions:[]route.RouteIngressCondition{route.RouteIngressCondition{Type:"Admitted", Status:"True", Reason:"", Message:"", LastTransitionTime:(*v1.Time)(0xc420ac0020)}}, WildcardPolicy:"None", RouterCanonicalHostname:""}}
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:36
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:11:17.645: INFO: namespace : e2e-test-router-scoped-smdsm api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:29.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:33.943 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:26
The HAProxy router
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:67
should override the route host with a custom value [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:109
------------------------------
S
------------------------------
[sig-storage] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:21.144: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:22.818: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-8qd2x
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:11:23.493: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-8qd2x" to be "success or failure"
Jul 9 19:11:23.539: INFO: Pod "downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 46.274665ms
Jul 9 19:11:25.582: INFO: Pod "downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08953228s
Jul 9 19:11:27.618: INFO: Pod "downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125776966s
STEP: Saw pod success
Jul 9 19:11:27.618: INFO: Pod "downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:27.651: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:11:27.733: INFO: Waiting for pod downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:11:27.763: INFO: Pod downwardapi-volume-8b1ddb7d-83e6-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:27.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8qd2x" for this suite.
Jul 9 19:11:33.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:37.118: INFO: namespace: e2e-tests-projected-8qd2x, resource: bindings, ignored listing per whitelist
Jul 9 19:11:37.781: INFO: namespace e2e-tests-projected-8qd2x deletion completed in 9.97551978s
• [SLOW TEST:16.637 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:29.284: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:11:31.450: INFO: configPath is now "/tmp/e2e-test-router-reencrypt-rvndf-user.kubeconfig"
Jul 9 19:11:31.450: INFO: The user is now "e2e-test-router-reencrypt-rvndf-user"
Jul 9 19:11:31.450: INFO: Creating project "e2e-test-router-reencrypt-rvndf"
Jul 9 19:11:31.579: INFO: Waiting on permissions in project "e2e-test-router-reencrypt-rvndf" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:41
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:29
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:11:31.726: INFO: namespace : e2e-test-router-reencrypt-rvndf api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:37.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [8.543 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:18
The HAProxy router [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:52
should support reencrypt to services backed by a serving certificate automatically [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:53
no router installed on the cluster
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:44
------------------------------
SSS
------------------------------
[Conformance][templates] templateinstance cross-namespace test should create and delete objects across namespaces [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:30
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:29.732: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:11:31.293: INFO: configPath is now "/tmp/e2e-test-templates-b585v-user.kubeconfig"
Jul 9 19:11:31.293: INFO: The user is now "e2e-test-templates-b585v-user"
Jul 9 19:11:31.293: INFO: Creating project "e2e-test-templates-b585v"
Jul 9 19:11:31.440: INFO: Waiting on permissions in project "e2e-test-templates-b585v" ...
[BeforeEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:31.487: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:11:33.023: INFO: configPath is now "/tmp/e2e-test-templates2-nghzw-user.kubeconfig"
Jul 9 19:11:33.023: INFO: The user is now "e2e-test-templates2-nghzw-user"
Jul 9 19:11:33.023: INFO: Creating project "e2e-test-templates2-nghzw"
Jul 9 19:11:33.263: INFO: Waiting on permissions in project "e2e-test-templates2-nghzw" ...
[It] should create and delete objects across namespaces [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:30 Jul 9 19:11:33.304: INFO: Running 'oc adm --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-templates2-nghzw policy add-role-to-user admin e2e-test-templates-b585v-user' role "admin" added: "e2e-test-templates-b585v-user" STEP: creating the templateinstance STEP: deleting the templateinstance [AfterEach] [Conformance][templates] templateinstance cross-namespace test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:11:35.637: INFO: namespace : e2e-test-templates-b585v api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Conformance][templates] templateinstance cross-namespace test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:11:41.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [Conformance][templates] templateinstance cross-namespace test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:11:41.769: INFO: namespace : e2e-test-templates2-nghzw api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Conformance][templates] templateinstance cross-namespace test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:11:47.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:18.111 seconds] 
[Conformance][templates] templateinstance cross-namespace test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:22
  should create and delete objects across namespaces [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_cross_namespace.go:30
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified nonexistent volume subPath should have the correct mode and owner using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:53
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:37.783: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:39.394: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-t499c
STEP: Waiting for a default service account to be provisioned in namespace
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:53
STEP: Creating a pod to test emptydir subpath on tmpfs
Jul 9 19:11:40.061: INFO: Waiting up to 5m0s for pod "pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276" in namespace "e2e-tests-emptydir-t499c" to be "success or failure"
Jul 9 19:11:40.092: INFO: Pod "pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.167765ms
Jul 9 19:11:42.123: INFO: Pod "pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062039439s
STEP: Saw pod success
Jul 9 19:11:42.123: INFO: Pod "pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:42.162: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:11:42.262: INFO: Waiting for pod pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:11:42.292: INFO: Pod pod-94fd2bd8-83e6-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:42.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-t499c" for this suite.
Jul 9 19:11:48.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:50.328: INFO: namespace: e2e-tests-emptydir-t499c, resource: bindings, ignored listing per whitelist
Jul 9 19:11:52.155: INFO: namespace e2e-tests-emptydir-t499c deletion completed in 9.826706369s
• [SLOW TEST:14.372 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  when FSGroup is specified
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
    nonexistent volume subPath should have the correct mode and owner using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:53
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified volume on default medium should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:61
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:37.830: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:39.688: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-kmg6b
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:61
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 9 19:11:40.434: INFO: Waiting up to 5m0s for pod "pod-95357f3c-83e6-11e8-992b-28d244b00276" in namespace "e2e-tests-emptydir-kmg6b" to be "success or failure"
Jul 9 19:11:40.471: INFO: Pod "pod-95357f3c-83e6-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 36.789536ms
Jul 9 19:11:42.542: INFO: Pod "pod-95357f3c-83e6-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.108162582s
STEP: Saw pod success
Jul 9 19:11:42.542: INFO: Pod "pod-95357f3c-83e6-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:42.581: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-95357f3c-83e6-11e8-992b-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:11:42.668: INFO: Waiting for pod pod-95357f3c-83e6-11e8-992b-28d244b00276 to disappear
Jul 9 19:11:42.704: INFO: Pod pod-95357f3c-83e6-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:42.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kmg6b" for this suite.
Jul 9 19:11:48.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:11:52.324: INFO: namespace: e2e-tests-emptydir-kmg6b, resource: bindings, ignored listing per whitelist
Jul 9 19:11:53.097: INFO: namespace e2e-tests-emptydir-kmg6b deletion completed in 10.351115117s
• [SLOW TEST:15.267 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  when FSGroup is specified
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
    volume on default medium should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:61
------------------------------
[sig-storage] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:47.844: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:49.287: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-gr2vb
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-9ada9ad5-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:11:49.927: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-gr2vb" to be "success or failure"
Jul 9 19:11:49.955: INFO: Pod "pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 27.730422ms
Jul 9 19:11:51.982: INFO: Pod "pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.055067008s
STEP: Saw pod success
Jul 9 19:11:51.982: INFO: Pod "pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:11:52.009: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276 container projected-secret-volume-test:
STEP: delete the pod
Jul 9 19:11:52.115: INFO: Waiting for pod pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:11:52.148: INFO: Pod pod-projected-secrets-9adfcf80-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:52.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gr2vb" for this suite.
Jul 9 19:11:58.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:12:00.976: INFO: namespace: e2e-tests-projected-gr2vb, resource: bindings, ignored listing per whitelist
Jul 9 19:12:02.037: INFO: namespace e2e-tests-projected-gr2vb deletion completed in 9.85639901s
• [SLOW TEST:14.194 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables should fail resolving unresolvable valueFrom in docker build environment variable references [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:122
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:53.098: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:11:55.298: INFO: configPath is now "/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig"
Jul 9 19:11:55.298: INFO: The user is now "e2e-test-build-valuefrom-q75fk-user"
Jul 9 19:11:55.298: INFO: Creating project "e2e-test-build-valuefrom-q75fk"
Jul 9 19:11:55.416: INFO: Waiting on permissions in project "e2e-test-build-valuefrom-q75fk" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:27
Jul 9 19:11:55.477: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:38
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:11:55.628: INFO: Running scan #0
Jul 9 19:11:55.628: INFO: Checking language ruby
Jul 9 19:11:55.683: INFO: Checking tag 2.0
Jul 9 19:11:55.683: INFO: Checking tag 2.2
Jul 9 19:11:55.683: INFO: Checking tag 2.3
Jul 9 19:11:55.683: INFO: Checking tag 2.4
Jul 9 19:11:55.683: INFO: Checking tag 2.5
Jul 9 19:11:55.683: INFO: Checking tag latest
Jul 9 19:11:55.683: INFO: Checking language nodejs
Jul 9 19:11:55.725: INFO: Checking tag 0.10
Jul 9 19:11:55.725: INFO: Checking tag 4
Jul 9 19:11:55.725: INFO: Checking tag 6
Jul 9 19:11:55.725: INFO: Checking tag 8
Jul 9 19:11:55.725: INFO: Checking tag latest
Jul 9 19:11:55.725: INFO: Checking language perl
Jul 9 19:11:55.757: INFO: Checking tag 5.16
Jul 9 19:11:55.757: INFO: Checking tag 5.20
Jul 9 19:11:55.757: INFO: Checking tag 5.24
Jul 9 19:11:55.757: INFO: Checking tag latest
Jul 9 19:11:55.757: INFO: Checking language php
Jul 9 19:11:55.789: INFO: Checking tag 7.1
Jul 9 19:11:55.789: INFO: Checking tag latest
Jul 9 19:11:55.789: INFO: Checking tag 5.5
Jul 9 19:11:55.789: INFO: Checking tag 5.6
Jul 9 19:11:55.789: INFO: Checking tag 7.0
Jul 9 19:11:55.789: INFO: Checking language python
Jul 9 19:11:55.825: INFO: Checking tag 3.4
Jul 9 19:11:55.825: INFO: Checking tag 3.5
Jul 9 19:11:55.825: INFO: Checking tag 3.6
Jul 9 19:11:55.825: INFO: Checking tag latest
Jul 9 19:11:55.825: INFO: Checking tag 2.7
Jul 9 19:11:55.825: INFO: Checking tag 3.3
Jul 9 19:11:55.825: INFO: Checking language wildfly
Jul 9 19:11:55.860: INFO: Checking tag 9.0
Jul 9 19:11:55.860: INFO: Checking tag latest
Jul 9 19:11:55.860: INFO: Checking tag 10.0
Jul 9 19:11:55.860: INFO: Checking tag 10.1
Jul 9 19:11:55.860: INFO: Checking tag 11.0
Jul 9 19:11:55.860: INFO: Checking tag 12.0
Jul 9 19:11:55.860: INFO: Checking tag 8.1
Jul 9 19:11:55.860: INFO: Checking language mysql
Jul 9 19:11:55.890: INFO: Checking tag latest
Jul 9 19:11:55.890: INFO: Checking tag 5.5
Jul 9 19:11:55.890: INFO: Checking tag 5.6
Jul 9 19:11:55.890: INFO: Checking tag 5.7
Jul 9 19:11:55.890: INFO: Checking language postgresql
Jul 9 19:11:55.924: INFO: Checking tag latest
Jul 9 19:11:55.924: INFO: Checking tag 9.2
Jul 9 19:11:55.924: INFO: Checking tag 9.4
Jul 9 19:11:55.924: INFO: Checking tag 9.5
Jul 9 19:11:55.924: INFO: Checking tag 9.6
Jul 9 19:11:55.924: INFO: Checking language mongodb
Jul 9 19:11:55.963: INFO: Checking tag 3.4
Jul 9 19:11:55.963: INFO: Checking tag latest
Jul 9 19:11:55.963: INFO: Checking tag 2.4
Jul 9 19:11:55.963: INFO: Checking tag 2.6
Jul 9 19:11:55.963: INFO: Checking tag 3.2
Jul 9 19:11:55.963: INFO: Checking language jenkins
Jul 9 19:11:55.996: INFO: Checking tag latest
Jul 9 19:11:55.996: INFO: Checking tag 1
Jul 9 19:11:55.996: INFO: Checking tag 2
Jul 9 19:11:55.996: INFO: Success!
STEP: creating test image stream
Jul 9 19:11:55.996: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig --namespace=e2e-test-build-valuefrom-q75fk -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/valuefrom/test-is.json'
imagestream.image.openshift.io "test" created
STEP: creating test secret
Jul 9 19:11:56.351: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig --namespace=e2e-test-build-valuefrom-q75fk -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/valuefrom/test-secret.yaml'
secret "mysecret" created
STEP: creating test configmap
Jul 9 19:11:56.911: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig --namespace=e2e-test-build-valuefrom-q75fk -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/valuefrom/test-configmap.yaml'
configmap "myconfigmap" created
[It] should fail resolving unresolvable valueFrom in docker build environment variable references [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:122
STEP: creating test build config
Jul 9 19:11:57.312: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig --namespace=e2e-test-build-valuefrom-q75fk -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/valuefrom/failed-docker-build-value-from-config.yaml'
buildconfig.build.openshift.io "mydockertest" created
STEP: starting test build
Jul 9 19:11:57.636: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-valuefrom-q75fk-user.kubeconfig --namespace=e2e-test-build-valuefrom-q75fk mydockertest -o=name'
Jul 9 19:11:57.927: INFO: start-build output with args [mydockertest -o=name]:
Error>
StdOut> build/mydockertest-1
StdErr>
Jul 9 19:11:57.928: INFO: Waiting for mydockertest-1 to complete
Jul 9 19:12:04.011: INFO: WaitForABuild returning with error: The build "mydockertest-1" status is "Error"
Jul 9 19:12:04.011: INFO: Done waiting for mydockertest-1: util.BuildResult{BuildPath:"build/mydockertest-1", BuildName:"mydockertest-1", StartBuildStdErr:"", StartBuildStdOut:"build/mydockertest-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421470300), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc420dca1e0)} with error: The build "mydockertest-1" status is "Error"
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:31
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:12:04.100: INFO: namespace : e2e-test-build-valuefrom-q75fk api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:10.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:17.089 seconds]
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:13
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:26
    should fail resolving unresolvable valueFrom in docker build environment variable references [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:122
------------------------------
SS
------------------------------
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that implements NetworkPolicy
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:431
Jul 9 19:12:10.506: INFO: Could not check network plugin name: exit status 1. Assuming a non-OpenShift plugin
Jul 9 19:12:10.506: INFO: This plugin does not implement NetworkPolicy.
[AfterEach] when using a plugin that implements NetworkPolicy
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:10.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.316 seconds]
NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48
  when using a plugin that implements NetworkPolicy
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:430
    should enforce policy based on PodSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:86

    Jul 9 19:12:10.506: This plugin does not implement NetworkPolicy.

    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:02.041: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:12:03.665: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-wvqk5
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127
[It] should allow activeDeadlineSeconds to be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 9 19:12:07.146: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a379c7c5-83e6-11e8-8401-28d244b00276"
Jul 9 19:12:07.146: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a379c7c5-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-pods-wvqk5" to be "terminated due to deadline exceeded"
Jul 9 19:12:07.174: INFO: Pod "pod-update-activedeadlineseconds-a379c7c5-83e6-11e8-8401-28d244b00276": Phase="Running", Reason="", readiness=true. Elapsed: 27.773489ms
Jul 9 19:12:09.205: INFO: Pod "pod-update-activedeadlineseconds-a379c7c5-83e6-11e8-8401-28d244b00276": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.058771469s
Jul 9 19:12:09.205: INFO: Pod "pod-update-activedeadlineseconds-a379c7c5-83e6-11e8-8401-28d244b00276" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:09.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wvqk5" for this suite.
Jul 9 19:12:15.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:12:17.208: INFO: namespace: e2e-tests-pods-wvqk5, resource: bindings, ignored listing per whitelist
Jul 9 19:12:18.839: INFO: namespace e2e-tests-pods-wvqk5 deletion completed in 9.598590478s
• [SLOW TEST:16.798 seconds]
[k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should allow activeDeadlineSeconds to be updated [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[k8s.io] InitContainer should invoke init containers on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:103
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] InitContainer
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:13.874: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:15.455: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-init-container-9dmbk
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:40
[It] should invoke init containers on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:103
STEP: creating the pod
Jul 9 19:11:16.159: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:11:56.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-9dmbk" for this suite.
Jul 9 19:12:18.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:12:20.767: INFO: namespace: e2e-tests-init-container-9dmbk, resource: bindings, ignored listing per whitelist
Jul 9 19:12:22.186: INFO: namespace e2e-tests-init-container-9dmbk deletion completed in 25.976091167s
• [SLOW TEST:68.313 seconds]
[k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should invoke init containers on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:103
------------------------------
SS
------------------------------
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:ImageLookup][registry] Image policy
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:22.189: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:ImageLookup][registry] Image policy
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:12:24.143: INFO: configPath is now "/tmp/e2e-test-resolve-local-names-x2lpn-user.kubeconfig"
Jul 9 19:12:24.143: INFO: The user is now "e2e-test-resolve-local-names-x2lpn-user"
Jul 9 19:12:24.143: INFO: Creating project "e2e-test-resolve-local-names-x2lpn"
Jul 9 19:12:24.351: INFO: Waiting on permissions in project "e2e-test-resolve-local-names-x2lpn" ...
[It] should perform lookup when the pod has the resolve-names annotation [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:73
Jul 9 19:12:24.400: INFO: Running 'oc import-image --config=/tmp/e2e-test-resolve-local-names-x2lpn-user.kubeconfig --namespace=e2e-test-resolve-local-names-x2lpn busybox:latest --confirm'
The import completed successfully.
Name: busybox Namespace: e2e-test-resolve-local-names-x2lpn Created: Less than a second ago Labels: Annotations: openshift.io/image.dockerRepositoryCheck=2018-07-10T02:12:26Z Docker Pull Spec: docker-registry.default.svc:5000/e2e-test-resolve-local-names-x2lpn/busybox Image Lookup: local=false Unique Images: 1 Tags: 1 latest tagged from busybox:latest * busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 Less than a second ago Image Name: busybox:latest Docker Image: busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 Name: sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 Created: Less than a second ago Annotations: image.openshift.io/dockerLayersOrder=ascending Image Size: 724.6kB Image Created: 6 weeks ago Author: Arch: amd64 Command: sh Working Dir: User: Exposes Ports: Docker Labels: Environment: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin [AfterEach] [Feature:ImageLookup][registry] Image policy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:12:26.312: INFO: namespace : e2e-test-resolve-local-names-x2lpn api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:ImageLookup][registry] Image policy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:12:32.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] [10.191 seconds] [Feature:ImageLookup][registry] Image policy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:14 should perform lookup when the pod has the resolve-names annotation [Suite:openshift/conformance/parallel] [It] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:73 default image resolution is not configured, can't verify pod resolution /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:99 ------------------------------ [Feature:DeploymentConfig] deploymentconfigs with multiple image change triggers [Conformance] should run a successful deployment with multiple triggers [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:513 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:10:48.830: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:10:51.252: INFO: configPath is now "/tmp/e2e-test-cli-deployment-92vwf-user.kubeconfig" Jul 9 19:10:51.252: INFO: The user is now "e2e-test-cli-deployment-92vwf-user" Jul 9 19:10:51.252: INFO: Creating project "e2e-test-cli-deployment-92vwf" Jul 9 19:10:51.410: INFO: Waiting on permissions in project "e2e-test-cli-deployment-92vwf" ... 
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should run a successful deployment with multiple triggers [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:513
STEP: creating DC
STEP: verifying the deployment is marked complete
Jul 9 19:11:52.159: INFO: Latest rollout of dc/example (rc/example-1) is complete.
[AfterEach] with multiple image change triggers [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:509
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:11:54.263: INFO: namespace : e2e-test-cli-deployment-92vwf api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:34.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:105.552 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  with multiple image change triggers [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:507
    should run a successful deployment with multiple triggers [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:513
------------------------------
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] HostPath
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:34.383: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:12:36.680: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-hostpath-tscb6
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should support existing single file subPath [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:167
Jul 9 19:12:37.452: INFO: No SSH Key for provider : 'GetSigner(...) not implemented for '
[AfterEach] [sig-storage] HostPath
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:37.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-tscb6" for this suite.
Jul 9 19:12:43.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:12:45.782: INFO: namespace: e2e-tests-hostpath-tscb6, resource: bindings, ignored listing per whitelist
Jul 9 19:12:48.312: INFO: namespace e2e-tests-hostpath-tscb6 deletion completed in 10.796412144s

S [SKIPPING] [13.929 seconds]
[sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should support existing single file subPath [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:167

  Jul 9 19:12:37.452: No SSH Key for provider : 'GetSigner(...) not implemented for '

  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
S
------------------------------
[k8s.io] Pods should support remote command execution over websockets [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:470
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:10.508: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:12:12.627: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-9775r
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127
[It] should support remote command execution over websockets [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:470
Jul 9 19:12:13.409: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:17.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9775r" for this suite.
Jul 9 19:12:56.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:12:59.715: INFO: namespace: e2e-tests-pods-9775r, resource: bindings, ignored listing per whitelist
Jul 9 19:13:00.520: INFO: namespace e2e-tests-pods-9775r deletion completed in 42.593555306s

• [SLOW TEST:50.012 seconds]
[k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should support remote command execution over websockets [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:470
------------------------------
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:13:00.521: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:00.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:00.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Area:Networking] services
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10
  when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
    should prevent connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel] [BeforeEach]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:40

    Jul 9 19:13:00.521: This plugin does not isolate namespaces by default.

    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[Conformance][templates] templateinstance impersonation tests should pass impersonation update tests [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:252
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:48.316: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:12:50.508: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-user.kubeconfig"
Jul 9 19:12:50.508: INFO: The user is now "e2e-test-templates-b5fkm-user"
Jul 9 19:12:50.508: INFO: Creating project "e2e-test-templates-b5fkm"
Jul 9 19:12:50.659: INFO: Waiting on permissions in project "e2e-test-templates-b5fkm" ...
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:57
Jul 9 19:12:51.908: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-adminuser.kubeconfig"
Jul 9 19:12:52.180: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-impersonateuser.kubeconfig"
Jul 9 19:12:52.429: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-impersonatebygroupuser.kubeconfig"
Jul 9 19:12:52.677: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-edituser1.kubeconfig"
Jul 9 19:12:52.922: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-edituser2.kubeconfig"
Jul 9 19:12:53.178: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-viewuser.kubeconfig"
Jul 9 19:12:53.434: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-impersonatebygroupuser.kubeconfig"
[It] should pass impersonation update tests [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:252
STEP: testing as system:admin user
STEP: testing as e2e-test-templates-b5fkm-adminuser user
Jul 9 19:12:54.343: INFO: configPath is now "/tmp/e2e-test-templates-b5fkm-adminuser.kubeconfig"
[AfterEach] [Conformance][templates] templateinstance impersonation tests
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:12:54.783: INFO: namespace : e2e-test-templates-b5fkm api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][templates] templateinstance impersonation tests
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:13:00.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [Conformance][templates] templateinstance impersonation tests
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:221

• Failure [12.949 seconds]
[Conformance][templates] templateinstance impersonation tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:27
  should pass impersonation update tests [Suite:openshift/conformance/parallel] [It]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:252

  Expected an error to have occurred. Got:
      <nil>: nil

  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:322
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-network] Networking
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:18.841: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:12:20.604: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pod-network-test-7j6f8
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7j6f8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 9 19:12:21.211: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 9 19:12:43.788: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.2.2.242:8080/dial?request=hostName&protocol=http&host=10.2.2.238&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-7j6f8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:12:43.789: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:12:44.203: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:12:44.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7j6f8" for this suite.
Jul 9 19:13:06.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:08.366: INFO: namespace: e2e-tests-pod-network-test-7j6f8, resource: bindings, ignored listing per whitelist
Jul 9 19:13:09.624: INFO: namespace e2e-tests-pod-network-test-7j6f8 deletion completed in 25.380544553s

• [SLOW TEST:50.783 seconds]
[sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:01.265: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:03.565: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-7mdjl
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 9 19:13:04.438: INFO: Waiting up to 5m0s for pod "pod-c740ba1e-83e6-11e8-881a-28d244b00276" in namespace "e2e-tests-emptydir-7mdjl" to be "success or failure"
Jul 9 19:13:04.481: INFO: Pod "pod-c740ba1e-83e6-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 43.665856ms
Jul 9 19:13:06.523: INFO: Pod "pod-c740ba1e-83e6-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085696002s
STEP: Saw pod success
Jul 9 19:13:06.523: INFO: Pod "pod-c740ba1e-83e6-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:06.566: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-c740ba1e-83e6-11e8-881a-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:13:06.673: INFO: Waiting for pod pod-c740ba1e-83e6-11e8-881a-28d244b00276 to disappear
Jul 9 19:13:06.717: INFO: Pod pod-c740ba1e-83e6-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:06.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7mdjl" for this suite.
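The 'Waiting up to 5m0s for pod ... to be "success or failure"' lines above come from a framework helper that polls the pod phase until it reaches a terminal state. A simplified sketch of that loop (phase names are taken from the log; the `getPhase` callback is an invented stand-in for a GET against the apiserver, and the real helper sleeps between polls):

```go
package main

import (
	"errors"
	"fmt"
)

// waitForSuccessOrFailure polls the pod phase until it is terminal.
// getPhase is a hypothetical callback standing in for an apiserver GET.
func waitForSuccessOrFailure(getPhase func() string, maxPolls int) (string, error) {
	for i := 0; i < maxPolls; i++ {
		switch phase := getPhase(); phase {
		case "Succeeded", "Failed":
			// Either terminal phase satisfies the condition, matching the
			// log's `satisfied condition "success or failure"`.
			return phase, nil
		case "Pending", "Running":
			// Not terminal yet; poll again (the real framework sleeps here).
		default:
			return phase, fmt.Errorf("unexpected pod phase %q", phase)
		}
	}
	return "", errors.New("timed out waiting for pod to terminate")
}

func main() {
	// Simulate the Pending -> Pending -> Succeeded sequence from the log.
	phases := []string{"Pending", "Pending", "Succeeded"}
	i := 0
	getPhase := func() string {
		p := phases[i]
		if i < len(phases)-1 {
			i++
		}
		return p
	}
	phase, err := waitForSuccessOrFailure(getPhase, 10)
	fmt.Println(phase, err)
}
```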
Jul 9 19:13:12.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:15.123: INFO: namespace: e2e-tests-emptydir-7mdjl, resource: bindings, ignored listing per whitelist
Jul 9 19:13:17.537: INFO: namespace e2e-tests-emptydir-7mdjl deletion completed in 10.770969057s

• [SLOW TEST:16.271 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[Feature:DeploymentConfig] deploymentconfigs paused [Conformance] should disable actions on deployments [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:742
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:12:32.382: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:12:34.249: INFO: configPath is now "/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig"
Jul 9 19:12:34.249: INFO: The user is now "e2e-test-cli-deployment-lzhmc-user"
Jul 9 19:12:34.249: INFO: Creating project "e2e-test-cli-deployment-lzhmc"
Jul 9 19:12:34.415: INFO: Waiting on permissions in project "e2e-test-cli-deployment-lzhmc" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should disable actions on deployments [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:742
STEP: verifying that we cannot start a new deployment via oc deploy
Jul 9 19:12:34.793: INFO: Running 'oc deploy --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --latest'
Jul 9 19:12:35.082: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc deploy --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --latest] [] Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
Flag --latest has been deprecated, use 'oc rollout latest' instead
error: cannot deploy a paused deployment config
 Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
Flag --latest has been deprecated, use 'oc rollout latest' instead
error: cannot deploy a paused deployment config
 [] 0xc421067200 exit status 1 true [0xc420efe310 0xc420efe390 0xc420efe390] [0xc420efe310 0xc420efe390] [0xc420efe318 0xc420efe370] [0x916090 0x916190] 0xc420ea0600 }:
Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
Flag --latest has been deprecated, use 'oc rollout latest' instead
error: cannot deploy a paused deployment config
STEP: verifying that we cannot start a new deployment via oc rollout
Jul 9 19:12:35.082: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc latest dc/paused'
Jul 9 19:12:35.319: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc latest dc/paused] [] error: cannot deploy a paused deployment config
 error: cannot deploy a paused deployment config
 [] 0xc42090aed0 exit status 1 true [0xc4219600c0 0xc4219600e8 0xc4219600e8] [0xc4219600c0 0xc4219600e8] [0xc4219600c8 0xc4219600e0] [0x916090 0x916190] 0xc42199b9e0 }:
error: cannot deploy a paused deployment config
STEP: verifying that we cannot cancel a deployment
Jul 9 19:12:35.319: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc cancel dc/paused'
Jul 9 19:12:35.670: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc cancel dc/paused] [] unable to cancel paused deployment e2e-test-cli-deployment-lzhmc/paused
there have been no replication controllers for e2e-test-cli-deployment-lzhmc/paused
 unable to cancel paused deployment e2e-test-cli-deployment-lzhmc/paused
there have been no replication controllers for e2e-test-cli-deployment-lzhmc/paused
 [] 0xc42090b3b0 exit status 1 true [0xc4219600f8 0xc421960128 0xc421960128] [0xc4219600f8 0xc421960128] [0xc421960100 0xc421960118] [0x916090 0x916190] 0xc42199baa0 }:
unable to cancel paused deployment e2e-test-cli-deployment-lzhmc/paused
there have been no replication controllers for e2e-test-cli-deployment-lzhmc/paused
STEP: verifying that we cannot retry a deployment
Jul 9 19:12:35.670: INFO: Running 'oc deploy --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --retry'
Jul 9 19:12:35.890: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc deploy --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --retry] [] Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
error: cannot retry a paused deployment config
 Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
error: cannot retry a paused deployment config
 [] 0xc42090b860 exit status 1 true [0xc421960130 0xc421960200 0xc421960200] [0xc421960130 0xc421960200] [0xc421960138 0xc4219601f0] [0x916090 0x916190] 0xc42199bb60 }:
Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
error: cannot retry a paused deployment config
STEP: verifying that we cannot rollout retry a deployment
Jul 9 19:12:35.890: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc retry dc/paused'
Jul 9 19:12:36.152: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc retry dc/paused] [] error: unable to retry paused deployment config "paused"
 error: unable to retry paused deployment config "paused"
 [] 0xc42090bce0 exit status 1 true [0xc421960210 0xc421960280 0xc421960280] [0xc421960210 0xc421960280] [0xc421960220 0xc421960270] [0x916090 0x916190] 0xc42199bc20 }:
error: unable to retry paused deployment config "paused"
STEP: verifying that we cannot rollback a deployment
Jul 9 19:12:36.152: INFO: Running 'oc rollback --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --to-version 1'
Jul 9 19:12:36.396: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollback --config=/tmp/e2e-test-cli-deployment-lzhmc-user.kubeconfig --namespace=e2e-test-cli-deployment-lzhmc dc/paused --to-version 1] [] error: cannot rollback a paused deployment config
 error: cannot rollback a paused deployment config
 [] 0xc4210ecb70 exit status 1 true [0xc421af61e8 0xc421af6220 0xc421af6220] [0xc421af61e8 0xc421af6220] [0xc421af61f8 0xc421af6210] [0x916090 0x916190] 0xc4215e0060 }:
error: cannot rollback a paused deployment config
Jul 9 19:12:41.132: INFO: Latest rollout of dc/paused (rc/paused-1) is complete.
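Every action above fails with a paused-specific error: conceptually, the deployment-config machinery guards each verb (latest, cancel, retry, rollback) behind the config's paused flag. A toy sketch of that guard (the type and function are invented for illustration; the error strings are taken from the log output above):

```go
package main

import (
	"errors"
	"fmt"
)

// DeploymentConfig is a toy model carrying only what this sketch needs.
type DeploymentConfig struct {
	Name   string
	Paused bool
}

// guard returns the kind of error the oc commands above surfaced when a
// rollout-style verb is attempted against a paused deployment config.
func guard(dc DeploymentConfig, verb string) error {
	if !dc.Paused {
		return nil // actions are allowed on an unpaused config
	}
	switch verb {
	case "latest":
		return errors.New("cannot deploy a paused deployment config")
	case "cancel":
		return fmt.Errorf("unable to cancel paused deployment %s", dc.Name)
	case "retry":
		return fmt.Errorf("unable to retry paused deployment config %q", dc.Name)
	case "rollback":
		return errors.New("cannot rollback a paused deployment config")
	}
	return nil
}

func main() {
	dc := DeploymentConfig{Name: "paused", Paused: true}
	for _, verb := range []string{"latest", "cancel", "retry", "rollback"} {
		fmt.Printf("%s: %v\n", verb, guard(dc, verb))
	}
}
```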
STEP: making sure it updates observedGeneration after being paused
[AfterEach] paused [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:738
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:12:43.581: INFO: namespace : e2e-test-cli-deployment-lzhmc api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:23.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:51.262 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
paused [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:736
should disable actions on deployments [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:742
------------------------------
SS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:09.625: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:11.246: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-containers-zdwzn
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test use defaults
Jul 9 19:13:11.887: INFO: Waiting up to 5m0s for pod "client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-containers-zdwzn" to be "success or failure"
Jul 9 19:13:11.916: INFO: Pod "client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.099543ms
Jul 9 19:13:13.990: INFO: Pod "client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103415267s
STEP: Saw pod success
Jul 9 19:13:13.990: INFO: Pod "client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:14.019: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:13:14.135: INFO: Waiting for pod client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:13:14.172: INFO: Pod client-containers-cbb9f2ce-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:14.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zdwzn" for this suite.
Jul 9 19:13:20.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:23.147: INFO: namespace: e2e-tests-containers-zdwzn, resource: bindings, ignored listing per whitelist
Jul 9 19:13:23.659: INFO: namespace e2e-tests-containers-zdwzn deletion completed in 9.455658364s
• [SLOW TEST:14.034 seconds]
[k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should use the image defaults if command and args are blank [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:11:52.157: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:11:53.896: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-kqpsc
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:11:54.615: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating secret with name s-test-opt-del-9db12ddf-83e6-11e8-8fe2-28d244b00276
STEP: Creating secret with name s-test-opt-upd-9db12e14-83e6-11e8-8fe2-28d244b00276
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9db12ddf-83e6-11e8-8fe2-28d244b00276
STEP: Updating secret s-test-opt-upd-9db12e14-83e6-11e8-8fe2-28d244b00276
STEP: Creating secret with name s-test-opt-create-9db12e26-83e6-11e8-8fe2-28d244b00276
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:05.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kqpsc" for this suite.
Jul 9 19:13:27.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:29.000: INFO: namespace: e2e-tests-secrets-kqpsc, resource: bindings, ignored listing per whitelist
Jul 9 19:13:31.240: INFO: namespace e2e-tests-secrets-kqpsc deletion completed in 25.968448725s
• [SLOW TEST:99.083 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:17.540: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:20.288: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-var-expansion-bzdhq
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test env composition
Jul 9 19:13:21.123: INFO: Waiting up to 5m0s for pod "var-expansion-d1387e25-83e6-11e8-881a-28d244b00276" in namespace "e2e-tests-var-expansion-bzdhq" to be "success or failure"
Jul 9 19:13:21.168: INFO: Pod "var-expansion-d1387e25-83e6-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 45.213081ms
Jul 9 19:13:23.210: INFO: Pod "var-expansion-d1387e25-83e6-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086529997s
Jul 9 19:13:25.269: INFO: Pod "var-expansion-d1387e25-83e6-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146297789s
STEP: Saw pod success
Jul 9 19:13:25.270: INFO: Pod "var-expansion-d1387e25-83e6-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:25.313: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod var-expansion-d1387e25-83e6-11e8-881a-28d244b00276 container dapi-container:
STEP: delete the pod
Jul 9 19:13:25.417: INFO: Waiting for pod var-expansion-d1387e25-83e6-11e8-881a-28d244b00276 to disappear
Jul 9 19:13:25.458: INFO: Pod var-expansion-d1387e25-83e6-11e8-881a-28d244b00276 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:25.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bzdhq" for this suite.
Jul 9 19:13:31.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:35.196: INFO: namespace: e2e-tests-var-expansion-bzdhq, resource: bindings, ignored listing per whitelist
Jul 9 19:13:36.455: INFO: namespace e2e-tests-var-expansion-bzdhq deletion completed in 10.949058678s
• [SLOW TEST:18.915 seconds]
[k8s.io] Variable Expansion
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should allow composing env vars into new env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:23.660: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:25.211: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-d6xjb
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-map-d40d907f-83e6-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:13:25.896: INFO: Waiting up to 5m0s for pod "pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-secrets-d6xjb" to be "success or failure"
Jul 9 19:13:25.928: INFO: Pod "pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.358634ms
Jul 9 19:13:27.974: INFO: Pod "pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077946634s
STEP: Saw pod success
Jul 9 19:13:27.974: INFO: Pod "pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:28.002: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276 container secret-volume-test:
STEP: delete the pod
Jul 9 19:13:28.077: INFO: Waiting for pod pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:13:28.104: INFO: Pod pod-secrets-d41333e3-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:28.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d6xjb" for this suite.
Jul 9 19:13:34.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:37.397: INFO: namespace: e2e-tests-secrets-d6xjb, resource: bindings, ignored listing per whitelist
Jul 9 19:13:37.621: INFO: namespace e2e-tests-secrets-d6xjb deletion completed in 9.480839271s
• [SLOW TEST:13.961 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-api-machinery] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:23.646: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:25.194: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-jvv2t
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward api env vars
Jul 9 19:13:25.945: INFO: Waiting up to 5m0s for pod "downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276" in namespace "e2e-tests-downward-api-jvv2t" to be "success or failure"
Jul 9 19:13:25.974: INFO: Pod "downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.397911ms
Jul 9 19:13:28.002: INFO: Pod "downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057000308s
Jul 9 19:13:30.036: INFO: Pod "downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091581003s
STEP: Saw pod success
Jul 9 19:13:30.036: INFO: Pod "downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:30.068: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276 container dapi-container:
STEP: delete the pod
Jul 9 19:13:30.149: INFO: Waiting for pod downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:13:30.177: INFO: Pod downward-api-d41a6e73-83e6-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:30.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jvv2t" for this suite.
Jul 9 19:13:36.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:39.008: INFO: namespace: e2e-tests-downward-api-jvv2t, resource: bindings, ignored listing per whitelist
Jul 9 19:13:39.763: INFO: namespace e2e-tests-downward-api-jvv2t deletion completed in 9.552506513s
• [SLOW TEST:16.117 seconds]
[sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:37
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Projected should project all components that make up the projection API [Projection] [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:31.244: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:32.967: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-c2mgg
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should project all components that make up the projection API [Projection] [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-projected-all-test-volume-d8b8fea4-83e6-11e8-8fe2-28d244b00276
STEP: Creating secret with name secret-projected-all-test-volume-d8b8fe90-83e6-11e8-8fe2-28d244b00276
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 9 19:13:33.762: INFO: Waiting up to 5m0s for pod "projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-c2mgg" to be "success or failure"
Jul 9 19:13:33.794: INFO: Pod "projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.255872ms
Jul 9 19:13:35.830: INFO: Pod "projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068069492s
Jul 9 19:13:37.861: INFO: Pod "projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099079179s
STEP: Saw pod success
Jul 9 19:13:37.862: INFO: Pod "projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:37.894: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276 container projected-all-volume-test:
STEP: delete the pod
Jul 9 19:13:37.981: INFO: Waiting for pod projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:13:38.012: INFO: Pod projected-volume-d8b8fe5a-83e6-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:38.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c2mgg" for this suite.
Jul 9 19:13:44.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:47.657: INFO: namespace: e2e-tests-projected-c2mgg, resource: bindings, ignored listing per whitelist
Jul 9 19:13:47.785: INFO: namespace e2e-tests-projected-c2mgg deletion completed in 9.733539364s
• [SLOW TEST:16.541 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should project all components that make up the projection API [Projection] [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:36.458: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:38.586: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-8rb45
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 9 19:13:39.444: INFO: Waiting up to 5m0s for pod "pod-dc233d27-83e6-11e8-881a-28d244b00276" in namespace "e2e-tests-emptydir-8rb45" to be "success or failure"
Jul 9 19:13:39.505: INFO: Pod "pod-dc233d27-83e6-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 60.863579ms
Jul 9 19:13:41.548: INFO: Pod "pod-dc233d27-83e6-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103172176s
STEP: Saw pod success
Jul 9 19:13:41.548: INFO: Pod "pod-dc233d27-83e6-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:13:41.593: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-dc233d27-83e6-11e8-881a-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:13:41.685: INFO: Waiting for pod pod-dc233d27-83e6-11e8-881a-28d244b00276 to disappear
Jul 9 19:13:41.727: INFO: Pod pod-dc233d27-83e6-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:41.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8rb45" for this suite.
Jul 9 19:13:47.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:52.792: INFO: namespace: e2e-tests-emptydir-8rb45, resource: bindings, ignored listing per whitelist
Jul 9 19:13:52.924: INFO: namespace e2e-tests-emptydir-8rb45 deletion completed in 11.151615087s
• [SLOW TEST:16.467 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:00.523: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:02.562: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pod-network-test-gwdcd
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gwdcd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 9 19:13:03.346: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 9 19:13:25.983: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.2.2.248:8080/dial?request=hostName&protocol=udp&host=10.2.2.243&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-gwdcd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:13:25.983: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:13:26.338: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:26.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-gwdcd" for this suite.
Jul 9 19:13:48.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:13:52.820: INFO: namespace: e2e-tests-pod-network-test-gwdcd, resource: bindings, ignored listing per whitelist
Jul 9 19:13:53.066: INFO: namespace e2e-tests-pod-network-test-gwdcd deletion completed in 26.668327647s
• [SLOW TEST:52.543 seconds]
[sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:431
Jul 9 19:13:53.079: INFO: This plugin does not implement NetworkPolicy.
[AfterEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:53.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48
when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:430
should support allow-all policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:245
Jul 9 19:13:53.079: This plugin does not implement NetworkPolicy.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
S
------------------------------
[Feature:DeploymentConfig] deploymentconfigs should respect image stream tag reference policy [Conformance] resolve the image pull spec [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:272
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:47.786: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:13:49.617: INFO: configPath is now "/tmp/e2e-test-cli-deployment-vz7zx-user.kubeconfig"
Jul 9 19:13:49.618: INFO: The user is now "e2e-test-cli-deployment-vz7zx-user"
Jul 9 19:13:49.618: INFO: Creating project "e2e-test-cli-deployment-vz7zx"
Jul 9 19:13:49.769: INFO: Waiting on permissions in project "e2e-test-cli-deployment-vz7zx" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] resolve the image pull spec [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:272
Jul 9 19:13:49.858: INFO: Running 'oc create --config=/tmp/e2e-test-cli-deployment-vz7zx-user.kubeconfig --namespace=e2e-test-cli-deployment-vz7zx -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/deployments/deployment-image-resolution-is.yaml'
imagestream.image.openshift.io "deployment-image-resolution" created
[AfterEach] should respect image stream tag reference policy [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:268
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:13:54.625: INFO: namespace : e2e-test-cli-deployment-vz7zx api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:14:00.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:12.916 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
should respect image stream tag reference policy [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:266
resolve the image pull spec [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:272
------------------------------
[k8s.io] Pods should be submitted and removed [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:39.764: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:41.217: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-jq7dk
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127
[It] should be submitted and removed [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jul 9 19:13:44.065: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-dd98635e-83e6-11e8-bd2e-28d244b00276", GenerateName:"", Namespace:"e2e-tests-pods-jq7dk", SelfLink:"/api/v1/namespaces/e2e-tests-pods-jq7dk/pods/pod-submit-remove-dd98635e-83e6-11e8-bd2e-28d244b00276", UID:"ddae555c-83e6-11e8-84c6-0af96768d57e", ResourceVersion:"73676", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666785621, loc:(*time.Location)(0x6b11480)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"826854293", "name":"foo"}, Annotations:map[string]string{"openshift.io/scc":"anyuid"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ttjmn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc421245a00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"k8s.gcr.io/nginx-slim-amd64:0.20", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ttjmn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc421245a80), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc421034538), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-10-0-130-54.us-west-2.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc421245d40), ImagePullSecrets:[]v1.LocalObjectReference{v1.LocalObjectReference{Name:"default-dockercfg-nrch4"}}, Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785621, loc:(*time.Location)(0x6b11480)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785623, loc:(*time.Location)(0x6b11480)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666785621, loc:(*time.Location)(0x6b11480)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.0.130.54", PodIP:"10.2.2.9", StartTime:(*v1.Time)(0xc421962340), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc421962360), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/nginx-slim-amd64:0.20", ImageID:"docker-pullable://k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b", ContainerID:"docker://a721f4ab308b2de0f53691e6f5d4ef5382b19d4044df015d2b6a98b0ecb64ed2"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:13:53.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jq7dk" for this suite.
Jul 9 19:13:59.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:14:02.828: INFO: namespace: e2e-tests-pods-jq7dk, resource: bindings, ignored listing per whitelist
Jul 9 19:14:03.321: INFO: namespace e2e-tests-pods-jq7dk deletion completed in 10.074916195s
• [SLOW TEST:23.557 seconds]
[k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should be submitted and removed [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified new files should be created with FSGroup ownership when container is non-root [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:49
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:00.704: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:14:02.510: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-dt8nh
STEP: Waiting for a default service account to be provisioned in namespace
[It] new files should be created with FSGroup ownership when container is non-root [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:49
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 9 19:14:03.362: INFO: Waiting up to 5m0s for pod "pod-ea682ef8-83e6-11e8-8fe2-28d244b00276" in namespace "e2e-tests-emptydir-dt8nh" to be "success or failure"
Jul 9 19:14:03.404: INFO: Pod "pod-ea682ef8-83e6-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 41.886007ms
Jul 9 19:14:05.443: INFO: Pod "pod-ea682ef8-83e6-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.081038301s
STEP: Saw pod success
Jul 9 19:14:05.443: INFO: Pod "pod-ea682ef8-83e6-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:14:05.488: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-ea682ef8-83e6-11e8-8fe2-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:14:05.567: INFO: Waiting for pod pod-ea682ef8-83e6-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:14:05.599: INFO: Pod pod-ea682ef8-83e6-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:14:05.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dt8nh" for this suite.
Jul 9 19:14:11.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:14:14.948: INFO: namespace: e2e-tests-emptydir-dt8nh, resource: bindings, ignored listing per whitelist
Jul 9 19:14:15.767: INFO: namespace e2e-tests-emptydir-dt8nh deletion completed in 10.11986951s
• [SLOW TEST:15.063 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
when FSGroup is specified
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
new files should be created with FSGroup ownership when container is non-root [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:49
------------------------------
[Feature:AnnotationTrigger] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/trigger/annotation.go:29
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:AnnotationTrigger] Annotation trigger
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:37.623: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:AnnotationTrigger] Annotation trigger
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:13:39.183: INFO: configPath is now "/tmp/e2e-test-cli-deployment-2hv6q-user.kubeconfig"
Jul 9 19:13:39.183: INFO: The user is now "e2e-test-cli-deployment-2hv6q-user"
Jul 9 19:13:39.183: INFO: Creating project "e2e-test-cli-deployment-2hv6q"
Jul 9 19:13:39.325: INFO: Waiting on permissions in project "e2e-test-cli-deployment-2hv6q" ...
[It] reconciles after the image is overwritten [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/trigger/annotation.go:29
STEP: creating a Deployment
STEP: tagging the docker.io/library/centos:latest as test:v1 image to create ImageStream
Jul 9 19:13:39.422: INFO: Running 'oc tag --config=/tmp/e2e-test-cli-deployment-2hv6q-user.kubeconfig --namespace=e2e-test-cli-deployment-2hv6q docker.io/library/centos:latest test:v1'
Jul 9 19:13:39.670: INFO: Tag test:v1 set to docker.io/library/centos:latest.
STEP: waiting for the initial image to be replaced from ImageStream
STEP: setting Deployment image repeatedly to ' ' to fight with annotation trigger
STEP: waiting for the image to be injected by annotation trigger
[AfterEach] [Feature:AnnotationTrigger] Annotation trigger
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:13:43.532: INFO: namespace : e2e-test-cli-deployment-2hv6q api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:AnnotationTrigger] Annotation trigger
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:14:25.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:47.973 seconds]
[Feature:AnnotationTrigger] Annotation trigger
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/trigger/annotation.go:20
reconciles after the image is overwritten [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/trigger/annotation.go:29
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:100
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:25.600: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:14:27.153: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-hhn2q
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:100
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:14:27.731: INFO: Waiting up to 5m0s for pod "metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276" in namespace "e2e-tests-downward-api-hhn2q" to be "success or failure"
Jul 9 19:14:27.776: INFO: Pod "metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 45.181634ms
Jul 9 19:14:29.816: INFO: Pod "metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085654956s
Jul 9 19:14:31.847: INFO: Pod "metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11590508s
STEP: Saw pod success
Jul 9 19:14:31.847: INFO: Pod "metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:14:31.881: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:14:31.949: INFO: Waiting for pod metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276 to disappear
Jul 9 19:14:31.981: INFO: Pod metadata-volume-f8eebb12-83e6-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:14:31.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hhn2q" for this suite.
Jul 9 19:14:38.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:14:39.634: INFO: namespace: e2e-tests-downward-api-hhn2q, resource: bindings, ignored listing per whitelist
Jul 9 19:14:41.402: INFO: namespace e2e-tests-downward-api-hhn2q deletion completed in 9.382069444s
• [SLOW TEST:15.803 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:100
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:53.082: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:13:55.129: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pod-network-test-psv7q
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-psv7q
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 9 19:13:55.838: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 9 19:14:16.629: INFO: ExecWithOptions {Command:[/bin/sh -c timeout -t 15 curl -g -q -s --connect-timeout 1 http://10.2.2.11:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-psv7q PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:14:16.629: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:14:16.952: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:14:16.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-psv7q" for this suite.
Jul 9 19:14:39.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:14:41.446: INFO: namespace: e2e-tests-pod-network-test-psv7q, resource: bindings, ignored listing per whitelist
Jul 9 19:14:43.230: INFO: namespace e2e-tests-pod-network-test-psv7q deletion completed in 26.238510831s
• [SLOW TEST:50.148 seconds]
[sig-network] Networking
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[Feature:DeploymentConfig] deploymentconfigs with multiple image change triggers [Conformance] should run a successful deployment with a trigger used by different containers [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:522
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:15.770: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:14:17.604: INFO: configPath is now "/tmp/e2e-test-cli-deployment-42zb4-user.kubeconfig"
Jul 9 19:14:17.604: INFO: The user is now "e2e-test-cli-deployment-42zb4-user"
Jul 9 19:14:17.604: INFO: Creating project "e2e-test-cli-deployment-42zb4"
Jul 9 19:14:17.731: INFO: Waiting on permissions in project "e2e-test-cli-deployment-42zb4" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should run a successful deployment with a trigger used by different containers [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:522
STEP: verifying the deployment is marked complete
Jul 9 19:14:25.461: INFO: Latest rollout of dc/example (rc/example-1) is complete.
[AfterEach] with multiple image change triggers [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:509
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:14:27.527: INFO: namespace : e2e-test-cli-deployment-42zb4 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:07.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:51.836 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
with multiple image change triggers [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:507
should run a successful deployment with a trigger used by different containers [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:522
------------------------------
[Feature:Builds][Conformance] oc new-app should succeed with a --name of 58 characters [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:49
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance] oc new-app
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:03.322: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance] oc new-app
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:14:05.052: INFO: configPath is now "/tmp/e2e-test-new-app-xn8nh-user.kubeconfig"
Jul 9 19:14:05.052: INFO: The user is now "e2e-test-new-app-xn8nh-user"
Jul 9 19:14:05.052: INFO: Creating project "e2e-test-new-app-xn8nh"
Jul 9 19:14:05.274: INFO: Waiting on permissions in project "e2e-test-new-app-xn8nh" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:26
Jul 9 19:14:05.347: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:30
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:14:05.505: INFO: Running scan #0
Jul 9 19:14:05.505: INFO: Checking language ruby
Jul 9 19:14:05.552: INFO: Checking tag 2.0
Jul 9 19:14:05.552: INFO: Checking tag 2.2
Jul 9 19:14:05.552: INFO: Checking tag 2.3
Jul 9 19:14:05.552: INFO: Checking tag 2.4
Jul 9 19:14:05.552: INFO: Checking tag 2.5
Jul 9 19:14:05.552: INFO: Checking tag latest
Jul 9 19:14:05.552: INFO: Checking language nodejs
Jul 9 19:14:05.596: INFO: Checking tag 0.10
Jul 9 19:14:05.596: INFO: Checking tag 4
Jul 9 19:14:05.596: INFO: Checking tag 6
Jul 9 19:14:05.596: INFO: Checking tag 8
Jul 9 19:14:05.596: INFO: Checking tag latest
Jul 9 19:14:05.596: INFO: Checking language perl
Jul 9 19:14:05.647: INFO: Checking tag 5.20
Jul 9 19:14:05.647: INFO: Checking tag 5.24
Jul 9 19:14:05.647: INFO: Checking tag latest
Jul 9 19:14:05.647: INFO: Checking tag 5.16
Jul 9 19:14:05.647: INFO: Checking language php
Jul 9 19:14:05.689: INFO: Checking tag 7.1
Jul 9 19:14:05.689: INFO: Checking tag latest
Jul 9 19:14:05.689: INFO: Checking tag 5.5
Jul 9 19:14:05.689: INFO: Checking tag 5.6
Jul 9 19:14:05.689: INFO: Checking tag 7.0
Jul 9 19:14:05.689: INFO: Checking language python
Jul 9 19:14:05.740: INFO: Checking tag latest
Jul 9 19:14:05.740: INFO: Checking tag 2.7
Jul 9 19:14:05.740: INFO: Checking tag 3.3
Jul 9 19:14:05.740: INFO: Checking tag 3.4
Jul 9 19:14:05.740: INFO: Checking tag 3.5
Jul 9 19:14:05.740: INFO: Checking tag 3.6
Jul 9 19:14:05.740: INFO: Checking language wildfly
Jul 9 19:14:05.783: INFO: Checking tag latest
Jul 9 19:14:05.783: INFO: Checking tag 10.0
Jul 9 19:14:05.783: INFO: Checking tag 10.1
Jul 9 19:14:05.783: INFO: Checking tag 11.0
Jul 9 19:14:05.783: INFO: Checking tag 12.0
Jul 9 19:14:05.783: INFO: Checking tag 8.1
Jul 9 19:14:05.783: INFO: Checking tag 9.0
Jul 9 19:14:05.783: INFO: Checking language mysql
Jul 9 19:14:05.830: INFO: Checking tag 5.5
Jul 9 19:14:05.830: INFO: Checking tag 5.6
Jul 9 19:14:05.830: INFO: Checking tag 5.7
Jul 9 19:14:05.830: INFO: Checking tag latest
Jul 9 19:14:05.830: INFO: Checking language postgresql
Jul 9 19:14:05.879: INFO: Checking tag 9.4
Jul 9 19:14:05.879: INFO: Checking tag 9.5
Jul 9 19:14:05.879: INFO: Checking tag 9.6
Jul 9 19:14:05.879: INFO: Checking tag latest
Jul 9 19:14:05.879: INFO: Checking tag 9.2
Jul 9 19:14:05.879: INFO: Checking language mongodb
Jul 9 19:14:05.930: INFO: Checking tag 2.6
Jul 9 19:14:05.931: INFO: Checking tag 3.2
Jul 9 19:14:05.931: INFO: Checking tag 3.4
Jul 9 19:14:05.931: INFO: Checking tag latest
Jul 9 19:14:05.931: INFO: Checking tag 2.4
Jul 9 19:14:05.931: INFO: Checking language jenkins
Jul 9 19:14:05.973: INFO: Checking tag 1
Jul 9 19:14:05.973: INFO: Checking tag 2
Jul 9 19:14:05.973: INFO: Checking tag latest
Jul 9 19:14:05.973: INFO: Success!
[It] should succeed with a --name of 58 characters [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:49
STEP: calling oc new-app
Jul 9 19:14:05.973: INFO: Running 'oc new-app --config=/tmp/e2e-test-new-app-xn8nh-user.kubeconfig --namespace=e2e-test-new-app-xn8nh https://github.com/openshift/nodejs-ex --name a234567890123456789012345678901234567890123456789012345678'
--> Found image 5c36a77 (2 weeks old) in image stream "openshift/nodejs" under tag "8" for "nodejs"

    Node.js 8
    ---------
    Node.js 8 available as container is a base platform for building and running various Node.js 8 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
    Tags: builder, nodejs, nodejs8

    * The source repository appears to match: nodejs
    * A source build using source code from https://github.com/openshift/nodejs-ex will be created
      * The resulting image will be pushed to image stream "a234567890123456789012345678901234567890123456789012345678:latest"
      * Use 'start-build' to trigger a new build
    * This image will be deployed in deployment config "a234567890123456789012345678901234567890123456789012345678"
    * Port 8080/tcp will be load balanced by service "a234567890123456789012345678901234567890123456789012345678"
      * Other containers can access this service through the hostname "a234567890123456789012345678901234567890123456789012345678"

--> Creating resources ...
    imagestream "a234567890123456789012345678901234567890123456789012345678" created
    buildconfig "a234567890123456789012345678901234567890123456789012345678" created
    deploymentconfig "a234567890123456789012345678901234567890123456789012345678" created
    service "a234567890123456789012345678901234567890123456789012345678" created
--> Success
    Build scheduled, use 'oc logs -f bc/a234567890123456789012345678901234567890123456789012345678' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/a234567890123456789012345678901234567890123456789012345678'
    Run 'oc status' to view your app.
STEP: waiting for the build to complete
STEP: waiting for the deployment to complete
Jul 9 19:14:45.592: INFO: waiting for deploymentconfig e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678 to be available with version 1
Jul 9 19:14:49.680: INFO: deploymentconfig e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678 available after 4.08804044s
pods: a23456789012345678901234567890123456789012345678901234567895p48
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:40
[AfterEach] [Feature:Builds][Conformance] oc new-app
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:14:49.742: INFO: namespace : e2e-test-new-app-xn8nh api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance] oc new-app
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:11.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:68.485 seconds]
[Feature:Builds][Conformance] oc new-app
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:16
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:24
    should succeed with a --name of 58 characters [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:49
------------------------------
S
------------------------------
[Feature:DeploymentConfig] deploymentconfigs when tagging images [Conformance] should successfully tag the deployed
image [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:441
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:14:43.232: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:14:45.156: INFO: configPath is now "/tmp/e2e-test-cli-deployment-j6sbn-user.kubeconfig"
Jul 9 19:14:45.156: INFO: The user is now "e2e-test-cli-deployment-j6sbn-user"
Jul 9 19:14:45.156: INFO: Creating project "e2e-test-cli-deployment-j6sbn"
Jul 9 19:14:45.297: INFO: Waiting on permissions in project "e2e-test-cli-deployment-j6sbn" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should successfully tag the deployed image [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:441
STEP: creating the deployment config fixture
STEP: verifying the deployment is marked complete
Jul 9 19:14:54.877: INFO: Latest rollout of dc/tag-images (rc/tag-images-1) is complete.
STEP: verifying the deployer service account can update imagestreamtags and user can get them
STEP: verifying the post deployment action happened: tag is set
[AfterEach] when tagging images [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:437
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:14:57.102: INFO: namespace : e2e-test-cli-deployment-j6sbn api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:19.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:35.977 seconds]
[Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  when tagging images [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:435
    should successfully tag the deployed image [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:441
------------------------------
S
------------------------------
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables should fail
resolving unresolvable valueFrom in sti build environment variable references [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:105
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:15:07.607: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:15:09.399: INFO: configPath is now "/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig"
Jul 9 19:15:09.399: INFO: The user is now "e2e-test-build-valuefrom-6mbp7-user"
Jul 9 19:15:09.399: INFO: Creating project "e2e-test-build-valuefrom-6mbp7"
Jul 9 19:15:09.541: INFO: Waiting on permissions in project "e2e-test-build-valuefrom-6mbp7" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:27
Jul 9 19:15:09.595: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:38
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:15:09.731: INFO: Running scan #0
Jul 9 19:15:09.731: INFO: Checking language ruby
Jul 9 19:15:09.768: INFO: Checking tag 2.0
Jul 9 19:15:09.768: INFO: Checking tag 2.2
Jul 9 19:15:09.768: INFO: Checking tag 2.3
Jul 9 19:15:09.768: INFO: Checking tag 2.4
Jul 9 19:15:09.768: INFO: Checking tag 2.5
Jul 9 19:15:09.768: INFO: Checking tag latest
Jul 9 19:15:09.768: INFO: Checking language nodejs
Jul 9 19:15:09.812: INFO: Checking tag 0.10
Jul 9 19:15:09.812: INFO: Checking tag 4
Jul 9 19:15:09.812: INFO: Checking tag 6
Jul 9 19:15:09.812: INFO: Checking tag 8
Jul 9 19:15:09.812: INFO: Checking tag latest
Jul 9 19:15:09.812: INFO: Checking language perl
Jul 9 19:15:09.850: INFO: Checking tag 5.16
Jul 9 19:15:09.850: INFO: Checking tag 5.20
Jul 9 19:15:09.850: INFO: Checking tag 5.24
Jul 9 19:15:09.850: INFO: Checking tag latest
Jul 9 19:15:09.850: INFO: Checking language php
Jul 9 19:15:09.892: INFO: Checking tag latest
Jul 9 19:15:09.892: INFO: Checking tag 5.5
Jul 9 19:15:09.892: INFO: Checking tag 5.6
Jul 9 19:15:09.892: INFO: Checking tag 7.0
Jul 9 19:15:09.892: INFO: Checking tag 7.1
Jul 9 19:15:09.892: INFO: Checking language python
Jul 9 19:15:09.928: INFO: Checking tag 2.7
Jul 9 19:15:09.928: INFO: Checking tag 3.3
Jul 9 19:15:09.928: INFO: Checking tag 3.4
Jul 9 19:15:09.928: INFO: Checking tag 3.5
Jul 9 19:15:09.928: INFO: Checking tag 3.6
Jul 9 19:15:09.928: INFO: Checking tag latest
Jul 9 19:15:09.928: INFO: Checking language wildfly
Jul 9 19:15:09.966: INFO: Checking tag latest
Jul 9 19:15:09.966: INFO: Checking tag 10.0
Jul 9 19:15:09.966: INFO: Checking tag 10.1
Jul 9 19:15:09.966: INFO: Checking tag 11.0
Jul 9 19:15:09.966: INFO: Checking tag 12.0
Jul 9 19:15:09.966: INFO: Checking tag 8.1
Jul 9 19:15:09.966: INFO: Checking tag 9.0
Jul 9 19:15:09.966: INFO: Checking language mysql
Jul 9 19:15:10.002: INFO: Checking tag 5.5
Jul 9 19:15:10.002: INFO: Checking tag 5.6
Jul 9 19:15:10.002: INFO: Checking tag 5.7
Jul 9 19:15:10.002: INFO: Checking tag latest
Jul 9 19:15:10.002: INFO: Checking language postgresql
Jul 9 19:15:10.041: INFO: Checking tag 9.5
Jul 9 19:15:10.041: INFO: Checking tag 9.6
Jul 9 19:15:10.041: INFO: Checking tag latest
Jul 9 19:15:10.041: INFO: Checking tag 9.2
Jul 9 19:15:10.041: INFO: Checking tag 9.4
Jul 9 19:15:10.041: INFO: Checking language mongodb
Jul 9 19:15:10.081: INFO: Checking tag 3.2
Jul 9 19:15:10.081: INFO: Checking tag 3.4
Jul 9 19:15:10.081: INFO: Checking tag latest
Jul 9 19:15:10.081: INFO: Checking tag 2.4
Jul 9 19:15:10.081: INFO: Checking tag 2.6
Jul 9 19:15:10.081: INFO: Checking language jenkins
Jul 9 19:15:10.115: INFO: Checking tag 1
Jul 9 19:15:10.115: INFO: Checking tag 2
Jul 9 19:15:10.115: INFO: Checking tag latest
Jul 9 19:15:10.115: INFO: Success!
STEP: creating test image stream
Jul 9 19:15:10.115: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig --namespace=e2e-test-build-valuefrom-6mbp7 -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/valuefrom/test-is.json'
imagestream.image.openshift.io "test" created
STEP: creating test secret
Jul 9 19:15:10.486: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig --namespace=e2e-test-build-valuefrom-6mbp7 -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/valuefrom/test-secret.yaml'
secret "mysecret" created
STEP: creating test configmap
Jul 9 19:15:10.765: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig --namespace=e2e-test-build-valuefrom-6mbp7 -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/valuefrom/test-configmap.yaml'
configmap "myconfigmap" created
[It] should fail resolving unresolvable valueFrom in sti build environment variable references [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:105
STEP: creating test build config
Jul 9 19:15:11.083: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig --namespace=e2e-test-build-valuefrom-6mbp7 -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/valuefrom/failed-sti-build-value-from-config.yaml'
buildconfig.build.openshift.io "mys2itest" created
STEP: starting test build
Jul 9 19:15:11.382: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-valuefrom-6mbp7-user.kubeconfig --namespace=e2e-test-build-valuefrom-6mbp7 mys2itest -o=name'
Jul 9 19:15:11.722: INFO: start-build output with args [mys2itest -o=name]:
Error>
StdOut>
build/mys2itest-1
StdErr>
Jul 9 19:15:11.723: INFO: Waiting for mys2itest-1 to complete
Jul 9 19:15:17.798: INFO: WaitForABuild returning with error: The build "mys2itest-1" status is "Error"
Jul 9 19:15:17.798: INFO: Done waiting for mys2itest-1: util.BuildResult{BuildPath:"build/mys2itest-1", BuildName:"mys2itest-1", StartBuildStdErr:"", StartBuildStdOut:"build/mys2itest-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc420eb8f00), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42078c1e0)} with error: The build "mys2itest-1" status is "Error"
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:31
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:15:17.867: INFO: namespace : e2e-test-build-valuefrom-6mbp7 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:23.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:16.351 seconds]
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:13
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:26
    should fail resolving unresolvable valueFrom in sti build environment variable references [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:105
------------------------------
[Conformance][Area:Networking][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:169
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:15:23.960: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:15:25.857: INFO: configPath is now "/tmp/e2e-test-router-scoped-65jwn-user.kubeconfig"
Jul 9 19:15:25.857: INFO: The user is now "e2e-test-router-scoped-65jwn-user"
Jul 9 19:15:25.857: INFO: Creating project "e2e-test-router-scoped-65jwn"
Jul 9 19:15:25.994: INFO: Waiting on permissions in project "e2e-test-router-scoped-65jwn" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:48
Jul 9 19:15:26.062: INFO: Running 'oc new-app --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-65jwn -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router'
--> Deploying template "e2e-test-router-scoped-65jwn/" for "/tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml" to project e2e-test-router-scoped-65jwn

     * With parameters:
        * IMAGE=openshift/origin-haproxy-router
        * SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"]

--> Creating resources ...
    pod "router-scoped" created
    pod "router-override" created
    pod "router-override-domains" created
    rolebinding "system-router" created
    route "route-1" created
    route "route-2" created
    route "route-override-domain-1" created
    route "route-override-domain-2" created
    service "endpoints" created
    pod "endpoint-1" created
--> Success
    Access your application via route 'first.example.com'
    Access your application via route 'second.example.com'
    Access your application via route 'y.a.null.ptr'
    Access your application via route 'main.void.str'
    Run 'oc status' to view your app.
[It] should override the route host for overridden domains with a custom value [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:169
Jul 9 19:15:27.169: INFO: Creating new exec pod
STEP: creating a scoped router with overridden domains from a config file "/tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml"
STEP: waiting for the healthz endpoint to respond
Jul 9 19:15:34.315: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-65jwn execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
  code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.2.2.35' "http://10.2.2.35:1936/healthz" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -eq 200 ]]; then
      exit 0
    fi
    if [[ $code -ne 503 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
  sleep 1
done
'
Jul 9 19:15:34.940: INFO: stderr: ""
STEP: waiting for the valid route to respond
Jul 9 19:15:34.940: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-65jwn execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
  code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: route-override-domain-1-e2e-test-router-scoped-65jwn.apps.veto.test' "http://10.2.2.35/Letter" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -eq 200 ]]; then
      exit 0
    fi
    if [[ $code -ne 503 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
  sleep 1
done
'
Jul 9 19:15:35.607: INFO: stderr: ""
STEP: checking that the stored domain name does not match a route
Jul 9 19:15:35.607: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-65jwn execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: y.a.null.ptr' "http://10.2.2.35/Letter"'
Jul 9 19:15:36.246: INFO: stderr: ""
STEP: checking that route-override-domain-1-e2e-test-router-scoped-65jwn.apps.veto.test matches a route
Jul 9 19:15:36.246: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-65jwn execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-override-domain-1-e2e-test-router-scoped-65jwn.apps.veto.test' "http://10.2.2.35/Letter"'
Jul 9 19:15:36.960: INFO: stderr: ""
STEP: checking that route-override-domain-2-e2e-test-router-scoped-65jwn.apps.veto.test matches a route
Jul 9 19:15:36.960: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-65jwn execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-override-domain-2-e2e-test-router-scoped-65jwn.apps.veto.test' "http://10.2.2.35/Letter"'
Jul 9 19:15:37.613: INFO: stderr: ""
STEP: checking that the router reported the correct ingress and override
[AfterEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:36
[AfterEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:15:37.766: INFO: namespace : e2e-test-router-scoped-65jwn api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:51.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:27.904 seconds]
[Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:26
  The HAProxy router
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:67
    should override the route host for overridden domains with a custom value [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:169
------------------------------
S
------------------------------
[Feature:DeploymentConfig] deploymentconfigs viewing rollout history [Conformance] should print the rollout history [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:602 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:14:41.404: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:14:43.030: INFO: configPath is now "/tmp/e2e-test-cli-deployment-bvczx-user.kubeconfig" Jul 9 19:14:43.030: INFO: The user is now "e2e-test-cli-deployment-bvczx-user" Jul 9 19:14:43.030: INFO: Creating project "e2e-test-cli-deployment-bvczx" Jul 9 19:14:43.210: INFO: Waiting on permissions in project "e2e-test-cli-deployment-bvczx" ... [JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should print the rollout history [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:602 STEP: waiting for the first rollout to complete Jul 9 19:14:57.434: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete. 
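The inline shell loop near the top of this log (from the scoped-router test) repeatedly probes a route with curl: an HTTP 200 means the check passed, any status other than 503 means a hard failure, and a 503 or a failed curl invocation means sleep and retry. A minimal standalone sketch of that decision logic, as a hypothetical helper (not part of the test suite itself):

```python
def probe_decision(rc: int, code: int) -> str:
    """Classify one curl probe, mirroring the shell loop in this log.

    rc is curl's exit status; code is the HTTP status it printed
    via -w '%{http_code}'.
    """
    if rc != 0:
        return "retry"    # curl itself failed: log the error and try again
    if code == 200:
        return "success"  # route resolved as expected
    if code != 503:
        return "failure"  # unexpected status: give up immediately
    return "retry"        # 503: router not serving this host yet, keep polling
```

The loop in the log wraps this decision in a `sleep 1` retry until either a terminal outcome is reached or the surrounding timeout expires.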
STEP: updating the deployment config in order to trigger a new rollout
STEP: waiting for the second rollout to complete
Jul 9 19:15:12.069: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-2) is complete.
Jul 9 19:15:12.069: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-bvczx-user.kubeconfig --namespace=e2e-test-cli-deployment-bvczx history dc/deployment-simple'
STEP: checking the history for substrings
deploymentconfigs "deployment-simple"
REVISION  STATUS    CAUSE
1         Complete  config change
2         Complete  config change
[AfterEach] viewing rollout history [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:598
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:15:14.466: INFO: namespace : e2e-test-cli-deployment-bvczx api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:15:52.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:71.121 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
viewing rollout history [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:596
should print the rollout history [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:602 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:15:52.526: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:15:54.037: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-vj5sj STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 9 19:15:54.718: INFO: Waiting up to 5m0s for pod "pod-2cc7bf6f-83e7-11e8-8401-28d244b00276" in namespace "e2e-tests-emptydir-vj5sj" to be "success or failure" Jul 9 19:15:54.748: INFO: Pod "pod-2cc7bf6f-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.638ms Jul 9 19:15:56.780: INFO: Pod "pod-2cc7bf6f-83e7-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062550676s STEP: Saw pod success Jul 9 19:15:56.780: INFO: Pod "pod-2cc7bf6f-83e7-11e8-8401-28d244b00276" satisfied condition "success or failure" Jul 9 19:15:56.929: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-2cc7bf6f-83e7-11e8-8401-28d244b00276 container test-container: STEP: delete the pod Jul 9 19:15:57.118: INFO: Waiting for pod pod-2cc7bf6f-83e7-11e8-8401-28d244b00276 to disappear Jul 9 19:15:57.146: INFO: Pod pod-2cc7bf6f-83e7-11e8-8401-28d244b00276 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:15:57.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vj5sj" for this suite. 
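The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above show the framework polling a pod's phase until it reaches a terminal state (Succeeded or Failed) or the timeout expires. A minimal sketch of that wait loop, assuming a caller-supplied phase getter (the real framework reads the phase from the API server):

```python
import time


def wait_success_or_failure(get_phase, timeout_s=300.0, poll_s=0.5,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports a terminal pod phase.

    get_phase is a hypothetical callable returning a phase string such as
    "Pending", "Succeeded", or "Failed"; clock and sleep are injectable
    so the loop can be exercised without real delays.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase  # terminal: the condition is decided either way
        sleep(poll_s)     # non-terminal (e.g. Pending): wait and re-poll
    raise TimeoutError("pod did not reach a terminal phase")
```

Note that, as in the log, a Failed pod still "satisfies" the wait condition; whether Failed counts as a test failure is decided by the caller afterwards.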
Jul 9 19:16:03.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:16:04.671: INFO: namespace: e2e-tests-emptydir-vj5sj, resource: bindings, ignored listing per whitelist Jul 9 19:16:06.533: INFO: namespace e2e-tests-emptydir-vj5sj deletion completed in 9.355363525s • [SLOW TEST:14.008 seconds] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ SS ------------------------------ [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] when using a plugin that implements NetworkPolicy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:431 Jul 9 19:16:06.536: INFO: This plugin does not implement NetworkPolicy. 
[AfterEach] when using a plugin that implements NetworkPolicy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:16:06.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] NetworkPolicy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:430 should enforce policy based on NamespaceSelector [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:282 Jul 9 19:16:06.536: This plugin does not implement NetworkPolicy. 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296 ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:15:51.866: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:15:53.579: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-rj487 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38 [It] should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test downward API volume plugin Jul 9 19:15:54.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276" in namespace 
"e2e-tests-downward-api-rj487" to be "success or failure" Jul 9 19:15:54.377: INFO: Pod "downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 74.260454ms Jul 9 19:15:56.621: INFO: Pod "downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.317905061s STEP: Saw pod success Jul 9 19:15:56.621: INFO: Pod "downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure" Jul 9 19:15:56.683: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276 container client-container: STEP: delete the pod Jul 9 19:15:56.753: INFO: Waiting for pod downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276 to disappear Jul 9 19:15:56.784: INFO: Pod downwardapi-volume-2c87287f-83e7-11e8-8fe2-28d244b00276 no longer exists [AfterEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:15:56.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rj487" for this suite. 
Jul 9 19:16:03.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:16:05.110: INFO: namespace: e2e-tests-downward-api-rj487, resource: bindings, ignored listing per whitelist Jul 9 19:16:06.679: INFO: namespace e2e-tests-downward-api-rj487 deletion completed in 9.741913411s • [SLOW TEST:14.813 seconds] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33 should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [sig-storage] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:16:06.538: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:16:08.064: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-pmhns STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858 [It] should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating configMap with name projected-configmap-test-volume-35243cd3-83e7-11e8-8401-28d244b00276 STEP: Creating a pod to test consume configMaps Jul 9 19:16:08.783: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-pmhns" to be "success or failure" Jul 9 19:16:08.837: INFO: Pod "pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 53.581543ms Jul 9 19:16:10.865: INFO: Pod "pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.082002351s STEP: Saw pod success Jul 9 19:16:10.865: INFO: Pod "pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276" satisfied condition "success or failure" Jul 9 19:16:10.900: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276 container projected-configmap-volume-test: STEP: delete the pod Jul 9 19:16:10.967: INFO: Waiting for pod pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276 to disappear Jul 9 19:16:10.996: INFO: Pod pod-projected-configmaps-352a5988-83e7-11e8-8401-28d244b00276 no longer exists [AfterEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:16:10.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pmhns" for this suite. Jul 9 19:16:17.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:16:19.587: INFO: namespace: e2e-tests-projected-pmhns, resource: bindings, ignored listing per whitelist Jul 9 19:16:20.420: INFO: namespace e2e-tests-projected-pmhns deletion completed in 9.389700495s • [SLOW TEST:13.882 seconds] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34 should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ S ------------------------------ [BeforeEach] [Top Level] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419 Jul 9 19:16:20.423: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:16:20.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:16:20.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Area:Networking] services /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418 should allow connections from pods in the default namespace to a service in another namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:60 Jul 9 19:16:20.423: This plugin does not isolate namespaces by default. 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:16:06.680: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:16:08.428: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-pvdqq STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 9 19:16:09.175: INFO: Waiting up to 5m0s for pod "pod-3564eb7c-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-emptydir-pvdqq" to be "success or failure" Jul 9 19:16:09.206: INFO: Pod "pod-3564eb7c-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.751906ms Jul 9 19:16:11.255: INFO: Pod "pod-3564eb7c-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079832714s Jul 9 19:16:13.287: INFO: Pod "pod-3564eb7c-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111368217s STEP: Saw pod success Jul 9 19:16:13.287: INFO: Pod "pod-3564eb7c-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure" Jul 9 19:16:13.322: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-3564eb7c-83e7-11e8-8fe2-28d244b00276 container test-container: STEP: delete the pod Jul 9 19:16:13.405: INFO: Waiting for pod pod-3564eb7c-83e7-11e8-8fe2-28d244b00276 to disappear Jul 9 19:16:13.436: INFO: Pod pod-3564eb7c-83e7-11e8-8fe2-28d244b00276 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:16:13.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-pvdqq" for this suite. 
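The EmptyDir cases in this run (0666 on tmpfs, 0777 on the default medium) each mount a volume with a requested mode and verify the resulting permission bits on a file inside it. A minimal local sketch of that mode check, using a plain temp file rather than an actual emptyDir mount (an illustration, not the test's implementation):

```python
import os
import stat


def file_mode_string(path: str) -> str:
    """Render a file's permission bits the way ls -l shows them,
    e.g. '-rw-rw-rw-' for mode 0666 or '-rwxrwxrwx' for 0777."""
    return stat.filemode(os.stat(path).st_mode)
```

The e2e tests compare output like this against the expected string for the mode requested in the volume spec.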
Jul 9 19:16:19.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:16:21.562: INFO: namespace: e2e-tests-emptydir-pvdqq, resource: bindings, ignored listing per whitelist Jul 9 19:16:23.503: INFO: namespace e2e-tests-emptydir-pvdqq deletion completed in 10.027196829s • [SLOW TEST:16.823 seconds] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ SS ------------------------------ [Feature:DeploymentConfig] deploymentconfigs with enhanced status [Conformance] should include various info in status [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:539 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:15:19.211: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:15:21.149: INFO: configPath is now "/tmp/e2e-test-cli-deployment-25qpw-user.kubeconfig" Jul 9 
19:15:21.149: INFO: The user is now "e2e-test-cli-deployment-25qpw-user" Jul 9 19:15:21.149: INFO: Creating project "e2e-test-cli-deployment-25qpw" Jul 9 19:15:21.313: INFO: Waiting on permissions in project "e2e-test-cli-deployment-25qpw" ... [JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should include various info in status [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:539 STEP: verifying the deployment is marked complete Jul 9 19:15:34.015: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete. STEP: verifying that status.replicas is set Jul 9 19:15:34.015: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-25qpw-user.kubeconfig --namespace=e2e-test-cli-deployment-25qpw dc/deployment-simple --output=jsonpath="{.status.replicas}"' STEP: verifying that status.updatedReplicas is set Jul 9 19:15:34.240: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-25qpw-user.kubeconfig --namespace=e2e-test-cli-deployment-25qpw dc/deployment-simple --output=jsonpath="{.status.updatedReplicas}"' STEP: verifying that status.availableReplicas is set Jul 9 19:15:34.503: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-25qpw-user.kubeconfig --namespace=e2e-test-cli-deployment-25qpw dc/deployment-simple --output=jsonpath="{.status.availableReplicas}"' STEP: verifying that status.unavailableReplicas is set Jul 9 19:15:34.763: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-25qpw-user.kubeconfig --namespace=e2e-test-cli-deployment-25qpw dc/deployment-simple --output=jsonpath="{.status.unavailableReplicas}"' [AfterEach] with enhanced status [Conformance] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:535 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:15:37.115: INFO: namespace : e2e-test-cli-deployment-25qpw api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:16:29.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:70.063 seconds] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37 with enhanced status [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:532 should include various info in status [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:539 ------------------------------ [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Conformance][Area:Networking][Feature:Router] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:16:23.505: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:16:25.180: INFO: configPath is now "/tmp/e2e-test-router-stress-vn5rl-user.kubeconfig" Jul 9 19:16:25.180: INFO: The user is now "e2e-test-router-stress-vn5rl-user" Jul 9 19:16:25.180: INFO: Creating project "e2e-test-router-stress-vn5rl" Jul 9 19:16:25.318: INFO: Waiting on permissions in project "e2e-test-router-stress-vn5rl" ... [BeforeEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:52 [AfterEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:40 [AfterEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:16:25.456: INFO: namespace : e2e-test-router-stress-vn5rl api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:16:31.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [8.058 seconds] [Conformance][Area:Networking][Feature:Router] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:30 The HAProxy router [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:86 converges when multiple routers are writing conflicting status [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:168 no router installed on the cluster /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:57 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Secrets /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:16:20.425: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:16:21.998: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-trb4p STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] 
[Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-3d680430-83e7-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:16:22.645: INFO: Waiting up to 5m0s for pod "pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276" in namespace "e2e-tests-secrets-trb4p" to be "success or failure"
Jul 9 19:16:22.676: INFO: Pod "pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.889356ms
Jul 9 19:16:24.706: INFO: Pod "pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061005967s
Jul 9 19:16:26.735: INFO: Pod "pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089959761s
STEP: Saw pod success
Jul 9 19:16:26.735: INFO: Pod "pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:16:26.775: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276 container secret-volume-test:
STEP: delete the pod
Jul 9 19:16:26.841: INFO: Waiting for pod pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276 to disappear
Jul 9 19:16:26.872: INFO: Pod pod-secrets-3d6cff39-83e7-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:26.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-trb4p" for this suite.
Jul 9 19:16:33.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:16:35.481: INFO: namespace: e2e-tests-secrets-trb4p, resource: bindings, ignored listing per whitelist
Jul 9 19:16:36.427: INFO: namespace e2e-tests-secrets-trb4p deletion completed in 9.462154922s
• [SLOW TEST:16.002 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[Feature:DeploymentConfig] deploymentconfigs rolled back [Conformance] should rollback to an older deployment [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:842
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:15:11.809: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:15:13.424: INFO: configPath is now "/tmp/e2e-test-cli-deployment-fb2n9-user.kubeconfig"
Jul 9 19:15:13.424: INFO: The user is now "e2e-test-cli-deployment-fb2n9-user"
Jul 9 19:15:13.424: INFO: Creating project "e2e-test-cli-deployment-fb2n9"
Jul 9 19:15:13.552: INFO: Waiting on permissions in project "e2e-test-cli-deployment-fb2n9" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should rollback to an older deployment [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:842
Jul 9 19:15:27.148: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete.
Jul 9 19:15:27.148: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-fb2n9-user.kubeconfig --namespace=e2e-test-cli-deployment-fb2n9 latest deployment-simple'
STEP: verifying that we are on the second version
Jul 9 19:15:27.463: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-fb2n9-user.kubeconfig --namespace=e2e-test-cli-deployment-fb2n9 dc/deployment-simple --output=jsonpath="{.status.latestVersion}"'
Jul 9 19:15:42.267: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-2) is complete.
STEP: verifying that we can rollback
Jul 9 19:15:42.267: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-fb2n9-user.kubeconfig --namespace=e2e-test-cli-deployment-fb2n9 undo dc/deployment-simple'
Jul 9 19:15:56.225: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-3) is complete.
STEP: verifying that we are on the third version
Jul 9 19:15:56.225: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-fb2n9-user.kubeconfig --namespace=e2e-test-cli-deployment-fb2n9 dc/deployment-simple --output=jsonpath="{.status.latestVersion}"'
[AfterEach] rolled back [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:838
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:15:58.788: INFO: namespace : e2e-test-cli-deployment-fb2n9 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:44.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:93.049 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  rolled back [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:836
    should rollback to an older deployment [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:842
------------------------------
[sig-storage] Projected should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:907
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:36.429: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:37.910: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-78tdl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:907
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:16:38.533: INFO: Waiting up to 5m0s for pod "metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-78tdl" to be "success or failure"
Jul 9 19:16:38.566: INFO: Pod "metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.032762ms
Jul 9 19:16:40.599: INFO: Pod "metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066337047s
STEP: Saw pod success
Jul 9 19:16:40.599: INFO: Pod "metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:16:40.628: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:16:40.698: INFO: Waiting for pod metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276 to disappear
Jul 9 19:16:40.726: INFO: Pod metadata-volume-46e4ef9c-83e7-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:40.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-78tdl" for this suite.
Jul 9 19:16:46.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:16:48.296: INFO: namespace: e2e-tests-projected-78tdl, resource: bindings, ignored listing per whitelist
Jul 9 19:16:50.296: INFO: namespace e2e-tests-projected-78tdl deletion completed in 9.537525694s
• [SLOW TEST:13.866 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:907
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:44.859: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:46.729: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-hostpath-67sp9
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should support existing directory subPath [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:121
Jul 9 19:16:47.360: INFO: No SSH Key for provider : 'GetSigner(...) not implemented for '
[AfterEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:47.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-67sp9" for this suite.
Jul 9 19:16:53.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:16:55.401: INFO: namespace: e2e-tests-hostpath-67sp9, resource: bindings, ignored listing per whitelist
Jul 9 19:16:57.014: INFO: namespace e2e-tests-hostpath-67sp9 deletion completed in 9.602960881s
S [SKIPPING] [12.155 seconds]
[sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should support existing directory subPath [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:121

  Jul 9 19:16:47.360: No SSH Key for provider : 'GetSigner(...) not implemented for '
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[sig-storage] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:50.298: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:51.762: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-sz99l
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-4f2e0a0f-83e7-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:16:52.463: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-sz99l" to be "success or failure"
Jul 9 19:16:52.494: INFO: Pod "pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.28511ms
Jul 9 19:16:54.554: INFO: Pod "pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.090533248s
STEP: Saw pod success
Jul 9 19:16:54.554: INFO: Pod "pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:16:54.607: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276 container projected-secret-volume-test:
STEP: delete the pod
Jul 9 19:16:54.693: INFO: Waiting for pod pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276 to disappear
Jul 9 19:16:54.721: INFO: Pod pod-projected-secrets-4f330743-83e7-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:16:54.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sz99l" for this suite.
Jul 9 19:17:00.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:17:03.092: INFO: namespace: e2e-tests-projected-sz99l, resource: bindings, ignored listing per whitelist
Jul 9 19:17:05.032: INFO: namespace e2e-tests-projected-sz99l deletion completed in 10.271422297s
• [SLOW TEST:14.735 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:57.017: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:58.550: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-h5zmm
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-test-volume-5343002a-83e7-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:16:59.308: INFO: Waiting up to 5m0s for pod "pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276" in namespace "e2e-tests-configmap-h5zmm" to be "success or failure"
Jul 9 19:16:59.342: INFO: Pod "pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.851956ms
Jul 9 19:17:01.372: INFO: Pod "pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063354557s
STEP: Saw pod success
Jul 9 19:17:01.372: INFO: Pod "pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:17:01.401: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276 container configmap-volume-test:
STEP: delete the pod
Jul 9 19:17:01.579: INFO: Waiting for pod pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:17:01.636: INFO: Pod pod-configmaps-53479f06-83e7-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:17:01.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h5zmm" for this suite.
Jul 9 19:17:07.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:17:11.117: INFO: namespace: e2e-tests-configmap-h5zmm, resource: bindings, ignored listing per whitelist
Jul 9 19:17:11.562: INFO: namespace e2e-tests-configmap-h5zmm deletion completed in 9.885698637s
• [SLOW TEST:14.545 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[k8s.io] Sysctls should reject invalid sysctls [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:142
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:16:29.275: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:16:31.583: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-sysctl-vp8nj
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:56
[It] should reject invalid sysctls [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:142
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-sysctl-vp8nj".
STEP: Found 1 events.
Jul 9 19:16:32.407: INFO: At 2018-07-09 19:16:32 -0700 PDT - event for sysctl-433830bf-83e7-11e8-992b-28d244b00276: {default-scheduler } Scheduled: Successfully assigned e2e-tests-sysctl-vp8nj/sysctl-433830bf-83e7-11e8-992b-28d244b00276 to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:16:32.747: INFO: POD NODE PHASE GRACE CONDITIONS
Jul 9 19:16:32.747: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: deployment-simple-2-x8xwq ip-10-0-130-54.us-west-2.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:15:31 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:23 -0700 PDT ContainersNotReady containers with unready status: [myapp]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [myapp]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:15:31 -0700 PDT }]
Jul 9 19:16:32.747: INFO: deployment-simple-3-htj8x ip-10-0-130-54.us-west-2.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:15:45 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:15:52 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:15:45 -0700 PDT }]
Jul 9 19:16:32.747: INFO: execpod98j4h ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:16 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT }]
Jul 9 19:16:32.747: INFO: frontend-1-build ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:27 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:50 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:21 -0700 PDT }]
Jul 9 19:16:32.747: INFO: sysctl-433830bf-83e7-11e8-992b-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:32 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:32 -0700 PDT ContainersNotReady containers with unready status: [test-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:32 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-controller-manager-558dc6fb98-q6vr5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-core-operator-75d546fbbb-c7ctx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:16:32.747: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:16:32.747: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:16:32.747: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:16:32.747: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready
True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }] Jul 9 19:16:32.747: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:16:32.747: INFO: kube-addon-operator-675f99d7f8-c6pdt ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:16:32.747: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:16:32.747: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:16:32.747: INFO: 
tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }] Jul 9 19:16:32.747: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }] Jul 9 19:16:32.747: INFO: tectonic-node-agent-rrwlg ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }] Jul 9 19:16:32.747: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }] Jul 9 19:16:32.747: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }] Jul 9 19:16:32.747: INFO: Jul 9 19:16:32.800: INFO: Logging node info for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:16:32.837: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:76056,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365150208 0} {} 8169092Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260292608 0} {} 8066692Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:16:26 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:16:26 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:16:26 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:16:26 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:16:26 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 
docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 613134454} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 613133841} {[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} {[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} 
{[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} {[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} {[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} {[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} 
{[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} {[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 openshift/origin-haproxy-router:latest] 394965919} {[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} {[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 
docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/directory-sync@sha256:e5e7fe901868853d89c2c0697cc88f0686c6ba1178ca045ec57bfd18e7000048 quay.io/coreos/directory-sync:v0.0.2] 38433928} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[k8s.gcr.io/addon-resizer@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb k8s.gcr.io/addon-resizer:2.1] 26450138} {[quay.io/coreos/tectonic-error-server@sha256:aefa0a012e103bee299c17e798e5830128588b6ef5d4d1f6bc8ae5804bc4d8cd quay.io/coreos/tectonic-error-server:1.1] 12714516} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} 
{[gcr.io/kubernetes-e2e-test-images/hostexec-amd64@sha256:bdaecec5adfa7c79e9525c0992fdab36c2d68066f5e91eff0d1d9e8d73c654ea gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8407119} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64@sha256:2edfad424a541b9e024f26368d3a5b7dcc1d7cd27a4ee8c1d8c3f81d9209ab2e gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6227659} {[openshift/hello-openshift@sha256:aaea76ff622d2f8bcb32e538e7b3cd0ef6d291953f3e7c9f556c1ba5baf47e2e openshift/hello-openshift:latest] 6089990}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:16:32.837: INFO: Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:16:32.875: INFO: Logging pods the kubelet thinks is on node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:16:32.991: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded) Jul 9 19:16:32.991: INFO: Container install-cni ready: true, restart count 0 Jul 9 19:16:32.991: INFO: Container kube-flannel ready: true, restart count 0 Jul 9 19:16:32.991: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded) Jul 9 19:16:32.991: INFO: Container metrics-server ready: true, restart count 0 Jul 9 19:16:32.991: INFO: Container metrics-server-nanny ready: true, restart count 0 Jul 9 19:16:32.991: INFO: execpod98j4h started at 2018-07-09 19:14:15 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Container exec ready: true, restart count 0 Jul 9 19:16:32.991: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Container node-agent ready: true, restart count 3 Jul 9 19:16:32.991: INFO: directory-sync-d84d84d9f-j7pr6 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Container directory-sync ready: true, restart count 0 Jul 9 19:16:32.991: INFO: webconsole-6698d4fbbc-rgsw2 started 
at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Container webconsole ready: true, restart count 0 Jul 9 19:16:32.991: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Container default-http-backend ready: true, restart count 0 Jul 9 19:16:32.991: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Container router ready: true, restart count 0 Jul 9 19:16:32.991: INFO: deployment-simple-3-htj8x started at 2018-07-09 19:15:45 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Container myapp ready: true, restart count 0 Jul 9 19:16:32.991: INFO: frontend-1-build started at 2018-07-09 19:14:21 -0700 PDT (2+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Init container git-clone ready: true, restart count 0 Jul 9 19:16:32.991: INFO: Init container manage-dockerfile ready: true, restart count 0 Jul 9 19:16:32.991: INFO: Container sti-build ready: false, restart count 0 Jul 9 19:16:32.991: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Container kube-proxy ready: true, restart count 0 Jul 9 19:16:32.991: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Container registry ready: true, restart count 0 Jul 9 19:16:32.991: INFO: deployment-simple-2-x8xwq started at 2018-07-09 19:15:31 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.991: INFO: Container myapp ready: false, restart count 0 Jul 9 19:16:32.991: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded) Jul 9 19:16:32.991: INFO: Container alert-buffer ready: true, restart count 0 Jul 9 19:16:32.991: INFO: Container alertmanager ready: true, restart count 0 Jul 9 
19:16:32.991: INFO: Container alertmanager-proxy ready: true, restart count 0 Jul 9 19:16:32.992: INFO: Container alerts-proxy ready: true, restart count 0 Jul 9 19:16:32.992: INFO: Container prom-proxy ready: true, restart count 0 Jul 9 19:16:32.992: INFO: Container prometheus ready: true, restart count 0 Jul 9 19:16:32.992: INFO: sysctl-433830bf-83e7-11e8-992b-28d244b00276 started at 2018-07-09 19:16:32 -0700 PDT (0+1 container statuses recorded) Jul 9 19:16:32.992: INFO: Container test-container ready: false, restart count 0 W0709 19:16:33.039551 11713 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 9 19:16:33.169: INFO: Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:16:33.169: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:39.952369s} Jul 9 19:16:33.169: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.138495s} Jul 9 19:16:33.169: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:18.561353s} Jul 9 19:16:33.169: INFO: Logging node info for node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:16:33.236: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:76069,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: 
,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:16:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:16:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:16:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:16:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:16:30 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS ip-10-0-141-201.us-west-2.compute.internal} {Hostname 
ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:16:33.236: INFO: Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:16:33.286: INFO: Logging pods the kubelet thinks is on node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:17:03.379: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes ip-10-0-141-201.us-west-2.compute.internal:10250) Jul 9 19:17:03.379: INFO: Logging node info for 
node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:17:03.416: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:76531,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig: master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:16:59 
-0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:16:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:16:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:16:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:16:59 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} {InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} {[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 
quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} {[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} 
{[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} 
{[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:17:03.417: INFO: Logging kubelet events for node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:17:03.452: INFO: Logging pods the kubelet thinks is on node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:17:03.641: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.641: INFO: Container kube-apiserver ready: true, restart count 4 Jul 9 19:17:03.641: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.641: INFO: Container tectonic-node-controller ready: true, restart count 0 Jul 9 19:17:03.641: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.641: INFO: Container tectonic-alm-operator ready: true, restart count 0 Jul 9 19:17:03.641: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.641: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0 Jul 9 19:17:03.641: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.641: INFO: 
Container node-agent ready: true, restart count 4 Jul 9 19:17:03.642: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container pod-checkpointer ready: true, restart count 0 Jul 9 19:17:03.642: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded) Jul 9 19:17:03.642: INFO: Container install-cni ready: true, restart count 0 Jul 9 19:17:03.642: INFO: Container kube-flannel ready: true, restart count 0 Jul 9 19:17:03.642: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container openshift-controller-manager ready: true, restart count 3 Jul 9 19:17:03.642: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container tectonic-node-controller-operator ready: true, restart count 0 Jul 9 19:17:03.642: INFO: kube-core-operator-75d546fbbb-c7ctx started at 2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container kube-core-operator ready: true, restart count 0 Jul 9 19:17:03.642: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container tectonic-utility-operator ready: true, restart count 0 Jul 9 19:17:03.642: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container kube-addon-operator ready: true, restart count 0 Jul 9 19:17:03.642: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at (0+0 container statuses recorded) Jul 9 19:17:03.642: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: 
INFO: Container kube-controller-manager ready: true, restart count 1 Jul 9 19:17:03.642: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded) Jul 9 19:17:03.642: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0 Jul 9 19:17:03.642: INFO: Container tectonic-stats-emitter ready: true, restart count 0 Jul 9 19:17:03.642: INFO: Container tectonic-stats-extender ready: true, restart count 0 Jul 9 19:17:03.642: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container tectonic-channel-operator ready: true, restart count 0 Jul 9 19:17:03.642: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container kube-proxy ready: true, restart count 0 Jul 9 19:17:03.642: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container openshift-apiserver ready: true, restart count 0 Jul 9 19:17:03.642: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container tectonic-network-operator ready: true, restart count 0 Jul 9 19:17:03.642: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded) Jul 9 19:17:03.642: INFO: Container dnsmasq ready: true, restart count 0 Jul 9 19:17:03.642: INFO: Container kubedns ready: true, restart count 0 Jul 9 19:17:03.642: INFO: Container sidecar ready: true, restart count 0 Jul 9 19:17:03.642: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container kube-scheduler ready: true, restart count 0 Jul 9 19:17:03.642: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 
-0700 PDT (0+1 container statuses recorded) Jul 9 19:17:03.642: INFO: Container tectonic-clu ready: true, restart count 0 W0709 19:17:03.683471 11713 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 9 19:17:03.841: INFO: Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal STEP: Dumping a list of prepulled images on each node... Jul 9 19:17:03.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sysctl-vp8nj" for this suite. Jul 9 19:17:10.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:17:13.714: INFO: namespace: e2e-tests-sysctl-vp8nj, resource: bindings, ignored listing per whitelist Jul 9 19:17:14.686: INFO: namespace e2e-tests-sysctl-vp8nj deletion completed in 10.762470175s • Failure [45.411 seconds] [k8s.io] Sysctls /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669 should reject invalid sysctls [Suite:openshift/conformance/parallel] [Suite:k8s] [It] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:142 Expected : nil not to be nil /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:177 ------------------------------ S ------------------------------ [Conformance][templates] templateinstance impersonation tests should pass impersonation creation tests [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:231 [BeforeEach] [Top Level] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:17:05.033: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:17:06.734: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-user.kubeconfig" Jul 9 19:17:06.734: INFO: The user is now "e2e-test-templates-6rmnp-user" Jul 9 19:17:06.734: INFO: Creating project "e2e-test-templates-6rmnp" Jul 9 19:17:06.939: INFO: Waiting on permissions in project "e2e-test-templates-6rmnp" ... 
[BeforeEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:57 Jul 9 19:17:08.098: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-adminuser.kubeconfig" Jul 9 19:17:08.432: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-impersonateuser.kubeconfig" Jul 9 19:17:08.778: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-impersonatebygroupuser.kubeconfig" Jul 9 19:17:09.017: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-edituser1.kubeconfig" Jul 9 19:17:09.331: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-edituser2.kubeconfig" Jul 9 19:17:09.592: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-viewuser.kubeconfig" Jul 9 19:17:09.870: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-impersonatebygroupuser.kubeconfig" [It] should pass impersonation creation tests [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:231 STEP: testing as system:admin user STEP: testing as e2e-test-templates-6rmnp-adminuser user Jul 9 19:17:10.199: INFO: configPath is now "/tmp/e2e-test-templates-6rmnp-adminuser.kubeconfig" [AfterEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:17:10.333: INFO: namespace : e2e-test-templates-6rmnp api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 STEP: Dumping a list 
of prepulled images on each node... Jul 9 19:17:16.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:221 • Failure [11.727 seconds] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:27 should pass impersonation creation tests [Suite:openshift/conformance/parallel] [It] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:231 Expected an error to have occurred. Got: : nil /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:241 ------------------------------ [Conformance][templates] templateservicebroker security test should pass security tests [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:164 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Conformance][templates] templateservicebroker security test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:17:16.761: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Conformance][templates] templateservicebroker security test 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:17:18.405: INFO: configPath is now "/tmp/e2e-test-templates-m7s7v-user.kubeconfig" Jul 9 19:17:18.405: INFO: The user is now "e2e-test-templates-m7s7v-user" Jul 9 19:17:18.405: INFO: Creating project "e2e-test-templates-m7s7v" Jul 9 19:17:18.548: INFO: Waiting on permissions in project "e2e-test-templates-m7s7v" ... [BeforeEach] [Conformance][templates] templateservicebroker security test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:45 [AfterEach] [Conformance][templates] templateservicebroker security test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:17:18.846: INFO: namespace : e2e-test-templates-m7s7v api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Conformance][templates] templateservicebroker security test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 STEP: Dumping a list of prepulled images on each node... 
Jul 9 19:17:24.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [Conformance][templates] templateservicebroker security test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:78 • Failure in Spec Setup (BeforeEach) [8.202 seconds] [Conformance][templates] templateservicebroker security test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:28 [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:150 should pass security tests [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:164 Expected error: <*errors.StatusError | 0xc42182d170>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "services \"apiserver\" not found", Reason: "NotFound", Details: {Name: "apiserver", Group: "", Kind: "services", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } services "apiserver" not found not to have occurred /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:52 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Variable Expansion /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:17:14.689: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:17:16.748: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-var-expansion-cw78h STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test substitution in container's args Jul 9 19:17:17.550: INFO: Waiting up to 5m0s for pod "var-expansion-5e255d23-83e7-11e8-992b-28d244b00276" in namespace "e2e-tests-var-expansion-cw78h" to be "success or failure" Jul 9 19:17:17.594: INFO: Pod "var-expansion-5e255d23-83e7-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 43.983466ms Jul 9 19:17:19.634: INFO: Pod "var-expansion-5e255d23-83e7-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083132661s Jul 9 19:17:21.681: INFO: Pod "var-expansion-5e255d23-83e7-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.131014186s STEP: Saw pod success Jul 9 19:17:21.681: INFO: Pod "var-expansion-5e255d23-83e7-11e8-992b-28d244b00276" satisfied condition "success or failure" Jul 9 19:17:21.721: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod var-expansion-5e255d23-83e7-11e8-992b-28d244b00276 container dapi-container: STEP: delete the pod Jul 9 19:17:21.869: INFO: Waiting for pod var-expansion-5e255d23-83e7-11e8-992b-28d244b00276 to disappear Jul 9 19:17:21.908: INFO: Pod var-expansion-5e255d23-83e7-11e8-992b-28d244b00276 no longer exists [AfterEach] [k8s.io] Variable Expansion /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:17:21.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-cw78h" for this suite. Jul 9 19:17:28.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:17:31.972: INFO: namespace: e2e-tests-var-expansion-cw78h, resource: bindings, ignored listing per whitelist Jul 9 19:17:32.438: INFO: namespace e2e-tests-var-expansion-cw78h deletion completed in 10.478758253s • [SLOW TEST:17.749 seconds] [k8s.io] Variable Expansion /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669 should allow substituting values in a container's args [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ SS ------------------------------ [Feature:DeploymentConfig] deploymentconfigs with minimum ready seconds set [Conformance] should not transition the deployment to Complete before satisfied 
[Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1008 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:16:31.565: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:16:33.458: INFO: configPath is now "/tmp/e2e-test-cli-deployment-gtldl-user.kubeconfig" Jul 9 19:16:33.458: INFO: The user is now "e2e-test-cli-deployment-gtldl-user" Jul 9 19:16:33.458: INFO: Creating project "e2e-test-cli-deployment-gtldl" Jul 9 19:16:33.574: INFO: Waiting on permissions in project "e2e-test-cli-deployment-gtldl" ... [JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should not transition the deployment to Complete before satisfied [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1008 STEP: verifying the deployment is created STEP: verifying that all pods are ready Jul 9 19:16:37.910: INFO: All replicas are ready. STEP: verifying that the deployment is still running STEP: waiting for the deployment to finish Jul 9 19:17:38.001: INFO: Finished waiting for deployment. 
[AfterEach] with minimum ready seconds set [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1004 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:17:40.444: INFO: namespace : e2e-test-cli-deployment-gtldl api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:17:46.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:74.949 seconds] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37 with minimum ready seconds set [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1002 should not transition the deployment to Complete before satisfied [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1008 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:17:32.446: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:17:34.454: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-d7w4d STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 9 19:17:35.304: INFO: Waiting up to 5m0s for pod "pod-68bac026-83e7-11e8-992b-28d244b00276" in namespace "e2e-tests-emptydir-d7w4d" to be "success or failure" Jul 9 19:17:35.340: INFO: Pod "pod-68bac026-83e7-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 36.039848ms Jul 9 19:17:37.380: INFO: Pod "pod-68bac026-83e7-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.076062691s STEP: Saw pod success Jul 9 19:17:37.380: INFO: Pod "pod-68bac026-83e7-11e8-992b-28d244b00276" satisfied condition "success or failure" Jul 9 19:17:37.422: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-68bac026-83e7-11e8-992b-28d244b00276 container test-container: STEP: delete the pod Jul 9 19:17:37.505: INFO: Waiting for pod pod-68bac026-83e7-11e8-992b-28d244b00276 to disappear Jul 9 19:17:37.550: INFO: Pod pod-68bac026-83e7-11e8-992b-28d244b00276 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:17:37.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-d7w4d" for this suite. Jul 9 19:17:43.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:17:46.551: INFO: namespace: e2e-tests-emptydir-d7w4d, resource: bindings, ignored listing per whitelist Jul 9 19:17:47.894: INFO: namespace e2e-tests-emptydir-d7w4d deletion completed in 10.300527545s • [SLOW TEST:15.449 seconds] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Conformance][Area:Networking][Feature:Router] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:46.516: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:17:48.338: INFO: configPath is now "/tmp/e2e-test-router-stress-lzm6c-user.kubeconfig"
Jul 9 19:17:48.338: INFO: The user is now "e2e-test-router-stress-lzm6c-user"
Jul 9 19:17:48.338: INFO: Creating project "e2e-test-router-stress-lzm6c"
Jul 9 19:17:48.461: INFO: Waiting on permissions in project "e2e-test-router-stress-lzm6c" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:45
[AfterEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:32
[AfterEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:17:48.594: INFO: namespace : e2e-test-router-stress-lzm6c api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:17:54.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [8.153 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:21
  The HAProxy router [BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:68
    should serve routes that were created from an ingress [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:79

    no router installed on the cluster
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/router.go:48
------------------------------
S
------------------------------
[k8s.io] Sysctls should support sysctls [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:60
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Sysctls
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:11.563: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:17:13.354: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-sysctl-xg4rw
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:56
[It] should support sysctls [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:60
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-sysctl-xg4rw".
STEP: Found 5 events.
Jul 9 19:17:18.215: INFO: At 2018-07-09 19:17:14 -0700 PDT - event for sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276: {default-scheduler } Scheduled: Successfully assigned e2e-tests-sysctl-xg4rw/sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276 to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:17:18.215: INFO: At 2018-07-09 19:17:14 -0700 PDT - event for sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulling: pulling image "busybox"
Jul 9 19:17:18.215: INFO: At 2018-07-09 19:17:16 -0700 PDT - event for sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Successfully pulled image "busybox"
Jul 9 19:17:18.215: INFO: At 2018-07-09 19:17:16 -0700 PDT - event for sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:17:18.215: INFO: At 2018-07-09 19:17:16 -0700 PDT - event for sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:17:18.370: INFO: POD NODE PHASE GRACE CONDITIONS
Jul 9 19:17:18.370: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: minreadytest-1-chctk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:35 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:35 -0700 PDT }]
Jul 9 19:17:18.371: INFO: minreadytest-1-deploy ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:34 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:34 -0700 PDT }]
Jul 9 19:17:18.371: INFO: minreadytest-1-fddc7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:35 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:16:35 -0700 PDT }]
Jul 9 19:17:18.371: INFO: execpod98j4h ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:16 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT }]
Jul 9 19:17:18.371: INFO: frontend-1-build ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:27 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:50 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:21 -0700 PDT }]
Jul 9 19:17:18.371: INFO: sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:14 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:14 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:14 -0700 PDT }]
Jul 9 19:17:18.371: INFO: var-expansion-5e255d23-83e7-11e8-992b-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:17 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:17 -0700 PDT ContainersNotReady containers with unready status: [dapi-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [dapi-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:17 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-controller-manager-558dc6fb98-q6vr5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-core-operator-75d546fbbb-c7ctx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT
} {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:17:18.371: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:17:18.371: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:17:18.371: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:17:18.371: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:17:18.371: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT }]
Jul 9 19:17:18.372: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:17:18.372: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:17:18.372: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: kube-addon-operator-675f99d7f8-c6pdt ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT }
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-node-agent-rrwlg ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }]
Jul 9 19:17:18.372: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:17:18.372: INFO:
Jul 9 19:17:18.405: INFO: Logging node info for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:17:18.439: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:76832,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365150208 0} {} 8169092Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260292608 0} {} 8066692Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:17:16 -0700 PDT 2018-07-09 13:32:23
-0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:17:16 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:17:16 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:17:16 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:17:16 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 613134454} 
{[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 613133841} {[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} {[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a 
docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} {[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} {[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} {[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} {[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} 
{[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 openshift/origin-haproxy-router:latest] 394965919} {[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} {[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 
centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/directory-sync@sha256:e5e7fe901868853d89c2c0697cc88f0686c6ba1178ca045ec57bfd18e7000048 quay.io/coreos/directory-sync:v0.0.2] 38433928} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[k8s.gcr.io/addon-resizer@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb k8s.gcr.io/addon-resizer:2.1] 26450138} {[quay.io/coreos/tectonic-error-server@sha256:aefa0a012e103bee299c17e798e5830128588b6ef5d4d1f6bc8ae5804bc4d8cd quay.io/coreos/tectonic-error-server:1.1] 12714516} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64@sha256:bdaecec5adfa7c79e9525c0992fdab36c2d68066f5e91eff0d1d9e8d73c654ea gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8407119} 
{[gcr.io/kubernetes-e2e-test-images/netexec-amd64@sha256:2edfad424a541b9e024f26368d3a5b7dcc1d7cd27a4ee8c1d8c3f81d9209ab2e gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6227659} {[openshift/hello-openshift@sha256:aaea76ff622d2f8bcb32e538e7b3cd0ef6d291953f3e7c9f556c1ba5baf47e2e openshift/hello-openshift:latest] 6089990}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:17:18.440: INFO: Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:17:18.472: INFO: Logging pods the kubelet thinks is on node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:17:18.584: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container default-http-backend ready: true, restart count 0 Jul 9 19:17:18.584: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container router ready: true, restart count 0 Jul 9 19:17:18.584: INFO: minreadytest-1-chctk started at 2018-07-09 19:16:35 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container myapp ready: true, restart count 0 Jul 9 19:17:18.584: INFO: frontend-1-build started at 2018-07-09 19:14:21 -0700 PDT (2+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Init container git-clone ready: true, restart count 0 Jul 9 19:17:18.584: INFO: Init container manage-dockerfile ready: true, restart count 0 Jul 9 19:17:18.584: INFO: Container sti-build ready: false, restart count 0 Jul 9 19:17:18.584: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container kube-proxy ready: true, restart count 0 Jul 9 19:17:18.584: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container registry ready: true, restart count 0 Jul 9 19:17:18.584: INFO: minreadytest-1-deploy 
started at 2018-07-09 19:16:34 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container deployment ready: true, restart count 0 Jul 9 19:17:18.584: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded) Jul 9 19:17:18.584: INFO: Container alert-buffer ready: true, restart count 0 Jul 9 19:17:18.584: INFO: Container alertmanager ready: true, restart count 0 Jul 9 19:17:18.584: INFO: Container alertmanager-proxy ready: true, restart count 0 Jul 9 19:17:18.584: INFO: Container alerts-proxy ready: true, restart count 0 Jul 9 19:17:18.584: INFO: Container prom-proxy ready: true, restart count 0 Jul 9 19:17:18.584: INFO: Container prometheus ready: true, restart count 0 Jul 9 19:17:18.584: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded) Jul 9 19:17:18.584: INFO: Container install-cni ready: true, restart count 0 Jul 9 19:17:18.584: INFO: Container kube-flannel ready: true, restart count 0 Jul 9 19:17:18.584: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded) Jul 9 19:17:18.584: INFO: Container metrics-server ready: true, restart count 0 Jul 9 19:17:18.584: INFO: Container metrics-server-nanny ready: true, restart count 0 Jul 9 19:17:18.584: INFO: sysctl-5c0d9804-83e7-11e8-bd2e-28d244b00276 started at 2018-07-09 19:17:14 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container test-container ready: false, restart count 0 Jul 9 19:17:18.584: INFO: execpod98j4h started at 2018-07-09 19:14:15 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container exec ready: true, restart count 0 Jul 9 19:17:18.584: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container node-agent ready: true, restart count 3 Jul 9 19:17:18.584: INFO: directory-sync-d84d84d9f-j7pr6 started at 
2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container directory-sync ready: true, restart count 0 Jul 9 19:17:18.584: INFO: webconsole-6698d4fbbc-rgsw2 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container webconsole ready: true, restart count 0 Jul 9 19:17:18.584: INFO: minreadytest-1-fddc7 started at 2018-07-09 19:16:35 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container myapp ready: true, restart count 0 Jul 9 19:17:18.584: INFO: var-expansion-5e255d23-83e7-11e8-992b-28d244b00276 started at 2018-07-09 19:17:17 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:18.584: INFO: Container dapi-container ready: false, restart count 0 W0709 19:17:18.623181 11748 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 9 19:17:18.703: INFO: Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:17:18.703: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:39.952369s} Jul 9 19:17:18.703: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.138495s} Jul 9 19:17:18.703: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:18.561353s} Jul 9 19:17:18.703: INFO: Logging node info for node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:17:18.737: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:76717,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: 
amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:17:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:17:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:17:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:17:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:17:10 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS 
ip-10-0-141-201.us-west-2.compute.internal} {Hostname ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:17:18.737: INFO: Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:17:18.767: INFO: Logging pods the kubelet thinks is on node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:17:48.799: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes 
ip-10-0-141-201.us-west-2.compute.internal:10250) Jul 9 19:17:48.799: INFO: Logging node info for node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:17:48.838: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:77099,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig: master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} 
BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:17:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:17:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:17:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:17:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:17:39 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} {InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} 
{[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} {[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d 
quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} {[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 
quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} {[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:17:48.838: INFO: Logging kubelet events for node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:17:48.875: INFO: Logging pods the kubelet thinks is on node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:17:49.001: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.001: INFO: Container tectonic-clu ready: true, restart count 0 Jul 9 19:17:49.001: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded) Jul 9 19:17:49.001: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0 Jul 9 19:17:49.001: INFO: Container tectonic-stats-emitter ready: true, restart count 0 Jul 9 19:17:49.001: INFO: Container tectonic-stats-extender ready: true, restart count 0 Jul 9 19:17:49.001: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.001: INFO: Container tectonic-channel-operator ready: true, restart count 0 Jul 9 19:17:49.001: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses 
recorded) Jul 9 19:17:49.001: INFO: Container kube-proxy ready: true, restart count 0 Jul 9 19:17:49.001: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.001: INFO: Container openshift-apiserver ready: true, restart count 0 Jul 9 19:17:49.001: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.001: INFO: Container tectonic-network-operator ready: true, restart count 0 Jul 9 19:17:49.001: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded) Jul 9 19:17:49.001: INFO: Container dnsmasq ready: true, restart count 0 Jul 9 19:17:49.001: INFO: Container kubedns ready: true, restart count 0 Jul 9 19:17:49.001: INFO: Container sidecar ready: true, restart count 0 Jul 9 19:17:49.001: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.001: INFO: Container kube-scheduler ready: true, restart count 0 Jul 9 19:17:49.001: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.001: INFO: Container kube-apiserver ready: true, restart count 4 Jul 9 19:17:49.001: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.001: INFO: Container tectonic-node-controller ready: true, restart count 0 Jul 9 19:17:49.001: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.001: INFO: Container tectonic-alm-operator ready: true, restart count 0 Jul 9 19:17:49.001: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.001: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0 Jul 9 
19:17:49.001: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.002: INFO: Container node-agent ready: true, restart count 4 Jul 9 19:17:49.002: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.002: INFO: Container pod-checkpointer ready: true, restart count 0 Jul 9 19:17:49.002: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded) Jul 9 19:17:49.002: INFO: Container install-cni ready: true, restart count 0 Jul 9 19:17:49.002: INFO: Container kube-flannel ready: true, restart count 0 Jul 9 19:17:49.002: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.002: INFO: Container openshift-controller-manager ready: true, restart count 3 Jul 9 19:17:49.002: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.002: INFO: Container tectonic-node-controller-operator ready: true, restart count 0 Jul 9 19:17:49.002: INFO: kube-core-operator-75d546fbbb-c7ctx started at 2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.002: INFO: Container kube-core-operator ready: true, restart count 0 Jul 9 19:17:49.002: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.002: INFO: Container tectonic-utility-operator ready: true, restart count 0 Jul 9 19:17:49.002: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded) Jul 9 19:17:49.002: INFO: Container kube-addon-operator ready: true, restart count 0 Jul 9 19:17:49.002: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at (0+0 container statuses recorded) Jul 9 
19:17:49.002: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:17:49.002: INFO: Container kube-controller-manager ready: true, restart count 1
W0709 19:17:49.037552   11748 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:17:49.151: INFO: Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:17:49.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sysctl-xg4rw" for this suite.
Jul 9 19:17:55.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:17:58.581: INFO: namespace: e2e-tests-sysctl-xg4rw, resource: bindings, ignored listing per whitelist
Jul 9 19:17:59.396: INFO: namespace e2e-tests-sysctl-xg4rw deletion completed in 10.163694124s

• Failure [47.833 seconds]
[k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should support sysctls [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:60

  Expected
      : kernel.shm_rmid_forced = 0
  to contain substring
      : kernel.shm_rmid_forced = 1

  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:98
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:47.900: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:17:49.996: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-gd54g
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:17:50.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276" in namespace "e2e-tests-downward-api-gd54g" to be "success or failure"
Jul 9 19:17:50.785: INFO: Pod "downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 39.183911ms
Jul 9 19:17:52.822: INFO: Pod "downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07606692s
STEP: Saw pod success
Jul 9 19:17:52.822: INFO: Pod "downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:17:52.862: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:17:52.960: INFO: Waiting for pod downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276 to disappear
Jul 9 19:17:52.996: INFO: Pod downwardapi-volume-71eeb0f9-83e7-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:17:52.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gd54g" for this suite.
Jul 9 19:17:59.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:18:01.251: INFO: namespace: e2e-tests-downward-api-gd54g, resource: bindings, ignored listing per whitelist
Jul 9 19:18:03.494: INFO: namespace e2e-tests-downward-api-gd54g deletion completed in 10.449564971s

• [SLOW TEST:15.594 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:431
Jul 9 19:18:03.496: INFO: This plugin does not implement NetworkPolicy.
[AfterEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:03.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48
  when using a plugin that implements NetworkPolicy
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:430
    should enforce multiple, stacked policies with overlapping podSelectors [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:177

    Jul 9 19:18:03.496: This plugin does not implement NetworkPolicy.

    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:59.398: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:18:01.024: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-qlgsc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-7871b510-83e7-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:18:01.696: INFO: Waiting up to 5m0s for pod "pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276" in namespace "e2e-tests-secrets-qlgsc" to be "success or failure"
Jul 9 19:18:01.727: INFO: Pod "pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.880909ms
Jul 9 19:18:03.756: INFO: Pod "pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060054186s
Jul 9 19:18:05.790: INFO: Pod "pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09359757s
STEP: Saw pod success
Jul 9 19:18:05.790: INFO: Pod "pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:18:05.821: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276 container secret-volume-test:
STEP: delete the pod
Jul 9 19:18:05.887: INFO: Waiting for pod pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:18:05.925: INFO: Pod pod-secrets-78768152-83e7-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:05.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qlgsc" for this suite.
Jul 9 19:18:12.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:18:15.286: INFO: namespace: e2e-tests-secrets-qlgsc, resource: bindings, ignored listing per whitelist
Jul 9 19:18:15.316: INFO: namespace e2e-tests-secrets-qlgsc deletion completed in 9.35852938s
• [SLOW TEST:15.917 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:15.317: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:18:16.827: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-g87bq
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:18:17.558: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276" in namespace "e2e-tests-downward-api-g87bq" to be "success or failure"
Jul 9 19:18:17.591: INFO: Pod "downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.518953ms
Jul 9 19:18:19.623: INFO: Pod "downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064857681s
STEP: Saw pod success
Jul 9 19:18:19.623: INFO: Pod "downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:18:19.654: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:18:19.721: INFO: Waiting for pod downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:18:19.755: INFO: Pod downwardapi-volume-81ebe892-83e7-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:19.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g87bq" for this suite.
Jul 9 19:18:25.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:18:28.874: INFO: namespace: e2e-tests-downward-api-g87bq, resource: bindings, ignored listing per whitelist
Jul 9 19:18:29.199: INFO: namespace e2e-tests-downward-api-g87bq deletion completed in 9.409267552s
• [SLOW TEST:13.882 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[image_ecosystem][mongodb] openshift mongodb image creating from a template should instantiate the template [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:34
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [image_ecosystem][mongodb] openshift mongodb image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:54.671: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [image_ecosystem][mongodb] openshift mongodb image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:17:56.971: INFO: configPath is now "/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig"
Jul 9 19:17:56.971: INFO: The user is now "e2e-test-mongodb-create-mxp59-user"
Jul 9 19:17:56.971: INFO: Creating project "e2e-test-mongodb-create-mxp59"
Jul 9 19:17:57.125: INFO: Waiting on permissions in project "e2e-test-mongodb-create-mxp59" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:22
Jul 9 19:17:57.190: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[It] should instantiate the template [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:34
openshift namespace image streams OK
STEP: creating a new app
Jul 9 19:17:57.626: INFO: Running 'oc new-app --config=/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig --namespace=e2e-test-mongodb-create-mxp59 -f /tmp/fixture-testdata-dir333495585/examples/db-templates/mongodb-ephemeral-template.json'
--> Deploying template "e2e-test-mongodb-create-mxp59/mongodb-ephemeral" for "/tmp/fixture-testdata-dir333495585/examples/db-templates/mongodb-ephemeral-template.json" to project e2e-test-mongodb-create-mxp59

     MongoDB (Ephemeral)
     ---------
     MongoDB database service, without persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.

     WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing

     The following service(s) have been created in your project: mongodb.

            Username: userSH5
            Password: rYXxAfAPgqyge1eS
       Database Name: sampledb
      Connection URL: mongodb://userSH5:rYXxAfAPgqyge1eS@mongodb/sampledb

     For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.

     * With parameters:
        * Memory Limit=512Mi
        * Namespace=openshift
        * Database Service Name=mongodb
        * MongoDB Connection Username=userSH5 # generated
        * MongoDB Connection Password=rYXxAfAPgqyge1eS # generated
        * MongoDB Database Name=sampledb
        * MongoDB Admin Password=b2glYaLmorpxNjYS # generated
        * Version of MongoDB Image=3.2

--> Creating resources ...
    secret "mongodb" created
    service "mongodb" created
    deploymentconfig "mongodb" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/mongodb'
    Run 'oc status' to view your app.
STEP: waiting for the deployment to complete
Jul 9 19:17:59.934: INFO: waiting for deploymentconfig e2e-test-mongodb-create-mxp59/mongodb to be available with version 1
Jul 9 19:18:21.005: INFO: deploymentconfig e2e-test-mongodb-create-mxp59/mongodb available after 21.071302293s
pods: mongodb-1-zkx79
STEP: expecting the mongodb pod is running
STEP: expecting the mongodb service is answering for ping
Jul 9 19:18:22.044: INFO: Running 'oc exec --config=/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig --namespace=e2e-test-mongodb-create-mxp59 mongodb-1-zkx79 -- bash -c mongo --quiet --eval '{"ping", 1}''
STEP: expecting that we can insert a new record
Jul 9 19:18:22.755: INFO: Running 'oc exec --config=/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig --namespace=e2e-test-mongodb-create-mxp59 mongodb-1-zkx79 -- bash -c mongo --quiet "$MONGODB_DATABASE" --username "$MONGODB_USER" --password "$MONGODB_PASSWORD" --eval 'db.foo.save({ "status": "passed" })''
STEP: expecting that we can read a record
Jul 9 19:18:23.426: INFO: Running 'oc exec --config=/tmp/e2e-test-mongodb-create-mxp59-user.kubeconfig --namespace=e2e-test-mongodb-create-mxp59 mongodb-1-zkx79 -- bash -c mongo --quiet "$MONGODB_DATABASE" --username "$MONGODB_USER" --password "$MONGODB_PASSWORD" --eval 'printjson(db.foo.find({}, {_id: 0}).toArray())''
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:26
[AfterEach] [image_ecosystem][mongodb] openshift mongodb image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:18:24.226: INFO: namespace : e2e-test-mongodb-create-mxp59 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [image_ecosystem][mongodb] openshift mongodb image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:46.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:51.631 seconds]
[image_ecosystem][mongodb] openshift mongodb image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:15
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:21
creating from a template
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:33
should instantiate the template [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_ephemeral.go:34
------------------------------
[k8s.io] InitContainer should invoke init containers on a RestartNever pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:29.200: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:18:30.691: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-init-container-ljvgk
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:40
[It] should invoke init containers on a RestartNever pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:44
STEP: creating the pod
Jul 9 19:18:31.273: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:40.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-ljvgk" for this suite.
Jul 9 19:18:46.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:18:50.027: INFO: namespace: e2e-tests-init-container-ljvgk, resource: bindings, ignored listing per whitelist
Jul 9 19:18:50.057: INFO: namespace e2e-tests-init-container-ljvgk deletion completed in 9.488643845s
• [SLOW TEST:20.857 seconds]
[k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should invoke init containers on a RestartNever pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:44
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:50.060: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:18:51.647: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-mtlqm
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-96a7f343-83e7-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:18:52.378: INFO: Waiting up to 5m0s for pod "pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276" in namespace "e2e-tests-secrets-mtlqm" to be "success or failure"
Jul 9 19:18:52.411: INFO: Pod "pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.682627ms
Jul 9 19:18:54.449: INFO: Pod "pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07052138s
Jul 9 19:18:56.479: INFO: Pod "pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10078221s
STEP: Saw pod success
Jul 9 19:18:56.479: INFO: Pod "pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:18:56.530: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276 container secret-volume-test:
STEP: delete the pod
Jul 9 19:18:56.599: INFO: Waiting for pod pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:18:56.627: INFO: Pod pod-secrets-96ac7b2f-83e7-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:56.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mtlqm" for this suite.
Jul 9 19:19:02.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:19:05.783: INFO: namespace: e2e-tests-secrets-mtlqm, resource: bindings, ignored listing per whitelist
Jul 9 19:19:06.308: INFO: namespace e2e-tests-secrets-mtlqm deletion completed in 9.638494379s
• [SLOW TEST:16.248 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[Area:Networking] network isolation when using a plugin that does not isolate namespaces by default should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:407
Jul 9 19:18:46.575: INFO: Could not check network plugin name: exit status 1. Assuming a non-OpenShift plugin
[BeforeEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:46.575: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-isolation1-f5qp8
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:48.506: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-isolation2-fvc9k
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15
Jul 9 19:18:50.646: INFO: Using ip-10-0-130-54.us-west-2.compute.internal for test ([ip-10-0-130-54.us-west-2.compute.internal] out of [ip-10-0-130-54.us-west-2.compute.internal])
Jul 9 19:18:52.794: INFO: Target pod IP:port is 10.2.2.72:8080
Jul 9 19:18:52.794: INFO: Creating an exec pod on node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:52.794: INFO: Creating new exec pod
Jul 9 19:18:56.956: INFO: Waiting up to 10s to wget 10.2.2.72:8080
Jul 9 19:18:56.956: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-tests-net-isolation2-fvc9k execpod-sourceip-ip-10-0-130-54.us-west-2.compute.internaltjbwd -- /bin/sh -c wget -T 30 -qO- 10.2.2.72:8080'
Jul 9 19:18:57.616: INFO: stderr: ""
Jul 9 19:18:57.616: INFO: Cleaning up the exec pod
[AfterEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:18:57.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-isolation1-f5qp8" for this suite.
Jul 9 19:19:03.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:19:05.852: INFO: namespace: e2e-tests-net-isolation1-f5qp8, resource: bindings, ignored listing per whitelist
Jul 9 19:19:07.599: INFO: namespace e2e-tests-net-isolation1-f5qp8 deletion completed in 9.836500252s
[AfterEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:19:07.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-isolation2-fvc9k" for this suite.
Jul 9 19:19:13.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:19:15.952: INFO: namespace: e2e-tests-net-isolation2-fvc9k, resource: bindings, ignored listing per whitelist
Jul 9 19:19:17.605: INFO: namespace e2e-tests-net-isolation2-fvc9k deletion completed in 9.968094137s
• [SLOW TEST:31.301 seconds]
[Area:Networking] network isolation
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10
when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:406
should allow communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:19:17.606: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:19:19.438: INFO: configPath is now "/tmp/e2e-test-unprivileged-router-q8gbw-user.kubeconfig"
Jul 9 19:19:19.438: INFO: The user is now "e2e-test-unprivileged-router-q8gbw-user"
Jul 9 19:19:19.438: INFO: Creating project "e2e-test-unprivileged-router-q8gbw"
Jul 9 19:19:19.591: INFO: Waiting on permissions in project "e2e-test-unprivileged-router-q8gbw" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:41
Jul 9 19:19:19.625: INFO: Running 'oc new-app --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-unprivileged-router-q8gbw -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml -p=IMAGE=openshift/origin-haproxy-router -p=SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"]'
warning: --param no longer accepts comma-separated lists of values. "SCOPE=[\"--name=test-unprivileged\", \"--namespace=$(POD_NAMESPACE)\", \"--loglevel=4\", \"--labels=select=first\", \"--update-status=false\"]" will be treated as a single key-value pair.
--> Deploying template "e2e-test-unprivileged-router-q8gbw/" for "/tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml" to project e2e-test-unprivileged-router-q8gbw

     * With parameters:
        * IMAGE=openshift/origin-haproxy-router
        * SCOPE=["--name=test-unprivileged", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first", "--update-status=false"]

--> Creating resources ...
    pod "router-scoped" created
    pod "router-override" created
    pod "router-override-domains" created
    rolebinding "system-router" created
    route "route-1" created
    route "route-2" created
    route "route-override-domain-1" created
    route "route-override-domain-2" created
    service "endpoints" created
    pod "endpoint-1" created
--> Success
    Access your application via route 'first.example.com'
    Access your application via route 'second.example.com'
    Access your application via route 'y.a.null.ptr'
    Access your application via route 'main.void.str'
    Run 'oc status' to view your app.
[It] should run even if it has no access to update status [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:55
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:29
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:19:20.742: INFO: namespace : e2e-test-unprivileged-router-q8gbw api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:19:40.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] [23.219 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:19
The HAProxy router
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:54
should run even if it has no access to update status [Suite:openshift/conformance/parallel] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:55

test temporarily disabled
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/unprivileged.go:56
------------------------------
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables should successfully resolve valueFrom in s2i build environment variables [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:61
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:19:06.309: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:19:07.882: INFO: configPath is now "/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig"
Jul 9 19:19:07.882: INFO: The user is now "e2e-test-build-valuefrom-frlkw-user"
Jul 9 19:19:07.882: INFO: Creating project "e2e-test-build-valuefrom-frlkw"
Jul 9 19:19:08.044: INFO: Waiting on permissions in project "e2e-test-build-valuefrom-frlkw" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:27
Jul 9 19:19:08.105: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:38
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:19:08.242: INFO: Running scan #0
Jul 9 19:19:08.242: INFO: Checking language ruby
Jul 9 19:19:08.287: INFO: Checking tag latest
Jul 9 19:19:08.287: INFO: Checking tag 2.0
Jul 9 19:19:08.287: INFO: Checking tag 2.2
Jul 9 19:19:08.287: INFO: Checking tag 2.3
Jul 9 19:19:08.287: INFO: Checking tag 2.4
Jul 9 19:19:08.287: INFO: Checking tag 2.5
Jul 9 19:19:08.287: INFO: Checking language nodejs
Jul 9 19:19:08.327: INFO: Checking tag 4
Jul 9 19:19:08.327: INFO: Checking tag 6
Jul 9 19:19:08.327: INFO: Checking tag 8
Jul 9 19:19:08.327: INFO: Checking tag latest
Jul 9 19:19:08.327: INFO: Checking tag 0.10
Jul 9 19:19:08.327: INFO: Checking language perl
Jul 9 19:19:08.376: INFO: Checking tag 5.16
Jul 9 19:19:08.376: INFO: Checking tag 5.20
Jul 9 19:19:08.376: INFO: Checking tag 5.24
Jul 9 19:19:08.376: INFO: Checking tag latest
Jul 9 19:19:08.376: INFO: Checking language php
Jul 9 19:19:08.415: INFO: Checking tag 7.1
Jul 9 19:19:08.415: INFO: Checking tag latest
Jul 9 19:19:08.415: INFO: Checking tag 5.5
Jul 9 19:19:08.415: INFO: Checking tag 5.6
Jul 9 19:19:08.415: INFO: Checking tag 7.0
Jul 9 19:19:08.415: INFO: Checking language python
Jul 9 19:19:08.455: INFO: Checking tag latest
Jul 9 19:19:08.455: INFO: Checking tag 2.7
Jul 9 19:19:08.455: INFO: Checking tag 3.3
Jul 9 19:19:08.455: INFO: Checking tag 3.4
Jul 9 19:19:08.456: INFO: Checking tag 3.5
Jul 9 19:19:08.456: INFO: Checking tag 3.6
Jul 9 19:19:08.456: INFO: Checking language wildfly
Jul 9 19:19:08.510: INFO: Checking tag 10.1
Jul 9 19:19:08.510: INFO: Checking tag 11.0
Jul 9 19:19:08.510: INFO: Checking tag 12.0
Jul 9 19:19:08.510: INFO: Checking tag 8.1
Jul 9 19:19:08.510: INFO: Checking tag 9.0
Jul 9 19:19:08.510: INFO: Checking tag latest
Jul 9 19:19:08.510: INFO: Checking tag 10.0
Jul 9 19:19:08.510: INFO: Checking language mysql
Jul 9 19:19:08.550: INFO: Checking tag 5.6
Jul 9 19:19:08.550: INFO: Checking tag 5.7
Jul 9 19:19:08.550: INFO: Checking tag latest
Jul 9 19:19:08.550: INFO: Checking tag 5.5
Jul 9 19:19:08.550: INFO: Checking language postgresql
Jul 9 19:19:08.614: INFO: Checking tag 9.2
Jul 9 19:19:08.615: INFO: Checking tag 9.4
Jul 9 19:19:08.615: INFO: Checking tag 9.5
Jul 9 19:19:08.615: INFO: Checking tag 9.6
Jul 9 19:19:08.615: INFO: Checking tag latest
Jul 9 19:19:08.615: INFO: Checking language mongodb
Jul 9 19:19:08.657: INFO: Checking tag 2.6
Jul 9 19:19:08.657: INFO: Checking tag 3.2
Jul 9 19:19:08.657: INFO: Checking tag 3.4
Jul 9 19:19:08.657: INFO: Checking tag latest
Jul 9 19:19:08.657: INFO: Checking tag 2.4
Jul 9 19:19:08.657: INFO: Checking language jenkins
Jul 9 19:19:08.711: INFO: Checking tag 2
Jul 9 19:19:08.711: INFO: Checking tag latest
Jul 9 19:19:08.711: INFO: Checking tag 1
Jul 9 19:19:08.711: INFO: Success!
STEP: creating test image stream
Jul 9 19:19:08.711: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/test-is.json'
imagestream.image.openshift.io "test" created
STEP: creating test secret
Jul 9 19:19:08.959: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/test-secret.yaml'
secret "mysecret" created
STEP: creating test configmap
Jul 9 19:19:09.260: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/test-configmap.yaml'
configmap "myconfigmap" created
[It] should successfully resolve valueFrom in s2i build environment variables [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:61
STEP: creating test successful build config
Jul 9 19:19:09.551: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/successful-sti-build-value-from-config.yaml'
buildconfig.build.openshift.io "mys2itest" created
STEP: starting test build
Jul 9 19:19:09.872: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw mys2itest -o=name'
Jul 9 19:19:10.143: INFO: start-build output with args [mys2itest -o=name]: Error> StdOut> build/mys2itest-1 StdErr>
Jul 9 19:19:10.144: INFO: Waiting for mys2itest-1 to complete
Jul 9 19:19:36.260: INFO: Done waiting for mys2itest-1: util.BuildResult{BuildPath:"build/mys2itest-1", BuildName:"mys2itest-1", StartBuildStdErr:"", StartBuildStdOut:"build/mys2itest-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421e63b00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42096a1e0)} with error:
Jul 9 19:19:36.260: INFO: Running 'oc logs --config=/tmp/e2e-test-build-valuefrom-frlkw-user.kubeconfig --namespace=e2e-test-build-valuefrom-frlkw -f build/mys2itest-1 --timestamps'
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:31
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:19:36.858: INFO: namespace : e2e-test-build-valuefrom-frlkw api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:19:42.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:36.620 seconds]
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:13
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:26
    should successfully resolve valueFrom in s2i build environment variables [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:61
------------------------------
SS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:18:03.500: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:18:05.498: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-e2e-kubelet-etc-hosts-mt788
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul 9 19:18:14.519: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mt788 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:18:14.520: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:18:14.844: INFO: Exec stderr: ""
Jul 9 19:18:14.844: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mt788 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:18:14.844: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:18:15.158: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul 9 19:18:15.159: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mt788 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 9 19:18:15.159: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:18:15.361: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-e2e-kubelet-etc-hosts-mt788".
STEP: Found 19 events.
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:06 -0700 PDT - event for test-pod: {default-scheduler } Scheduled: Successfully assigned e2e-tests-e2e-kubelet-etc-hosts-mt788/test-pod to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:07 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0" already present on machine
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0" already present on machine
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0" already present on machine
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:08 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:09 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Failed: Error: failed to start container "busybox-3": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/tmp/etc-hosts291332576\\\" to rootfs \\\"/var/lib/docker/overlay2/28b4fd916cf3ee847aa8b641cf8791ffc74146925446e90f4b63b4739457a8a4/merged\\\" at \\\"/var/lib/docker/overlay2/28b4fd916cf3ee847aa8b641cf8791ffc74146925446e90f4b63b4739457a8a4/merged/etc/hosts\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:09 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:11 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Failed: Error: failed to start container "busybox-3": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/tmp/etc-hosts291332576\\\" to rootfs \\\"/var/lib/docker/overlay2/fb1e65fec82cd9b220a6b4c104980f69c2be4b9ca18b8032df78bc0a3c65cac6/merged\\\" at \\\"/var/lib/docker/overlay2/fb1e65fec82cd9b220a6b4c104980f69c2be4b9ca18b8032df78bc0a3c65cac6/merged/etc/hosts\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:11 -0700 PDT - event for test-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} BackOff: Back-off restarting failed container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:12 -0700 PDT - event for test-host-network-pod: {default-scheduler } Scheduled: Successfully assigned e2e-tests-e2e-kubelet-etc-hosts-mt788/test-host-network-pod to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0" already present on machine
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0" already present on machine
Jul 9 19:18:15.400: INFO: At 2018-07-09 19:18:13 -0700 PDT - event for test-host-network-pod: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:18:15.580: INFO: POD NODE PHASE GRACE CONDITIONS
Jul 9 19:18:15.580: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700
PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: mongodb-1-deploy ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:01 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:02 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:01 -0700 PDT }]
Jul 9 19:18:15.580: INFO: mongodb-1-zkx79 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:02 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:02 -0700 PDT ContainersNotReady containers with unready status: [mongodb]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [mongodb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:02 -0700 PDT }]
Jul 9 19:18:15.580: INFO: execpod98j4h ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:16 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT }]
Jul 9 19:18:15.580: INFO: frontend-1-build ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:27 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:50 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:21 -0700 PDT }]
Jul 9 19:18:15.580: INFO: test-host-network-pod ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:12 -0700 PDT }]
Jul 9 19:18:15.580: INFO: test-pod ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:06 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:06 -0700 PDT ContainersNotReady containers with unready status: [busybox-3]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [busybox-3]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:18:06 -0700 PDT }]
Jul 9 19:18:15.580: INFO: pod-host-path-test ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:27 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:27 -0700 PDT ContainersNotReady containers with unready status: [test-container-1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [test-container-1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:17:27 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-controller-manager-558dc6fb98-q6vr5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-core-operator-75d546fbbb-c7ctx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:18:15.580: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:18:15.580: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:18:15.580: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: kube-addon-operator-675f99d7f8-c6pdt ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-node-agent-rrwlg ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }]
Jul 9 19:18:15.580: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:18:15.580: INFO:
Jul 9 19:18:15.618: INFO: Logging node info for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:15.658: INFO: Node Info:
&Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:77546,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365150208 0} {} 8169092Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260292608 0} {} 8066692Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:18:06 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} 
{MemoryPressure False 2018-07-09 19:18:06 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:18:06 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:18:06 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:18:06 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 613134454} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 
613133841} {[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} {[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba 
docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} {[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} {[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} {[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} {[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} {[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 
openshift/origin-haproxy-router:latest] 394965919} {[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} {[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 
docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/directory-sync@sha256:e5e7fe901868853d89c2c0697cc88f0686c6ba1178ca045ec57bfd18e7000048 quay.io/coreos/directory-sync:v0.0.2] 38433928} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[k8s.gcr.io/addon-resizer@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb k8s.gcr.io/addon-resizer:2.1] 26450138} {[quay.io/coreos/tectonic-error-server@sha256:aefa0a012e103bee299c17e798e5830128588b6ef5d4d1f6bc8ae5804bc4d8cd quay.io/coreos/tectonic-error-server:1.1] 12714516} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64@sha256:bdaecec5adfa7c79e9525c0992fdab36c2d68066f5e91eff0d1d9e8d73c654ea gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8407119} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64@sha256:2edfad424a541b9e024f26368d3a5b7dcc1d7cd27a4ee8c1d8c3f81d9209ab2e gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6227659} {[openshift/hello-openshift@sha256:aaea76ff622d2f8bcb32e538e7b3cd0ef6d291953f3e7c9f556c1ba5baf47e2e 
openshift/hello-openshift:latest] 6089990}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:18:15.658: INFO: Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:15.703: INFO: Logging pods the kubelet thinks is on node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:15.838: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container router ready: true, restart count 0
Jul 9 19:18:15.838: INFO: mongodb-1-deploy started at 2018-07-09 19:18:01 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container deployment ready: true, restart count 0
Jul 9 19:18:15.838: INFO: pod-host-path-test started at 2018-07-09 19:17:27 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container test-container-1 ready: false, restart count 0
Jul 9 19:18:15.838: INFO: Container test-container-2 ready: true, restart count 0
Jul 9 19:18:15.838: INFO: mongodb-1-zkx79 started at 2018-07-09 19:18:02 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container mongodb ready: false, restart count 0
Jul 9 19:18:15.838: INFO: frontend-1-build started at 2018-07-09 19:14:21 -0700 PDT (2+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Init container git-clone ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Init container manage-dockerfile ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container sti-build ready: false, restart count 0
Jul 9 19:18:15.838: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container default-http-backend ready: true, restart count 0
Jul 9 19:18:15.838: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container registry ready: true, restart count 0
Jul 9 19:18:15.838: INFO: test-pod started at 2018-07-09 19:18:06 -0700 PDT (0+3 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container busybox-1 ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container busybox-2 ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container busybox-3 ready: false, restart count 1
Jul 9 19:18:15.838: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container alert-buffer ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container alertmanager ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container alertmanager-proxy ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container alerts-proxy ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container prom-proxy ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container prometheus ready: true, restart count 0
Jul 9 19:18:15.838: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:18:15.838: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container metrics-server ready: true, restart count 0
Jul 9 19:18:15.838: INFO: Container metrics-server-nanny ready: true, restart count 0
Jul 9 19:18:15.838: INFO: test-host-network-pod started at 2018-07-09 19:18:12 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:18:15.838: INFO: Container busybox-1 ready: true, restart count 0
Jul 9 19:18:15.839: INFO: Container busybox-2 ready: true, restart count 0
Jul 9 19:18:15.839: INFO: execpod98j4h started at 2018-07-09 19:14:15 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.839: INFO: Container exec ready: true, restart count 0
Jul 9 19:18:15.839: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:18:15.839: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:18:15.839: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:18:15.839: INFO: directory-sync-d84d84d9f-j7pr6 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.839: INFO: Container directory-sync ready: true, restart count 0
Jul 9 19:18:15.839: INFO: webconsole-6698d4fbbc-rgsw2 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.839: INFO: Container webconsole ready: true, restart count 0
Jul 9 19:18:15.839: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:15.839: INFO: Container node-agent ready: true, restart count 3
W0709 19:18:15.878394 11713 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:18:16.036: INFO: Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:18:16.036: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:39.952369s}
Jul 9 19:18:16.036: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.138495s}
Jul 9 19:18:16.036: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:18.561353s}
Jul 9 19:18:16.036: INFO: Logging node info for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:18:16.083: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:77561,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os:
linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:18:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:18:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:18:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:18:10 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:18:10 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS ip-10-0-141-201.us-west-2.compute.internal} {Hostname 
ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:18:16.083: INFO: Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:18:16.120: INFO: Logging pods the kubelet thinks is on node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:18:46.159: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes ip-10-0-141-201.us-west-2.compute.internal:10250)
Jul 9 19:18:46.159: INFO: Logging node info for
node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:18:46.202: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:77848,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig: master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:18:39 
-0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:18:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:18:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:18:39 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:18:39 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} {InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} {[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 
quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} {[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} 
{[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} 
{[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:18:46.202: INFO: Logging kubelet events for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:18:46.238: INFO: Logging pods the kubelet thinks is on node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:18:46.385: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-addon-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at (0+0 container statuses recorded)
Jul 9 19:18:46.385: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-controller-manager ready: true, restart count 1
Jul 9 19:18:46.385: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-scheduler ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-clu ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded)
Jul 9 19:18:46.385: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0
Jul 9 19:18:46.385: INFO: Container tectonic-stats-emitter ready: true, restart count 0
Jul 9 19:18:46.385: INFO: Container tectonic-stats-extender ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-channel-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:18:46.385: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container openshift-apiserver ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-network-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container dnsmasq ready: true, restart count 0
Jul 9 19:18:46.385: INFO: Container kubedns ready: true, restart count 0
Jul 9 19:18:46.385: INFO: Container sidecar ready: true, restart count 0
Jul 9 19:18:46.385: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container kube-apiserver ready: true, restart count 4
Jul 9 19:18:46.385: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-node-controller ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-alm-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container node-agent ready: true, restart count 4
Jul 9 19:18:46.385: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-utility-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container pod-checkpointer ready: true, restart count 0
Jul 9 19:18:46.385: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:18:46.385: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:18:46.385: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container openshift-controller-manager ready: true, restart count 3
Jul 9 19:18:46.385: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:18:46.385: INFO: Container tectonic-node-controller-operator ready: true, restart count 0
Jul 9 19:18:46.385: INFO: kube-core-operator-75d546fbbb-c7ctx started at
2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded) Jul 9 19:18:46.385: INFO: Container kube-core-operator ready: true, restart count 0 W0709 19:18:46.427641 11713 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 9 19:18:46.567: INFO: Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal STEP: Dumping a list of prepulled images on each node... Jul 9 19:18:46.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-mt788" for this suite. Jul 9 19:19:44.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:19:47.721: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-mt788, resource: bindings, ignored listing per whitelist Jul 9 19:19:48.939: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-mt788 deletion completed in 1m2.271310319s • Failure [105.439 seconds] [k8s.io] KubeletManagedEtcHosts /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669 should test kubelet managed /etc/hosts file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] [It] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 failed to execute command in pod test-pod, container busybox-3: unable to upgrade connection: container not found ("busybox-3") Expected error: <*errors.errorString | 0xc421007490>: { s: "unable to upgrade connection: container not found (\"busybox-3\")", } unable to upgrade connection: container not found ("busybox-3") not to have occurred /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/exec_util.go:104 
------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] ConfigMap /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:19:40.826: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:19:42.560: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-mq4jz STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating configMap with name configmap-test-volume-map-b4fc7a45-83e7-11e8-8fe2-28d244b00276 STEP: Creating a pod to test consume configMaps Jul 9 19:19:43.269: INFO: Waiting up to 5m0s for pod "pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-configmap-mq4jz" to be "success or failure" Jul 9 19:19:43.308: INFO: Pod "pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.425503ms Jul 9 19:19:45.341: INFO: Pod "pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.071906859s STEP: Saw pod success Jul 9 19:19:45.341: INFO: Pod "pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure" Jul 9 19:19:45.374: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276 container configmap-volume-test: STEP: delete the pod Jul 9 19:19:45.459: INFO: Waiting for pod pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276 to disappear Jul 9 19:19:45.490: INFO: Pod pod-configmaps-b50214c9-83e7-11e8-8fe2-28d244b00276 no longer exists [AfterEach] [sig-storage] ConfigMap /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:19:45.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-mq4jz" for this suite. 
Jul 9 19:19:51.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:19:55.501: INFO: namespace: e2e-tests-configmap-mq4jz, resource: bindings, ignored listing per whitelist Jul 9 19:19:55.605: INFO: namespace e2e-tests-configmap-mq4jz deletion completed in 10.039097537s • [SLOW TEST:14.779 seconds] [sig-storage] ConfigMap /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ S ------------------------------ [sig-storage] Projected should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:469 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:19:55.608: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:19:57.646: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-zd4xr STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858 [It] should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:469 STEP: Creating configMap with name projected-configmap-test-volume-map-bdf766a8-83e7-11e8-8fe2-28d244b00276 STEP: Creating a pod to test consume configMaps Jul 9 19:19:58.338: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-zd4xr" to be "success or failure" Jul 9 19:19:58.369: INFO: Pod "pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.096945ms Jul 9 19:20:00.402: INFO: Pod "pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.063701216s STEP: Saw pod success Jul 9 19:20:00.402: INFO: Pod "pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure" Jul 9 19:20:00.441: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276 container projected-configmap-volume-test: STEP: delete the pod Jul 9 19:20:00.515: INFO: Waiting for pod pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276 to disappear Jul 9 19:20:00.548: INFO: Pod pod-projected-configmaps-bdfcfa30-83e7-11e8-8fe2-28d244b00276 no longer exists [AfterEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:20:00.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zd4xr" for this suite. Jul 9 19:20:06.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:20:08.966: INFO: namespace: e2e-tests-projected-zd4xr, resource: bindings, ignored listing per whitelist Jul 9 19:20:10.513: INFO: namespace e2e-tests-projected-zd4xr deletion completed in 9.924000537s • [SLOW TEST:14.905 seconds] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34 should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:469 ------------------------------ S ------------------------------ [Feature:DeploymentConfig] deploymentconfigs with failing hook [Conformance] should get all logs from retried 
hooks [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:819 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:19:48.942: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:19:50.938: INFO: configPath is now "/tmp/e2e-test-cli-deployment-rhvgs-user.kubeconfig" Jul 9 19:19:50.938: INFO: The user is now "e2e-test-cli-deployment-rhvgs-user" Jul 9 19:19:50.938: INFO: Creating project "e2e-test-cli-deployment-rhvgs" Jul 9 19:19:51.216: INFO: Waiting on permissions in project "e2e-test-cli-deployment-rhvgs" ... [JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should get all logs from retried hooks [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:819 Jul 9 19:19:55.728: INFO: Running 'oc logs --config=/tmp/e2e-test-cli-deployment-rhvgs-user.kubeconfig --namespace=e2e-test-cli-deployment-rhvgs dc/hook' STEP: checking the logs for substrings --> pre: Running hook pod ... 
pre hook logs --> pre: Retrying hook pod (retry #1) pre hook logs [AfterEach] with failing hook [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:815 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:19:58.080: INFO: namespace : e2e-test-cli-deployment-rhvgs api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:20:16.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:27.219 seconds] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37 with failing hook [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:813 should get all logs from retried hooks [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:819 ------------------------------ [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:ImageLookup][registry] Image policy 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:20:16.165: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:ImageLookup][registry] Image policy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:20:18.277: INFO: configPath is now "/tmp/e2e-test-resolve-local-names-m7cbv-user.kubeconfig" Jul 9 19:20:18.277: INFO: The user is now "e2e-test-resolve-local-names-m7cbv-user" Jul 9 19:20:18.277: INFO: Creating project "e2e-test-resolve-local-names-m7cbv" Jul 9 19:20:18.629: INFO: Waiting on permissions in project "e2e-test-resolve-local-names-m7cbv" ... [It] should update standard Kube object image fields when local names are on [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:19 Jul 9 19:20:18.671: INFO: Running 'oc import-image --config=/tmp/e2e-test-resolve-local-names-m7cbv-user.kubeconfig --namespace=e2e-test-resolve-local-names-m7cbv busybox:latest --confirm' The import completed successfully. 
Name: busybox Namespace: e2e-test-resolve-local-names-m7cbv Created: Less than a second ago Labels: Annotations: openshift.io/image.dockerRepositoryCheck=2018-07-10T02:20:20Z Docker Pull Spec: docker-registry.default.svc:5000/e2e-test-resolve-local-names-m7cbv/busybox Image Lookup: local=false Unique Images: 1 Tags: 1 latest tagged from busybox:latest * busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 Less than a second ago Image Name: busybox:latest Docker Image: busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 Name: sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 Created: Less than a second ago Annotations: image.openshift.io/dockerLayersOrder=ascending Image Size: 724.6kB Image Created: 6 weeks ago Author: Arch: amd64 Command: sh Working Dir: User: Exposes Ports: Docker Labels: Environment: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin Jul 9 19:20:20.358: INFO: Running 'oc set image-lookup --config=/tmp/e2e-test-resolve-local-names-m7cbv-user.kubeconfig --namespace=e2e-test-resolve-local-names-m7cbv busybox' imagestream "busybox" updated [AfterEach] [Feature:ImageLookup][registry] Image policy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:20:20.894: INFO: namespace : e2e-test-resolve-local-names-m7cbv api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:ImageLookup][registry] Image policy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:20:26.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] [10.827 seconds] [Feature:ImageLookup][registry] Image policy 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:14 should update standard Kube object image fields when local names are on [Suite:openshift/conformance/parallel] [It] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:19 default image resolution is not configured, can't verify pod resolution /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/resolve.go:43 ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-api-machinery] Secrets /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:20:10.518: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:20:12.297: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-gp26s STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating secret with name secret-test-c6b47a70-83e7-11e8-8fe2-28d244b00276 STEP: Creating a pod to test consume secrets Jul 9 19:20:12.999: INFO: Waiting up to 5m0s for pod "pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-secrets-gp26s" to be "success or failure" Jul 9 19:20:13.038: INFO: Pod "pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 39.646084ms Jul 9 19:20:15.071: INFO: Pod "pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072119091s Jul 9 19:20:17.105: INFO: Pod "pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10677299s STEP: Saw pod success Jul 9 19:20:17.105: INFO: Pod "pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure" Jul 9 19:20:17.157: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276 container secret-env-test: STEP: delete the pod Jul 9 19:20:17.233: INFO: Waiting for pod pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276 to disappear Jul 9 19:20:17.276: INFO: Pod pod-secrets-c6b9ab6b-83e7-11e8-8fe2-28d244b00276 no longer exists [AfterEach] [sig-api-machinery] Secrets /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:20:17.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gp26s" for this suite. 
Jul 9 19:20:23.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:20:26.387: INFO: namespace: e2e-tests-secrets-gp26s, resource: bindings, ignored listing per whitelist Jul 9 19:20:27.380: INFO: namespace e2e-tests-secrets-gp26s deletion completed in 10.067981102s • [SLOW TEST:16.862 seconds] [sig-api-machinery] Secrets /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:30 should be consumable from pods in env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ SS ------------------------------ [sig-storage] Projected should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:20:26.993: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:20:28.974: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-lknkb STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858 [It] should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating secret with name projected-secret-test-d0c46100-83e7-11e8-992b-28d244b00276 STEP: Creating a pod to test consume secrets Jul 9 19:20:29.903: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276" in namespace "e2e-tests-projected-lknkb" to be "success or failure" Jul 9 19:20:29.944: INFO: Pod "pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 40.806839ms Jul 9 19:20:32.008: INFO: Pod "pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.104830832s STEP: Saw pod success Jul 9 19:20:32.008: INFO: Pod "pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276" satisfied condition "success or failure" Jul 9 19:20:32.058: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276 container secret-volume-test: STEP: delete the pod Jul 9 19:20:32.142: INFO: Waiting for pod pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276 to disappear Jul 9 19:20:32.178: INFO: Pod pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276 no longer exists [AfterEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:20:32.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lknkb" for this suite. Jul 9 19:20:38.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:20:40.743: INFO: namespace: e2e-tests-projected-lknkb, resource: bindings, ignored listing per whitelist Jul 9 19:20:42.537: INFO: namespace e2e-tests-projected-lknkb deletion completed in 10.317801375s • [SLOW TEST:15.544 seconds] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34 should be consumable in multiple volumes in a pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [Feature:Builds][pullsecret][Conformance] docker build using a pull secret Building from a template should create a docker build that pulls using a secret run it [Suite:openshift/conformance/parallel] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:44 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:Builds][pullsecret][Conformance] docker build using a pull secret /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:20:27.383: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:Builds][pullsecret][Conformance] docker build using a pull secret /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:20:29.233: INFO: configPath is now "/tmp/e2e-test-docker-build-pullsecret-r2mt4-user.kubeconfig" Jul 9 19:20:29.233: INFO: The user is now "e2e-test-docker-build-pullsecret-r2mt4-user" Jul 9 19:20:29.233: INFO: Creating project "e2e-test-docker-build-pullsecret-r2mt4" Jul 9 19:20:29.392: INFO: Waiting on permissions in project "e2e-test-docker-build-pullsecret-r2mt4" ... 
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:26
Jul 9 19:20:29.469: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options: apparmor seccomp
 Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries: 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:30
STEP: waiting for builder service account
[It] should create a docker build that pulls using a secret run it [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:44
STEP: calling oc create -f "/tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/test-docker-build-pullsecret.json"
Jul 9 19:20:29.604: INFO: Running 'oc create --config=/tmp/e2e-test-docker-build-pullsecret-r2mt4-user.kubeconfig --namespace=e2e-test-docker-build-pullsecret-r2mt4 -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/test-docker-build-pullsecret.json'
imagestream.image.openshift.io "image1" created
buildconfig.build.openshift.io "docker-build" created
buildconfig.build.openshift.io "docker-build-pull" created
STEP: starting a build
Jul 9 19:20:29.987: INFO: Running 'oc start-build --config=/tmp/e2e-test-docker-build-pullsecret-r2mt4-user.kubeconfig --namespace=e2e-test-docker-build-pullsecret-r2mt4 docker-build -o=name'
Jul 9 19:20:30.249: INFO: start-build output with args [docker-build -o=name]:
Error>
StdOut> build/docker-build-1
StdErr>
Jul 9 19:20:30.251: INFO: Waiting for docker-build-1 to complete
Jul 9 19:20:36.331: INFO: Done waiting for docker-build-1: util.BuildResult{BuildPath:"build/docker-build-1", BuildName:"docker-build-1", StartBuildStdErr:"", StartBuildStdOut:"build/docker-build-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421140900), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004c5a0)} with error:
STEP: starting a second build that pulls the image from the first build
Jul 9 19:20:36.331: INFO: Running 'oc start-build --config=/tmp/e2e-test-docker-build-pullsecret-r2mt4-user.kubeconfig --namespace=e2e-test-docker-build-pullsecret-r2mt4 docker-build-pull -o=name'
Jul 9 19:20:36.645: INFO: start-build output with args [docker-build-pull -o=name]:
Error>
StdOut> build/docker-build-pull-1
StdErr>
Jul 9 19:20:36.646: INFO: Waiting for docker-build-pull-1 to complete
Jul 9 19:20:42.742: INFO: Done waiting for docker-build-pull-1: util.BuildResult{BuildPath:"build/docker-build-pull-1", BuildName:"docker-build-pull-1", StartBuildStdErr:"", StartBuildStdOut:"build/docker-build-pull-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc420bb4f00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004c5a0)} with error:
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:36
[AfterEach] [Feature:Builds][pullsecret][Conformance] docker build using a pull secret
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:20:42.812: INFO: namespace : e2e-test-docker-build-pullsecret-r2mt4 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][pullsecret][Conformance] docker build using a pull secret
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:48.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:21.558 seconds]
[Feature:Builds][pullsecret][Conformance] docker build using a pull secret
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:12
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:24
  Building from a template
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:43
    should create a docker build that pulls using a secret run it [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:44
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is
not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:20:48.947: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:20:50.734: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-hfpvx
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:20:51.453: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-downward-api-hfpvx" to be "success or failure"
Jul 9 19:20:51.493: INFO: Pod "downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 39.998508ms
Jul 9 19:20:53.529: INFO: Pod "downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075876986s
STEP: Saw pod success
Jul 9 19:20:53.529: INFO: Pod "downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:20:53.565: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:20:53.637: INFO: Waiting for pod downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:20:53.669: INFO: Pod downwardapi-volume-dda26113-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:53.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hfpvx" for this suite.
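The pod exercised above mounts a downward API volume that exposes limits.cpu via resourceFieldRef while setting no CPU limit on the container, so the kubelet substitutes the node's allocatable CPU. A minimal sketch of such a pod follows; field values here are illustrative, not the framework's exact spec.

```yaml
# Illustrative sketch; names and paths are assumptions, not the e2e framework's spec.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0
    # No resources.limits.cpu here, so limits.cpu below falls back to node allocatable CPU.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```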
Jul 9 19:20:59.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:03.187: INFO: namespace: e2e-tests-downward-api-hfpvx, resource: bindings, ignored listing per whitelist
Jul 9 19:21:03.809: INFO: namespace e2e-tests-downward-api-hfpvx deletion completed in 10.105937921s
• [SLOW TEST:14.862 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] HostPath should support subPath [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:89
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:17:24.970: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:17:26.667: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-hostpath-2j5jw
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach]
[sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should support subPath [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:89
STEP: Creating a pod to test hostPath subPath
Jul 9 19:17:27.363: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-2j5jw" to be "success or failure"
Jul 9 19:17:27.412: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 48.766725ms
Jul 9 19:17:29.443: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2.080036384s
[... ~90 further polls at ~2s intervals from 19:17:31 through 19:20:28, all Phase="Running", elided ...]
Jul 9 19:20:28.572: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 3m1.209025858s
Jul 9 19:20:30.605: INFO: Pod "pod-host-path-test": Phase="Failed", Reason="", readiness=false.
Elapsed: 3m3.242171498s
Jul 9 19:20:30.665: INFO: Output of node "ip-10-0-130-54.us-west-2.compute.internal" pod "pod-host-path-test" container "test-container-1":
content of file "/test-volume/test-file": mount-tester new file
mode of file "/test-volume/test-file": -rw-r--r--
Jul 9 19:20:30.734: INFO: Output of node "ip-10-0-130-54.us-west-2.compute.internal" pod "pod-host-path-test" container "test-container-2":
Error reading file /test-volume/sub-path/test-file: open /test-volume/sub-path/test-file: no such file or directory, retrying
[... identical "Error reading file ... no such file or directory, retrying" line repeated for the remainder of the container log, elided ...]
STEP: delete the pod
Jul 9 19:20:30.894: INFO: Waiting for pod pod-host-path-test to disappear
Jul 9 19:20:30.929: INFO: Pod pod-host-path-test no longer exists
Jul 9 19:20:30.929: INFO: Unexpected
error occurred: expected pod "pod-host-path-test" success: pod "pod-host-path-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:0001-01-01 00:00:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.0.130.54 PodIP:10.2.2.61 StartTime:2018-07-09 19:17:27 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:test-container-1 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:17:28 -0700 PDT,ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8} {Name:test-container-2 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:20:28 -0700 
PDT,ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd}] QOSClass:BestEffort}
[AfterEach] [sig-storage] HostPath
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-hostpath-2j5jw".
STEP: Found 7 events.
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:27 -0700 PDT - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned e2e-tests-hostpath-2j5jw/pod-host-path-test to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0" already present on machine
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0" already present on machine
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:20:30.965: INFO: At 2018-07-09 19:17:28 -0700 PDT - event for pod-host-path-test: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:20:31.095: INFO: POD NODE PHASE GRACE CONDITIONS
Jul 9 19:20:31.095: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: docker-build-1-build ip-10-0-130-54.us-west-2.compute.internal Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:30 -0700 PDT ContainersNotInitialized containers with incomplete status: [manage-dockerfile]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:30 -0700 PDT ContainersNotReady containers with unready status: [docker-build]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [docker-build]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:30 -0700 PDT }]
Jul 9 19:20:31.095: INFO: execpod98j4h ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:16 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:15 -0700 PDT }]
Jul 9 19:20:31.095: INFO: frontend-1-build ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:27 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:50 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:14:21 -0700 PDT }]
Jul 9 19:20:31.095: INFO: pod-configmaps-b625b422-83e7-11e8-bd2e-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:19:45 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:19:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:19:45 -0700 PDT }]
Jul 9 19:20:31.095: INFO: pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:29 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:29 -0700 PDT ContainersNotReady containers with unready status: [secret-volume-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [secret-volume-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:20:29 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-controller-manager-558dc6fb98-q6vr5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01
00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-core-operator-75d546fbbb-c7ctx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:20:31.095: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000
UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }]
Jul 9 19:20:31.095: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:20:31.095: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: kube-addon-operator-675f99d7f8-c6pdt ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-node-agent-rrwlg ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }]
Jul 9 19:20:31.095: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }]
Jul 9 19:20:31.095: INFO:
Jul 9 19:20:31.137: INFO: Logging node info for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:20:31.173: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:79048,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700
PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365150208 0} {} 8169092Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260292608 0} {} 8066692Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:20:27 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:20:27 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:20:27 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:20:27 
-0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:20:27 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-frlkw/test@sha256:ee11e7c7dbb2d609aaa42c8806ef1bf5663df95dd925e6ab424b4439dbaf75fd docker-registry.default.svc:5000/e2e-test-build-valuefrom-frlkw/test:latest] 613134548} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 613134454} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 613133841} 
{[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} {[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} 
{[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} {[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} {[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} {[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:91955c14f978a0f48918eecc8b3772faf1615e943daccf9bb051a51cba30422f] 465041680} {[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} 
{[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 openshift/origin-haproxy-router:latest] 394965919} {[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} {[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 
centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/directory-sync@sha256:e5e7fe901868853d89c2c0697cc88f0686c6ba1178ca045ec57bfd18e7000048 quay.io/coreos/directory-sync:v0.0.2] 38433928} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[k8s.gcr.io/addon-resizer@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb k8s.gcr.io/addon-resizer:2.1] 26450138} {[quay.io/coreos/tectonic-error-server@sha256:aefa0a012e103bee299c17e798e5830128588b6ef5d4d1f6bc8ae5804bc4d8cd quay.io/coreos/tectonic-error-server:1.1] 12714516} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64@sha256:bdaecec5adfa7c79e9525c0992fdab36c2d68066f5e91eff0d1d9e8d73c654ea gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 
8407119}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:20:31.173: INFO: Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:20:31.202: INFO: Logging pods the kubelet thinks is on node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:20:31.311: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded) Jul 9 19:20:31.311: INFO: Container alert-buffer ready: true, restart count 0 Jul 9 19:20:31.311: INFO: Container alertmanager ready: true, restart count 0 Jul 9 19:20:31.311: INFO: Container alertmanager-proxy ready: true, restart count 0 Jul 9 19:20:31.311: INFO: Container alerts-proxy ready: true, restart count 0 Jul 9 19:20:31.311: INFO: Container prom-proxy ready: true, restart count 0 Jul 9 19:20:31.311: INFO: Container prometheus ready: true, restart count 0 Jul 9 19:20:31.311: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Container kube-proxy ready: true, restart count 0 Jul 9 19:20:31.311: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Container registry ready: true, restart count 0 Jul 9 19:20:31.311: INFO: docker-build-1-build started at 2018-07-09 19:20:30 -0700 PDT (1+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Init container manage-dockerfile ready: false, restart count 0 Jul 9 19:20:31.311: INFO: Container docker-build ready: false, restart count 0 Jul 9 19:20:31.311: INFO: execpod98j4h started at 2018-07-09 19:14:15 -0700 PDT (0+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Container exec ready: true, restart count 0 Jul 9 19:20:31.311: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded) Jul 9 19:20:31.311: INFO: Container install-cni ready: true, restart count 0 Jul 9 19:20:31.311: INFO: Container kube-flannel ready: true, restart count 0 Jul 9 
19:20:31.311: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded) Jul 9 19:20:31.311: INFO: Container metrics-server ready: true, restart count 0 Jul 9 19:20:31.311: INFO: Container metrics-server-nanny ready: true, restart count 0 Jul 9 19:20:31.311: INFO: webconsole-6698d4fbbc-rgsw2 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Container webconsole ready: true, restart count 0 Jul 9 19:20:31.311: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Container node-agent ready: true, restart count 3 Jul 9 19:20:31.311: INFO: directory-sync-d84d84d9f-j7pr6 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Container directory-sync ready: true, restart count 0 Jul 9 19:20:31.311: INFO: pod-projected-secrets-d0caac8f-83e7-11e8-992b-28d244b00276 started at 2018-07-09 19:20:29 -0700 PDT (0+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Container secret-volume-test ready: false, restart count 0 Jul 9 19:20:31.311: INFO: pod-configmaps-b625b422-83e7-11e8-bd2e-28d244b00276 started at 2018-07-09 19:19:45 -0700 PDT (0+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Container configmap-volume-test ready: true, restart count 0 Jul 9 19:20:31.311: INFO: frontend-1-build started at 2018-07-09 19:14:21 -0700 PDT (2+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Init container git-clone ready: true, restart count 0 Jul 9 19:20:31.311: INFO: Init container manage-dockerfile ready: true, restart count 0 Jul 9 19:20:31.311: INFO: Container sti-build ready: false, restart count 0 Jul 9 19:20:31.311: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Container default-http-backend ready: true, restart count 0 Jul 9 
19:20:31.311: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:20:31.311: INFO: Container router ready: true, restart count 0 W0709 19:20:31.345070 11714 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 9 19:20:31.473: INFO: Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:20:31.473: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.274794s} Jul 9 19:20:31.473: INFO: Logging node info for node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:20:31.505: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:79139,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule 
}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:20:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:20:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:20:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:20:30 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:20:30 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS ip-10-0-141-201.us-west-2.compute.internal} {Hostname ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} 
{[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:20:31.505: INFO: Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:20:31.534: INFO: Logging pods the kubelet thinks is on node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:21:01.575: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes ip-10-0-141-201.us-west-2.compute.internal:10250) Jul 9 19:21:01.575: INFO: Logging node info for node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:21:01.605: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:79463,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: 
ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig: master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:20:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:20:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:20:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:20:59 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:20:59 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} 
{InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} {[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} 
{[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} {[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} {[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b 
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} {[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:21:01.605: INFO: Logging kubelet events 
for node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:21:01.638: INFO: Logging pods the kubelet thinks is on node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:21:01.741: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded) Jul 9 19:21:01.741: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0 Jul 9 19:21:01.741: INFO: Container tectonic-stats-emitter ready: true, restart count 0 Jul 9 19:21:01.741: INFO: Container tectonic-stats-extender ready: true, restart count 0 Jul 9 19:21:01.741: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container tectonic-channel-operator ready: true, restart count 0 Jul 9 19:21:01.741: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container kube-proxy ready: true, restart count 0 Jul 9 19:21:01.741: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container openshift-apiserver ready: true, restart count 0 Jul 9 19:21:01.741: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container tectonic-network-operator ready: true, restart count 0 Jul 9 19:21:01.741: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded) Jul 9 19:21:01.741: INFO: Container dnsmasq ready: true, restart count 0 Jul 9 19:21:01.741: INFO: Container kubedns ready: true, restart count 0 Jul 9 19:21:01.741: INFO: Container sidecar ready: true, restart count 0 Jul 9 19:21:01.741: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container kube-scheduler ready: true, restart count 
0 Jul 9 19:21:01.741: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container tectonic-clu ready: true, restart count 0 Jul 9 19:21:01.741: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container kube-apiserver ready: true, restart count 4 Jul 9 19:21:01.741: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container tectonic-node-controller ready: true, restart count 0 Jul 9 19:21:01.741: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container tectonic-alm-operator ready: true, restart count 0 Jul 9 19:21:01.741: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0 Jul 9 19:21:01.741: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container node-agent ready: true, restart count 4 Jul 9 19:21:01.741: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container pod-checkpointer ready: true, restart count 0 Jul 9 19:21:01.741: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded) Jul 9 19:21:01.741: INFO: Container install-cni ready: true, restart count 0 Jul 9 19:21:01.741: INFO: Container kube-flannel ready: true, restart count 0 Jul 9 19:21:01.741: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container 
openshift-controller-manager ready: true, restart count 3 Jul 9 19:21:01.741: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container tectonic-node-controller-operator ready: true, restart count 0 Jul 9 19:21:01.741: INFO: kube-core-operator-75d546fbbb-c7ctx started at 2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container kube-core-operator ready: true, restart count 0 Jul 9 19:21:01.741: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container tectonic-utility-operator ready: true, restart count 0 Jul 9 19:21:01.741: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container kube-addon-operator ready: true, restart count 0 Jul 9 19:21:01.741: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at (0+0 container statuses recorded) Jul 9 19:21:01.741: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded) Jul 9 19:21:01.741: INFO: Container kube-controller-manager ready: true, restart count 1 W0709 19:21:01.775908 11714 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 9 19:21:01.867: INFO: Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal STEP: Dumping a list of prepulled images on each node... Jul 9 19:21:01.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-2j5jw" for this suite. 
Jul 9 19:21:08.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:21:10.794: INFO: namespace: e2e-tests-hostpath-2j5jw, resource: bindings, ignored listing per whitelist Jul 9 19:21:11.468: INFO: namespace e2e-tests-hostpath-2j5jw deletion completed in 9.534952564s • Failure [226.498 seconds] [sig-storage] HostPath /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should support subPath [Suite:openshift/conformance/parallel] [Suite:k8s] [It] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:89 Expected error: <*errors.errorString | 0xc422109f10>: { s: "expected pod \"pod-host-path-test\" success: pod \"pod-host-path-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:0001-01-01 00:00:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.0.130.54 PodIP:10.2.2.61 StartTime:2018-07-09 19:17:27 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:test-container-1 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2018-07-09 
19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:17:28 -0700 PDT,ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8} {Name:test-container-2 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:20:28 -0700 PDT,ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd}] QOSClass:BestEffort}", } expected pod "pod-host-path-test" success: pod "pod-host-path-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:0001-01-01 00:00:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:PodScheduled Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-09 19:17:27 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.0.130.54 PodIP:10.2.2.61 StartTime:2018-07-09 19:17:27 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:test-container-1 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:17:28 -0700 PDT,ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://2cd77b44fb6fdc32e044424e85163cc9d9a912bcc3ab095019a727af01cab8f8} {Name:test-container-2 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2018-07-09 19:17:28 -0700 PDT,FinishedAt:2018-07-09 19:20:28 -0700 PDT,ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:dc4e2dcfbde16249c4662de673295d00778577bc2e2ca7013a1b85d4f47398ca ContainerID:docker://f08d07a6f13f69f8f0450200ecff44ec62016d3cee4a8dc6778b39ab9588becd}] QOSClass:BestEffort} not to have occurred /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:2290 ------------------------------ [sig-storage] Downward API volume should update labels on modification [Conformance] [Suite:openshift/conformance/parallel] 
[Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:20:42.538: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:20:44.601: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-qpnsn STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38 [It] should update labels on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating the pod Jul 9 19:20:48.175: INFO: Successfully updated pod "labelsupdateda06d733-83e7-11e8-992b-28d244b00276" [AfterEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:20:50.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qpnsn" for this suite. 
Jul 9 19:21:12.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:15.398: INFO: namespace: e2e-tests-downward-api-qpnsn, resource: bindings, ignored listing per whitelist
Jul 9 19:21:16.709: INFO: namespace e2e-tests-downward-api-qpnsn deletion completed in 26.411067994s

• [SLOW TEST:34.170 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
  should update labels on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-api-machinery] Downward API
  should provide pod UID as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] Downward API
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:03.810: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:21:05.487: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-bcvkn
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward api env vars
Jul 9 19:21:06.209: INFO: Waiting up to 5m0s for pod "downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-downward-api-bcvkn" to be "success or failure"
Jul 9 19:21:06.241: INFO: Pod "downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.021781ms
Jul 9 19:21:08.278: INFO: Pod "downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068938105s
Jul 9 19:21:10.317: INFO: Pod "downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107594978s
STEP: Saw pod success
Jul 9 19:21:10.317: INFO: Pod "downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:21:10.352: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276 container dapi-container:
STEP: delete the pod
Jul 9 19:21:10.434: INFO: Waiting for pod downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:21:10.471: INFO: Pod downward-api-e67058f8-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] Downward API
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:21:10.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bcvkn" for this suite.
Jul 9 19:21:16.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:19.200: INFO: namespace: e2e-tests-downward-api-bcvkn, resource: bindings, ignored listing per whitelist
Jul 9 19:21:20.501: INFO: namespace e2e-tests-downward-api-bcvkn deletion completed in 9.989426573s

• [SLOW TEST:16.691 seconds]
[sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:37
  should provide pod UID as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:19:42.933: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:19:44.473: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-bc6g9
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:19:45.108: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating configMap with name configmap-test-upd-b620d07a-83e7-11e8-bd2e-28d244b00276
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b620d07a-83e7-11e8-bd2e-28d244b00276
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:20:58.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bc6g9" for this suite.
Jul 9 19:21:20.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:23.203: INFO: namespace: e2e-tests-configmap-bc6g9, resource: bindings, ignored listing per whitelist
Jul 9 19:21:24.227: INFO: namespace e2e-tests-configmap-bc6g9 deletion completed in 25.372270298s

• [SLOW TEST:101.294 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command and arguments [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Docker Containers
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:20.503: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:21:22.197: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-containers-f9hmf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test override all
Jul 9 19:21:22.891: INFO: Waiting up to 5m0s for pod "client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-containers-f9hmf" to be "success or failure"
Jul 9 19:21:22.922: INFO: Pod "client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.522108ms
Jul 9 19:21:24.953: INFO: Pod "client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061966293s
Jul 9 19:21:26.984: INFO: Pod "client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093040223s
STEP: Saw pod success
Jul 9 19:21:26.984: INFO: Pod "client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:21:27.020: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:21:27.092: INFO: Waiting for pod client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:21:27.124: INFO: Pod client-containers-f06198fc-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:21:27.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-f9hmf" for this suite.
Jul 9 19:21:33.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:36.736: INFO: namespace: e2e-tests-containers-f9hmf, resource: bindings, ignored listing per whitelist
Jul 9 19:21:37.289: INFO: namespace e2e-tests-containers-f9hmf deletion completed in 10.128596748s

• [SLOW TEST:16.786 seconds]
[k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should be able to override the image's default command and arguments [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Projected
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:87
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:24.230: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:21:25.848: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-pjp6w
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:87
Jul 9 19:21:26.743: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secret-namespace-gr7ll
STEP: Creating projection with secret that has name projected-secret-test-f29dc6de-83e7-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:21:27.344: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276" in namespace "e2e-tests-projected-pjp6w" to be "success or failure"
Jul 9 19:21:27.373: INFO: Pod "pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 28.949029ms
Jul 9 19:21:29.402: INFO: Pod "pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058308202s
Jul 9 19:21:31.439: INFO: Pod "pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095428007s
STEP: Saw pod success
Jul 9 19:21:31.439: INFO: Pod "pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:21:31.470: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276 container projected-secret-volume-test:
STEP: delete the pod
Jul 9 19:21:31.543: INFO: Waiting for pod pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:21:31.572: INFO: Pod pod-projected-secrets-f30ac01d-83e7-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:21:31.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pjp6w" for this suite.
Jul 9 19:21:37.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:39.688: INFO: namespace: e2e-tests-projected-pjp6w, resource: bindings, ignored listing per whitelist
Jul 9 19:21:41.290: INFO: namespace e2e-tests-projected-pjp6w deletion completed in 9.679346612s
STEP: Destroying namespace "e2e-tests-secret-namespace-gr7ll" for this suite.
Jul 9 19:21:47.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:49.741: INFO: namespace: e2e-tests-secret-namespace-gr7ll, resource: bindings, ignored listing per whitelist
Jul 9 19:21:50.910: INFO: namespace e2e-tests-secret-namespace-gr7ll deletion completed in 9.619386792s

• [SLOW TEST:26.680 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:87
------------------------------
[sig-storage] Projected
  should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:37.290: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:21:39.041: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-k98n9
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:21:39.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-k98n9" to be "success or failure"
Jul 9 19:21:39.790: INFO: Pod "downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 30.60253ms
Jul 9 19:21:41.824: INFO: Pod "downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064754136s
Jul 9 19:21:43.862: INFO: Pod "downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102430482s
STEP: Saw pod success
Jul 9 19:21:43.862: INFO: Pod "downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:21:43.893: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:21:43.966: INFO: Waiting for pod downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:21:44.000: INFO: Pod downwardapi-volume-fa70d37c-83e7-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:21:44.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k98n9" for this suite.
Jul 9 19:21:50.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:21:53.641: INFO: namespace: e2e-tests-projected-k98n9, resource: bindings, ignored listing per whitelist
Jul 9 19:21:54.031: INFO: namespace e2e-tests-projected-k98n9 deletion completed in 9.979733746s

• [SLOW TEST:16.741 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide podname only [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[Feature:Builds] build have source revision metadata started build
  should contain source revision information [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:41
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds] build have source revision metadata
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:11.475: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds] build have source revision metadata
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:21:13.068: INFO: configPath is now "/tmp/e2e-test-cli-build-revision-fnrv9-user.kubeconfig"
Jul 9 19:21:13.068: INFO: The user is now "e2e-test-cli-build-revision-fnrv9-user"
Jul 9 19:21:13.068: INFO: Creating project "e2e-test-cli-build-revision-fnrv9"
Jul 9 19:21:13.247: INFO: Waiting on permissions in project "e2e-test-cli-build-revision-fnrv9" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:22
Jul 9 19:21:13.309: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:26
STEP: waiting for builder service account
Jul 9 19:21:13.452: INFO: Running 'oc create --config=/tmp/e2e-test-cli-build-revision-fnrv9-user.kubeconfig --namespace=e2e-test-cli-build-revision-fnrv9 -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/test-build-revision.json'
buildconfig.build.openshift.io "sample-build" created
[It] should contain source revision information [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:41
STEP: starting the build
Jul 9 19:21:13.718: INFO: Running 'oc start-build --config=/tmp/e2e-test-cli-build-revision-fnrv9-user.kubeconfig --namespace=e2e-test-cli-build-revision-fnrv9 sample-build -o=name'
Jul 9 19:21:13.999: INFO: start-build output with args [sample-build -o=name]:
Error>
StdOut> build/sample-build-1
StdErr>
Jul 9 19:21:14.000: INFO: Waiting for sample-build-1 to complete
Jul 9 19:21:50.092: INFO: Done waiting for sample-build-1: util.BuildResult{BuildPath:"build/sample-build-1", BuildName:"sample-build-1", StartBuildStdErr:"", StartBuildStdOut:"build/sample-build-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc4214f4c00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e3860)} with error:
STEP: verifying the status of "build/sample-build-1"
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:33
[AfterEach] [Feature:Builds] build have source revision metadata
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:21:50.244: INFO: namespace : e2e-test-cli-build-revision-fnrv9 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds] build have source revision metadata
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:21:56.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:44.870 seconds]
[Feature:Builds] build have source revision metadata
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:14
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:21
  started build
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:40
    should contain source revision information [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:41
------------------------------
S
------------------------------
[Feature:Builds][Conformance] imagechangetriggers
  imagechangetriggers should trigger builds of all types [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:42
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance] imagechangetriggers
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:54.032: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance] imagechangetriggers
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:21:55.831: INFO: configPath is now "/tmp/e2e-test-imagechangetriggers-bdpwn-user.kubeconfig"
Jul 9 19:21:55.831: INFO: The user is now "e2e-test-imagechangetriggers-bdpwn-user"
Jul 9 19:21:55.831: INFO: Creating project "e2e-test-imagechangetriggers-bdpwn"
Jul 9 19:21:56.015: INFO: Waiting on permissions in project "e2e-test-imagechangetriggers-bdpwn" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:25
Jul 9 19:21:56.062: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:29
STEP: waiting for builder service account
[It] imagechangetriggers should trigger builds of all types [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:42
Jul 9 19:21:56.207: INFO: Running 'oc create --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-imagechangetriggers-bdpwn -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/test-imagechangetriggers.yaml'
Jul 9 19:21:57.056: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc create --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-imagechangetriggers-bdpwn -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/test-imagechangetriggers.yaml] []
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "bc-source" created
buildconfig.build.openshift.io "bc-docker" created
buildconfig.build.openshift.io "bc-custom" created
Error from server: Jenkins pipeline template openshift/jenkins-ephemeral not found
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "bc-source" created
buildconfig.build.openshift.io "bc-docker" created
buildconfig.build.openshift.io "bc-custom" created
Error from server: Jenkins pipeline template openshift/jenkins-ephemeral not found
[] 0xc42157e6c0 exit status 1 true [0xc42113e058 0xc42113e080 0xc42113e080] [0xc42113e058 0xc42113e080] [0xc42113e060 0xc42113e078] [0x916090 0x916190] 0xc4210f51a0 }:
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "bc-source" created
buildconfig.build.openshift.io "bc-docker" created
buildconfig.build.openshift.io "bc-custom" created
Error from server: Jenkins pipeline template openshift/jenkins-ephemeral not found
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "bc-source" created
buildconfig.build.openshift.io "bc-docker" created
buildconfig.build.openshift.io "bc-custom" created
Error from server: Jenkins pipeline template openshift/jenkins-ephemeral not found
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:35
Jul 9 19:21:57.091: INFO: Dumping pod state for namespace e2e-test-imagechangetriggers-bdpwn
Jul 9 19:21:57.091: INFO: Running 'oc get --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-imagechangetriggers-bdpwn pods -o yaml'
Jul 9 19:21:57.369: INFO: apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
[AfterEach] [Feature:Builds][Conformance] imagechangetriggers
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:21:57.477: INFO: namespace : e2e-test-imagechangetriggers-bdpwn api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance] imagechangetriggers
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:22:03.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• Failure [9.575 seconds]
[Feature:Builds][Conformance] imagechangetriggers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:16
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:24
  imagechangetriggers should trigger builds of all types [Suite:openshift/conformance/parallel] [It]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:42

  Expected error:
      <*util.ExitError | 0xc4220cff20>: {
          Cmd: "oc create --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-imagechangetriggers-bdpwn -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/test-imagechangetriggers.yaml",
          StdErr: "imagestream.image.openshift.io \"nodejs-ex\" created\nbuildconfig.build.openshift.io \"bc-source\" created\nbuildconfig.build.openshift.io \"bc-docker\" created\nbuildconfig.build.openshift.io \"bc-custom\" created\nError from server: Jenkins pipeline template openshift/jenkins-ephemeral not found",
          ExitError: {
              ProcessState: {
                  pid: 16527,
                  status: 256,
                  rusage: {
                      Utime: {Sec: 0, Usec: 136000},
                      Stime: {Sec: 0, Usec: 4000},
                      Maxrss: 97516,
                      Ixrss: 0,
                      Idrss: 0,
                      Isrss: 0,
                      Minflt: 6969,
                      Majflt: 0,
                      Nswap: 0,
                      Inblock: 0,
                      Oublock: 0,
                      Msgsnd: 0,
                      Msgrcv: 0,
                      Nsignals: 0,
                      Nvcsw: 736,
                      Nivcsw: 58,
                  },
              },
              Stderr: nil,
          },
      }
      exit status 1
  not to have occurred

  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:44
------------------------------
SS
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:22:03.610: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:22:03.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:22:03.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Area:Networking] network isolation
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10
  when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
    should allow communication from non-default to default namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:53

    Jul 9 19:22:03.610: This plugin does not isolate namespaces by default.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
SS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:21:56.347: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:21:58.562: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-tmdcm
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-0618e1eb-83e8-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:21:59.352: INFO: Waiting up to 5m0s for pod "pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276" in namespace "e2e-tests-secrets-tmdcm" to be "success or failure"
Jul 9 19:21:59.388: INFO: Pod "pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 35.321902ms
Jul 9 19:22:01.417: INFO: Pod "pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065057597s
STEP: Saw pod success
Jul 9 19:22:01.417: INFO: Pod "pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:22:01.455: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276 container secret-volume-test:
STEP: delete the pod
Jul 9 19:22:01.536: INFO: Waiting for pod pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276 to disappear
Jul 9 19:22:01.575: INFO: Pod pod-secrets-061de3b2-83e8-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:22:01.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tmdcm" for this suite.
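The framework lines above poll the pod every couple of seconds until it reaches a terminal phase, within a 5m0s budget ("Waiting up to 5m0s for pod ... to be \"success or failure\""). A rough standalone sketch of that polling logic, with a hypothetical `get_phase` callable standing in for a real API client:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod is Succeeded or Failed, or timeout expires."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)  # the log shows roughly 2s between polls
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence matching the log: Pending, then Succeeded.
phases = iter(["Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None))  # Succeeded
```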
Jul 9 19:22:07.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:22:10.011: INFO: namespace: e2e-tests-secrets-tmdcm, resource: bindings, ignored listing per whitelist
Jul 9 19:22:11.419: INFO: namespace e2e-tests-secrets-tmdcm deletion completed in 9.808353935s

• [SLOW TEST:15.072 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified volume on tmpfs should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:22:03.613: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:22:05.347: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-qbpr9
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 9 19:22:06.225: INFO: Waiting up to 5m0s for pod "pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276" in namespace "e2e-tests-emptydir-qbpr9" to be "success or failure"
Jul 9 19:22:06.258: INFO: Pod "pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.066381ms
Jul 9 19:22:08.296: INFO: Pod "pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.071268095s
STEP: Saw pod success
Jul 9 19:22:08.296: INFO: Pod "pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:22:08.333: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:22:08.409: INFO: Waiting for pod pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:22:08.439: INFO: Pod pod-0a34e8ac-83e8-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:22:08.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qbpr9" for this suite.
Jul 9 19:22:14.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:22:16.785: INFO: namespace: e2e-tests-emptydir-qbpr9, resource: bindings, ignored listing per whitelist
Jul 9 19:22:18.670: INFO: namespace e2e-tests-emptydir-qbpr9 deletion completed in 10.197573968s

• [SLOW TEST:15.057 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  when FSGroup is specified
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
    volume on tmpfs should have the correct mode using FSGroup [Suite:openshift/conformance/parallel] [Suite:k8s]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
------------------------------
[Feature:Prometheus][Feature:Builds] Prometheus when installed to the cluster should start and expose a secured proxy and verify build metrics [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:36
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Prometheus][Feature:Builds] Prometheus
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:13:52.928: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Prometheus][Feature:Builds] Prometheus
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:13:55.279: INFO: configPath is now "/tmp/e2e-test-prometheus-4fqst-user.kubeconfig"
Jul 9 19:13:55.279: INFO: The user is now "e2e-test-prometheus-4fqst-user"
Jul 9 19:13:55.279: INFO: Creating project "e2e-test-prometheus-4fqst"
Jul 9 19:13:55.406: INFO: Waiting on permissions in project "e2e-test-prometheus-4fqst" ...
[BeforeEach] [Feature:Prometheus][Feature:Builds] Prometheus
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:31
[It] should start and expose a secured proxy and verify build metrics [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:36
Jul 9 19:14:15.291: INFO: Creating new exec pod
STEP: verifying the oauth-proxy reports a 403 on the root URL
Jul 9 19:14:17.440: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -k -s -o /dev/null -w '%{http_code}' "https://prometheus.kube-system.svc:443"'
Jul 9 19:14:18.221: INFO: stderr: ""
STEP: verifying a service account token is able to authenticate
Jul 9 19:14:18.221: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -k -s -H 'Authorization: Bearer
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' -o /dev/null -w '%{http_code}' "https://prometheus.kube-system.svc:443/graph"' Jul 9 19:14:18.987: INFO: stderr: "" STEP: waiting for builder service account STEP: calling oc new-app /tmp/fixture-testdata-dir574852015/examples/jenkins/application-template.json Jul 9 19:14:19.193: INFO: Running 'oc new-app --config=/tmp/e2e-test-prometheus-4fqst-user.kubeconfig --namespace=e2e-test-prometheus-4fqst /tmp/fixture-testdata-dir574852015/examples/jenkins/application-template.json' --> Deploying template "e2e-test-prometheus-4fqst/nodejs-helloworld-sample" for "/tmp/fixture-testdata-dir574852015/examples/jenkins/application-template.json" to project e2e-test-prometheus-4fqst nodejs-helloworld-sample --------- This example shows how to create a simple nodejs application in openshift origin v3 * With parameters: * Memory Limit=512Mi * Namespace=openshift * Administrator Username=adminFGD # generated * Administrator Password=D1026mG7 # generated --> Creating resources ... 
     service "frontend-prod" created
     route "frontend" created
     deploymentconfig "frontend-prod" created
     service "frontend" created
     imagestream "origin-nodejs-sample" created
     imagestream "origin-nodejs-sample2" created
     imagestream "origin-nodejs-sample3" created
     imagestream "nodejs-010-centos7" created
     buildconfig "frontend" created
     deploymentconfig "frontend" created
--> Success
    Access your application via route 'frontend-e2e-test-prometheus-4fqst.yifan-test-cluster.coreservices.team.coreos.systems'
    Use 'oc start-build frontend' to start a build.
    Run 'oc status' to view your app.
STEP: wait on imagestreams used by build
Jul 9 19:14:20.197: INFO: Running scan #0
Jul 9 19:14:20.197: INFO: Checking language ruby
Jul 9 19:14:20.231: INFO: Checking tag 2.0
Jul 9 19:14:20.231: INFO: Checking tag 2.2
Jul 9 19:14:20.231: INFO: Checking tag 2.3
Jul 9 19:14:20.231: INFO: Checking tag 2.4
Jul 9 19:14:20.231: INFO: Checking tag 2.5
Jul 9 19:14:20.231: INFO: Checking tag latest
Jul 9 19:14:20.231: INFO: Checking language nodejs
Jul 9 19:14:20.263: INFO: Checking tag 6
Jul 9 19:14:20.263: INFO: Checking tag 8
Jul 9 19:14:20.263: INFO: Checking tag latest
Jul 9 19:14:20.263: INFO: Checking tag 0.10
Jul 9 19:14:20.263: INFO: Checking tag 4
Jul 9 19:14:20.263: INFO: Checking language perl
Jul 9 19:14:20.298: INFO: Checking tag 5.24
Jul 9 19:14:20.299: INFO: Checking tag latest
Jul 9 19:14:20.299: INFO: Checking tag 5.16
Jul 9 19:14:20.299: INFO: Checking tag 5.20
Jul 9 19:14:20.299: INFO: Checking language php
Jul 9 19:14:20.345: INFO: Checking tag 5.5
Jul 9 19:14:20.345: INFO: Checking tag 5.6
Jul 9 19:14:20.345: INFO: Checking tag 7.0
Jul 9 19:14:20.345: INFO: Checking tag 7.1
Jul 9 19:14:20.345: INFO: Checking tag latest
Jul 9 19:14:20.345: INFO: Checking language python
Jul 9 19:14:20.384: INFO: Checking tag 3.6
Jul 9 19:14:20.384: INFO: Checking tag latest
Jul 9 19:14:20.384: INFO: Checking tag 2.7
Jul 9 19:14:20.384: INFO: Checking tag 3.3
Jul 9 19:14:20.384: INFO: Checking tag 3.4
Jul 9 19:14:20.384: INFO: Checking tag 3.5
Jul 9 19:14:20.384: INFO: Checking language wildfly
Jul 9 19:14:20.421: INFO: Checking tag 12.0
Jul 9 19:14:20.421: INFO: Checking tag 8.1
Jul 9 19:14:20.421: INFO: Checking tag 9.0
Jul 9 19:14:20.421: INFO: Checking tag latest
Jul 9 19:14:20.421: INFO: Checking tag 10.0
Jul 9 19:14:20.421: INFO: Checking tag 10.1
Jul 9 19:14:20.421: INFO: Checking tag 11.0
Jul 9 19:14:20.421: INFO: Checking language mysql
Jul 9 19:14:20.457: INFO: Checking tag 5.5
Jul 9 19:14:20.457: INFO: Checking tag 5.6
Jul 9 19:14:20.457: INFO: Checking tag 5.7
Jul 9 19:14:20.457: INFO: Checking tag latest
Jul 9 19:14:20.457: INFO: Checking language postgresql
Jul 9 19:14:20.499: INFO: Checking tag 9.2
Jul 9 19:14:20.499: INFO: Checking tag 9.4
Jul 9 19:14:20.499: INFO: Checking tag 9.5
Jul 9 19:14:20.499: INFO: Checking tag 9.6
Jul 9 19:14:20.499: INFO: Checking tag latest
Jul 9 19:14:20.499: INFO: Checking language mongodb
Jul 9 19:14:20.536: INFO: Checking tag 2.4
Jul 9 19:14:20.536: INFO: Checking tag 2.6
Jul 9 19:14:20.536: INFO: Checking tag 3.2
Jul 9 19:14:20.536: INFO: Checking tag 3.4
Jul 9 19:14:20.536: INFO: Checking tag latest
Jul 9 19:14:20.536: INFO: Checking language jenkins
Jul 9 19:14:20.572: INFO: Checking tag 1
Jul 9 19:14:20.572: INFO: Checking tag 2
Jul 9 19:14:20.572: INFO: Checking tag latest
Jul 9 19:14:20.572: INFO: Success!
STEP: explicitly set up image stream tag, avoid timing window
Jul 9 19:14:20.572: INFO: Running 'oc tag --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-prometheus-4fqst openshift/nodejs:latest e2e-test-prometheus-4fqst/nodejs-010-centos7:latest'
Tag nodejs-010-centos7:latest set to openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653.
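The scan above walks each language imagestream in the openshift namespace and logs every tag it finds before declaring "Success!". A toy sketch of that kind of per-language tag check, with a hard-coded tag map (names chosen here for illustration) standing in for the real imagestream API:

```python
# Hypothetical tag inventory mirroring a few entries from the log above.
image_streams = {
    "nodejs": ["0.10", "4", "6", "8", "latest"],
    "jenkins": ["1", "2", "latest"],
}

def scan(streams, required):
    """Log every tag and return the languages whose streams lack a required tag."""
    missing = []
    for language, tags in streams.items():
        print(f"Checking language {language}")
        for tag in tags:
            print(f"Checking tag {tag}")
        if required not in tags:
            missing.append(language)
    return missing

print(scan(image_streams, "latest"))  # [] -> every stream carries :latest
```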
STEP: start build
Jul 9 19:14:20.995: INFO: Running 'oc start-build --config=/tmp/e2e-test-prometheus-4fqst-user.kubeconfig --namespace=e2e-test-prometheus-4fqst frontend -o=name'
Jul 9 19:14:21.289: INFO: start-build output with args [frontend -o=name]:
Error>
StdOut>
build/frontend-1
StdErr>
STEP: verifying build completed successfully
Jul 9 19:14:21.289: INFO: Waiting for frontend-1 to complete
Jul 9 19:14:52.358: INFO: Done waiting for frontend-1: util.BuildResult{BuildPath:"build/frontend-1", BuildName:"frontend-1", StartBuildStdErr:"", StartBuildStdOut:"build/frontend-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421143b00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4213a0f00)} with error:
STEP: verifying a service account token is able to query terminal build metrics from the Prometheus API
STEP: perform prometheus metric query openshift_build_total
Jul 9 19:14:52.358: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:14:53.120: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:14:54.120: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:14:54.934: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:14:55.934: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:14:56.718: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:14:57.719: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:14:58.476: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:14:59.476: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:00.218: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:01.218: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:02.077: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:03.078: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:03.776: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:04.777: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:05.641: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
[... identical query/response pairs from 19:15:06.642 through 19:15:44.699 elided: the same kubectl exec / curl poll of openshift_build_total, with the same bearer token, repeated roughly every 2 seconds, each attempt returning {"status":"success","data":{"resultType":"vector","result":[]}} with success:false ...]
STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:45.699: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:46.455: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:47.456: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:48.256: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:49.256: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:50.301: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:51.302: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:52.189: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:53.190: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:53.968: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:54.968: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:55.783: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:56.783: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:57.758: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:15:58.759: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:15:59.653: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:00.653: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:01.398: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:02.398: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:03.192: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:04.192: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:04.957: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:05.958: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:06.765: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:07.766: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:08.582: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:09.583: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:10.604: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:11.604: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:12.413: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:13.414: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:14.499: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:15.499: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:16.253: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:17.253: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:18.003: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:19.003: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:19.824: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:20.824: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:21.662: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:22.662: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:23.635: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:24.635: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:25.388: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:26.388: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:27.262: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:28.262: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:29.065: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:30.065: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:30.786: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:31.786: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:32.580: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:33.580: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:34.487: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:35.488: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:36.475: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:37.476: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:38.330: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:39.331: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:40.115: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:41.116: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:41.920: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:42.921: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:43.671: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:44.671: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:45.433: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:46.433: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:47.207: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:48.208: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:48.930: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:16:49.930: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:16:50.801: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
[... identical query repeated every ~2s with the same empty result, 19:16:51 through 19:17:29 ...]
Jul 9 19:17:30.849: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:31.683: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:32.683: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:33.579: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:34.579: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:35.358: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:36.358: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:37.144: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:38.144: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:39.002: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:40.003: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:40.873: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:41.874: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:42.711: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:43.711: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:44.468: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:45.468: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:46.294: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:47.294: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:48.086: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:49.086: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:50.019: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:51.020: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:51.842: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:52.843: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:53.662: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:54.662: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:55.517: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:56.518: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:57.578: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:17:58.578: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:17:59.375: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:18:00.375: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
[... the same `kubectl exec`/curl query of openshift_build_total (identical bearer token, identical command) is retried roughly every 2 seconds from Jul 9 19:17:59 through Jul 9 19:18:42; every attempt logs stderr: "" and the same empty result vector: {"status":"success","data":{"resultType":"vector","result":[]}} — i.e. the expected metric with labels {"phase":"Complete"} never appears ...]
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:18:42.990: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:18:43.991: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:18:44.853: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:18:45.853: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:18:46.600: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:18:47.600: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:18:48.351: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:18:49.351: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:18:50.172: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:18:51.172: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:18:52.028: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:18:53.029: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:18:53.841: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:18:54.841: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:18:55.782: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:18:56.783: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:18:57.560: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:18:58.561: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:18:59.324: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:19:00.324: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:19:01.104: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:19:02.104: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:19:03.051: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:19:04.051: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:19:04.770: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:19:05.770: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:19:06.554: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:19:07.554: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:19:08.291: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:19:09.291: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:19:10.037: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:19:11.037: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:19:53.549: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:19:54.549: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:19:55.397: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:19:56.398: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:19:57.587: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:19:58.587: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:19:59.446: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:00.447: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:01.286: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:02.286: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:03.122: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:04.123: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:04.862: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:05.863: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:06.662: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:07.662: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:08.480: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:09.480: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:10.231: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:11.231: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:12.034: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:13.035: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:13.866: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:14.866: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:15.822: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:16.823: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:17.545: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:18.546: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:20:19.349: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:20:20.349: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"'
Jul 9 19:20:21.109: INFO: stderr: ""
query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}
STEP: perform prometheus metric query openshift_build_total
[... output truncated: the same kubectl exec / curl query for openshift_build_total was retried every ~2 seconds, returning identical empty results, through Jul 9 19:21:02 ...]
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:03.605: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:04.605: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:05.455: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:06.455: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:07.289: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:08.290: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:09.286: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:10.286: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:11.084: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:12.084: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:12.878: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:13.878: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:14.667: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:15.667: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:16.444: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:17.444: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:18.205: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:19.205: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:20.401: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:21.401: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:22.261: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:23.261: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:24.373: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:25.374: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:26.466: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:27.467: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:28.319: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:29.319: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:30.264: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:31.264: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:21:32.126: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:21:33.127: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
[... 21 near-identical poll iterations elided: the same kubectl exec / curl query against "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total" was repeated every ~2s from 19:21:34 through 19:22:11, each time returning the empty result vector {"status":"success","data":{"resultType":"vector","result":[]}} ...]
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:22:12.926: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:22:13.927: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:22:14.771: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}STEP: perform prometheus metric query openshift_build_total Jul 9 19:22:15.772: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-4fqst execpod98j4h -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/query?query=openshift_build_total"' Jul 9 19:22:16.656: INFO: stderr: "" query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{"phase":"Complete"}, greaterThanEqual:true, value:0, success:false}} had results {"status":"success","data":{"resultType":"vector","result":[]}}[AfterEach] [Feature:Prometheus][Feature:Builds] Prometheus /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:22:17.895: INFO: namespace : e2e-test-prometheus-4fqst api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:Prometheus][Feature:Builds] Prometheus /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 STEP: Dumping a list of prepulled images on each node... 
Jul 9 19:22:29.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [517.111 seconds] [Feature:Prometheus][Feature:Builds] Prometheus /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:25 when installed to the cluster /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:35 should start and expose a secured proxy and verify build metrics [Suite:openshift/conformance/parallel] [It] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:36 Expected : { "openshift_build_total": { s: "query openshift_build_total for tests []prometheus.metricTest{prometheus.metricTest{labels:map[string]string{\"phase\":\"Complete\"}, greaterThanEqual:true, value:0, success:false}} had results {\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}", }, } to be empty /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:160 ------------------------------ S ------------------------------ [Feature:DeploymentConfig] deploymentconfigs when run iteratively [Conformance] should only deploy the last deployment [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:106 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client 
Jul 9 19:21:16.710: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:21:18.653: INFO: configPath is now "/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig" Jul 9 19:21:18.653: INFO: The user is now "e2e-test-cli-deployment-g9nj9-user" Jul 9 19:21:18.653: INFO: Creating project "e2e-test-cli-deployment-g9nj9" Jul 9 19:21:18.767: INFO: Waiting on permissions in project "e2e-test-cli-deployment-g9nj9" ... [JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should only deploy the last deployment [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:106 Jul 9 19:21:18.898: INFO: 00: cancelling deployment Jul 9 19:21:18.898: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 cancel dc/deployment-simple' Jul 9 19:21:19.200: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 cancel dc/deployment-simple] [] error: there have been no replication controllers for e2e-test-cli-deployment-g9nj9/deployment-simple error: there have been no replication controllers for e2e-test-cli-deployment-g9nj9/deployment-simple [] 0xc421e599b0 exit status 1 true [0xc420623620 0xc420623650 0xc420623650] [0xc420623620 0xc420623650] [0xc420623628 0xc420623648] [0x916090 0x916190] 0xc42051ecc0 }: error: there have been no replication 
controllers for e2e-test-cli-deployment-g9nj9/deployment-simple Jul 9 19:21:19.200: INFO: rollout cancel deployment failed due to known safe error: exit status 1 Jul 9 19:21:19.200: INFO: 01: triggering a new deployment with config change Jul 9 19:21:19.200: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 dc/deployment-simple A=1' Jul 9 19:21:19.574: INFO: 02: triggering a new deployment with config change Jul 9 19:21:19.574: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 dc/deployment-simple A=2' Jul 9 19:21:20.620: INFO: 03: cancelling deployment Jul 9 19:21:20.620: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 cancel dc/deployment-simple' Jul 9 19:21:20.918: INFO: 04: waiting for current deployment to start running Jul 9 19:21:25.499: INFO: 05: cancelling deployment Jul 9 19:21:25.499: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 cancel dc/deployment-simple' Jul 9 19:21:25.930: INFO: 06: triggering a new deployment with config change Jul 9 19:21:25.930: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 dc/deployment-simple A=6' Jul 9 19:21:26.362: INFO: 07: cancelling deployment Jul 9 19:21:26.362: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 cancel dc/deployment-simple' Jul 9 19:21:26.719: INFO: 08: triggering a new deployment with config change Jul 9 19:21:26.719: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 dc/deployment-simple A=8' Jul 9 19:21:27.054: INFO: 09: cancelling deployment Jul 9 
19:21:27.054: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 cancel dc/deployment-simple' Jul 9 19:21:27.669: INFO: 10: waiting for current deployment to start running Jul 9 19:21:34.527: INFO: 11: cancelling deployment Jul 9 19:21:34.527: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 cancel dc/deployment-simple' Jul 9 19:21:35.013: INFO: 12: cancelling deployment Jul 9 19:21:35.013: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 cancel dc/deployment-simple' Jul 9 19:21:35.334: INFO: 13: waiting for current deployment to start running Jul 9 19:21:35.447: INFO: 14: triggering a new deployment with config change Jul 9 19:21:35.447: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 dc/deployment-simple A=14' Jul 9 19:21:35.762: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-g9nj9-user.kubeconfig --namespace=e2e-test-cli-deployment-g9nj9 dc/deployment-simple A=15' STEP: verifying all but terminal deployment is marked complete Jul 9 19:21:51.844: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-4) is complete. 
[AfterEach] when run iteratively [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:102 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:21:53.924: INFO: namespace : e2e-test-cli-deployment-g9nj9 api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:22:33.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:77.293 seconds] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37 when run iteratively [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:100 should only deploy the last deployment [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:106 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Docker Containers /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:22:30.051: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:22:32.469: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-containers-6pblr STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test override arguments Jul 9 19:22:33.382: INFO: Waiting up to 5m0s for pod "client-containers-1a654b2d-83e8-11e8-881a-28d244b00276" in namespace "e2e-tests-containers-6pblr" to be "success or failure" Jul 9 19:22:33.438: INFO: Pod "client-containers-1a654b2d-83e8-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 55.543788ms Jul 9 19:22:35.484: INFO: Pod "client-containers-1a654b2d-83e8-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102376645s Jul 9 19:22:37.530: INFO: Pod "client-containers-1a654b2d-83e8-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.148141897s STEP: Saw pod success Jul 9 19:22:37.530: INFO: Pod "client-containers-1a654b2d-83e8-11e8-881a-28d244b00276" satisfied condition "success or failure" Jul 9 19:22:37.574: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod client-containers-1a654b2d-83e8-11e8-881a-28d244b00276 container test-container: STEP: delete the pod Jul 9 19:22:37.731: INFO: Waiting for pod client-containers-1a654b2d-83e8-11e8-881a-28d244b00276 to disappear Jul 9 19:22:37.771: INFO: Pod client-containers-1a654b2d-83e8-11e8-881a-28d244b00276 no longer exists [AfterEach] [k8s.io] Docker Containers /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:22:37.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-6pblr" for this suite. Jul 9 19:22:43.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:22:47.792: INFO: namespace: e2e-tests-containers-6pblr, resource: bindings, ignored listing per whitelist Jul 9 19:22:48.649: INFO: namespace e2e-tests-containers-6pblr deletion completed in 10.799881453s • [SLOW TEST:18.598 seconds] [k8s.io] Docker Containers /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669 should be able to override the image's default arguments (docker cmd) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [sig-storage] Projected should update annotations on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:22:18.673: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:22:20.558: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-g8dnw STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858 [It] should update annotations on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating the pod Jul 9 19:22:28.373: INFO: Successfully updated pod "annotationupdate1346fc4d-83e8-11e8-8fe2-28d244b00276" [AfterEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:22:30.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-g8dnw" for this suite. 
Jul 9 19:22:52.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:22:55.049: INFO: namespace: e2e-tests-projected-g8dnw, resource: bindings, ignored listing per whitelist Jul 9 19:22:56.448: INFO: namespace e2e-tests-projected-g8dnw deletion completed in 25.95855248s • [SLOW TEST:37.775 seconds] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34 should update annotations on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [k8s.io] Pods should get a host IP [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Pods /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:22:48.651: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:22:50.682: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-cn6vz STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127 [It] should get a host IP [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: creating pod Jul 9 19:22:53.809: INFO: Pod pod-hostip-253ea53f-83e8-11e8-881a-28d244b00276 has hostIP: 10.0.130.54 [AfterEach] [k8s.io] Pods /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:22:53.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-cn6vz" for this suite. Jul 9 19:23:15.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:23:19.541: INFO: namespace: e2e-tests-pods-cn6vz, resource: bindings, ignored listing per whitelist Jul 9 19:23:20.908: INFO: namespace e2e-tests-pods-cn6vz deletion completed in 27.052836893s • [SLOW TEST:32.257 seconds] [k8s.io] Pods /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669 should get a host IP [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [Feature:Builds][pruning] prune builds based on settings in the buildconfig should prune canceled builds based on the failedBuildsHistoryLimit setting [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:153 
[BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:21:50.911: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:21:52.597: INFO: configPath is now "/tmp/e2e-test-build-pruning-6gd7w-user.kubeconfig" Jul 9 19:21:52.597: INFO: The user is now "e2e-test-build-pruning-6gd7w-user" Jul 9 19:21:52.597: INFO: Creating project "e2e-test-build-pruning-6gd7w" Jul 9 19:21:52.732: INFO: Waiting on permissions in project "e2e-test-build-pruning-6gd7w" ... 
[BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:37 Jul 9 19:21:52.826: INFO: docker info output: Containers: 4 Running: 0 Paused: 0 Stopped: 4 Images: 4 Server Version: 1.13.1 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 20 Dirperm1 Supported: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1) runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f) init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574) Security Options: apparmor seccomp Profile: default Kernel Version: 4.4.0-128-generic Operating System: Ubuntu 16.04.4 LTS OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.495 GiB Name: yifan-coreos ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Username: yifan Registry: https://index.docker.io/v1/ Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false [JustBeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:41 STEP: waiting for builder service account STEP: waiting for openshift namespace imagestreams Jul 9 19:21:52.964: INFO: Running scan #0 Jul 9 19:21:52.964: INFO: Checking language ruby Jul 9 19:21:53.003: INFO: Checking tag 2.4 Jul 9 19:21:53.003: INFO: Checking tag 2.5 Jul 9 19:21:53.003: INFO: Checking tag latest Jul 9 19:21:53.003: INFO: Checking tag 2.0 Jul 9 19:21:53.003: INFO: Checking tag 2.2 Jul 9 19:21:53.003: INFO: Checking tag 2.3 Jul 9 19:21:53.003: INFO: Checking language nodejs Jul 9 19:21:53.069: INFO: Checking tag 8 Jul 9 
19:21:53.069: INFO: Checking tag latest Jul 9 19:21:53.069: INFO: Checking tag 0.10 Jul 9 19:21:53.069: INFO: Checking tag 4 Jul 9 19:21:53.069: INFO: Checking tag 6 Jul 9 19:21:53.069: INFO: Checking language perl Jul 9 19:21:53.150: INFO: Checking tag 5.20 Jul 9 19:21:53.150: INFO: Checking tag 5.24 Jul 9 19:21:53.150: INFO: Checking tag latest Jul 9 19:21:53.150: INFO: Checking tag 5.16 Jul 9 19:21:53.150: INFO: Checking language php Jul 9 19:21:53.192: INFO: Checking tag 7.1 Jul 9 19:21:53.192: INFO: Checking tag latest Jul 9 19:21:53.192: INFO: Checking tag 5.5 Jul 9 19:21:53.192: INFO: Checking tag 5.6 Jul 9 19:21:53.192: INFO: Checking tag 7.0 Jul 9 19:21:53.192: INFO: Checking language python Jul 9 19:21:53.240: INFO: Checking tag 3.3 Jul 9 19:21:53.240: INFO: Checking tag 3.4 Jul 9 19:21:53.240: INFO: Checking tag 3.5 Jul 9 19:21:53.240: INFO: Checking tag 3.6 Jul 9 19:21:53.240: INFO: Checking tag latest Jul 9 19:21:53.240: INFO: Checking tag 2.7 Jul 9 19:21:53.240: INFO: Checking language wildfly Jul 9 19:21:53.282: INFO: Checking tag latest Jul 9 19:21:53.282: INFO: Checking tag 10.0 Jul 9 19:21:53.282: INFO: Checking tag 10.1 Jul 9 19:21:53.282: INFO: Checking tag 11.0 Jul 9 19:21:53.282: INFO: Checking tag 12.0 Jul 9 19:21:53.282: INFO: Checking tag 8.1 Jul 9 19:21:53.282: INFO: Checking tag 9.0 Jul 9 19:21:53.282: INFO: Checking language mysql Jul 9 19:21:53.325: INFO: Checking tag 5.5 Jul 9 19:21:53.325: INFO: Checking tag 5.6 Jul 9 19:21:53.325: INFO: Checking tag 5.7 Jul 9 19:21:53.325: INFO: Checking tag latest Jul 9 19:21:53.325: INFO: Checking language postgresql Jul 9 19:21:53.441: INFO: Checking tag 9.4 Jul 9 19:21:53.441: INFO: Checking tag 9.5 Jul 9 19:21:53.441: INFO: Checking tag 9.6 Jul 9 19:21:53.441: INFO: Checking tag latest Jul 9 19:21:53.441: INFO: Checking tag 9.2 Jul 9 19:21:53.441: INFO: Checking language mongodb Jul 9 19:21:53.480: INFO: Checking tag 2.4 Jul 9 19:21:53.480: INFO: Checking tag 2.6 Jul 9 19:21:53.480: INFO: 
Checking tag 3.2 Jul 9 19:21:53.480: INFO: Checking tag 3.4 Jul 9 19:21:53.480: INFO: Checking tag latest Jul 9 19:21:53.480: INFO: Checking language jenkins Jul 9 19:21:53.520: INFO: Checking tag 1 Jul 9 19:21:53.520: INFO: Checking tag 2 Jul 9 19:21:53.520: INFO: Checking tag latest Jul 9 19:21:53.520: INFO: Success! STEP: creating test image stream Jul 9 19:21:53.520: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-6gd7w-user.kubeconfig --namespace=e2e-test-build-pruning-6gd7w -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-pruning/imagestream.yaml' imagestream.image.openshift.io "myphp" created [It] should prune canceled builds based on the failedBuildsHistoryLimit setting [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:153 STEP: creating test successful build config Jul 9 19:21:53.807: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-6gd7w-user.kubeconfig --namespace=e2e-test-build-pruning-6gd7w -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-pruning/failed-build-config.yaml' buildconfig.build.openshift.io "myphp" created STEP: starting and canceling three test builds Jul 9 19:21:54.110: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-6gd7w-user.kubeconfig --namespace=e2e-test-build-pruning-6gd7w myphp' Jul 9 19:21:54.364: INFO: start-build output with args [myphp]: Error> StdOut> build "myphp-1" started StdErr> Jul 9 19:21:54.364: INFO: Running 'oc cancel-build --config=/tmp/e2e-test-build-pruning-6gd7w-user.kubeconfig --namespace=e2e-test-build-pruning-6gd7w myphp-1' build "myphp-1" cancelled Jul 9 19:21:55.749: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-6gd7w-user.kubeconfig --namespace=e2e-test-build-pruning-6gd7w myphp' Jul 9 19:21:56.071: INFO: start-build output with args [myphp]: Error> StdOut> build 
"myphp-2" started StdErr>
Jul 9 19:21:56.071: INFO: Running 'oc cancel-build --config=/tmp/e2e-test-build-pruning-6gd7w-user.kubeconfig --namespace=e2e-test-build-pruning-6gd7w myphp-2'
Jul 9 19:22:27.661: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc cancel-build --config=/tmp/e2e-test-build-pruning-6gd7w-user.kubeconfig --namespace=e2e-test-build-pruning-6gd7w myphp-2] [] error: build e2e-test-build-pruning-6gd7w/myphp-2 failed to cancel: timed out waiting for the condition error: failure during the build cancellation error: build e2e-test-build-pruning-6gd7w/myphp-2 failed to cancel: timed out waiting for the condition error: failure during the build cancellation [] 0xc421ae2a50 exit status 1 true [0xc42175c050 0xc42175c078 0xc42175c078] [0xc42175c050 0xc42175c078] [0xc42175c058 0xc42175c070] [0x916090 0x916190] 0xc42131c720 }:
error: build e2e-test-build-pruning-6gd7w/myphp-2 failed to cancel: timed out waiting for the condition
error: failure during the build cancellation
error: build e2e-test-build-pruning-6gd7w/myphp-2 failed to cancel: timed out waiting for the condition
error: failure during the build cancellation
Jul 9 19:22:27.697: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-6gd7w-user.kubeconfig --namespace=e2e-test-build-pruning-6gd7w myphp'
Jul 9 19:22:28.054: INFO: start-build output with args [myphp]: Error> StdOut> build "myphp-3" started StdErr>
Jul 9 19:22:28.054: INFO: Running 'oc cancel-build --config=/tmp/e2e-test-build-pruning-6gd7w-user.kubeconfig --namespace=e2e-test-build-pruning-6gd7w myphp-3'
build "myphp-3" cancelled
STEP: waiting up to one minute for pruning to complete
timed out waiting for the condition
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:56
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:23:29.733: INFO: namespace : e2e-test-build-pruning-6gd7w api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:23:35.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:104.899 seconds]
[Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:21
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:35
should prune canceled builds based on the failedBuildsHistoryLimit setting [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:153
------------------------------
S
------------------------------
[Feature:Builds] forcePull should affect pulling builder images ForcePull test case execution s2i [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:105
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds] forcePull should affect pulling builder images
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:22:34.003: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds] forcePull should affect pulling builder images
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:22:36.126: INFO: configPath is now "/tmp/e2e-test-forcepull-cs6v7-user.kubeconfig"
Jul 9 19:22:36.126: INFO: The user is now "e2e-test-forcepull-cs6v7-user"
Jul 9 19:22:36.126: INFO: Creating project "e2e-test-forcepull-cs6v7"
Jul 9 19:22:36.320: INFO: Waiting on permissions in project "e2e-test-forcepull-cs6v7" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:68
Jul 9 19:22:36.370: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
STEP: granting
system:build-strategy-custom
Jul 9 19:22:36.370: INFO: Running 'oc create --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-forcepull-cs6v7 clusterrolebinding custombuildaccess-e2e-test-forcepull-cs6v7-user --clusterrole system:build-strategy-custom --user e2e-test-forcepull-cs6v7-user'
clusterrolebinding.rbac.authorization.k8s.io "custombuildaccess-e2e-test-forcepull-cs6v7-user" created
STEP: waiting for openshift/ruby:latest ImageStreamTag
STEP: waiting for an is importer to import a tag latest into a stream ruby
STEP: create application build configs for 3 strategies
Jul 9 19:22:36.690: INFO: Running 'oc create --config=/tmp/e2e-test-forcepull-cs6v7-user.kubeconfig --namespace=e2e-test-forcepull-cs6v7 -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/forcepull-test.json'
buildconfig.build.openshift.io "ruby-sample-build-tc" created
buildconfig.build.openshift.io "ruby-sample-build-td" created
buildconfig.build.openshift.io "ruby-sample-build-ts" created
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:99
STEP: waiting for builder service account
[It] ForcePull test case execution s2i [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:105
STEP: when s2i force pull is true
Jul 9 19:22:37.402: INFO: Running 'oc start-build --config=/tmp/e2e-test-forcepull-cs6v7-user.kubeconfig --namespace=e2e-test-forcepull-cs6v7 ruby-sample-build-ts -o=name'
Jul 9 19:22:37.733: INFO: start-build output with args [ruby-sample-build-ts -o=name]: Error> StdOut> build/ruby-sample-build-ts-1 StdErr>
Jul 9 19:22:37.734: INFO: Waiting for ruby-sample-build-ts-1 to complete
Jul 9 19:23:03.928: INFO: Done waiting for ruby-sample-build-ts-1: util.BuildResult{BuildPath:"build/ruby-sample-build-ts-1",
BuildName:"ruby-sample-build-ts-1", StartBuildStdErr:"", StartBuildStdOut:"build/ruby-sample-build-ts-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc4211a7800), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e2960)} with error:
Jul 9 19:23:03.928: INFO: Running 'oc logs --config=/tmp/e2e-test-forcepull-cs6v7-user.kubeconfig --namespace=e2e-test-forcepull-cs6v7 -f build/ruby-sample-build-ts-1 --timestamps'
found pull image line 2018-07-10T02:22:43.095574538Z I0710 02:22:43.095521 1 util.go:266] Pulling image "docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933" ...
Jul 9 19:23:04.351: INFO: Running 'oc start-build --config=/tmp/e2e-test-forcepull-cs6v7-user.kubeconfig --namespace=e2e-test-forcepull-cs6v7 ruby-sample-build-ts -o=name'
Jul 9 19:23:04.671: INFO: start-build output with args [ruby-sample-build-ts -o=name]: Error> StdOut> build/ruby-sample-build-ts-2 StdErr>
Jul 9 19:23:04.671: INFO: Waiting for ruby-sample-build-ts-2 to complete
Jul 9 19:23:30.742: INFO: Done waiting for ruby-sample-build-ts-2: util.BuildResult{BuildPath:"build/ruby-sample-build-ts-2", BuildName:"ruby-sample-build-ts-2", StartBuildStdErr:"", StartBuildStdOut:"build/ruby-sample-build-ts-2", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421823200), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e2960)} with error:
Jul 9 19:23:30.742: INFO: Running 'oc logs --config=/tmp/e2e-test-forcepull-cs6v7-user.kubeconfig --namespace=e2e-test-forcepull-cs6v7 -f build/ruby-sample-build-ts-2 --timestamps'
found pull image line 2018-07-10T02:23:09.631982718Z I0710 02:23:09.631930 1 util.go:266] Pulling image
"docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933" ...
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:87
Jul 9 19:23:31.095: INFO: Running 'oc delete --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-forcepull-cs6v7 clusterrolebinding custombuildaccess-e2e-test-forcepull-cs6v7-user'
clusterrolebinding.rbac.authorization.k8s.io "custombuildaccess-e2e-test-forcepull-cs6v7-user" deleted
[AfterEach] [Feature:Builds] forcePull should affect pulling builder images
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:23:31.491: INFO: namespace : e2e-test-forcepull-cs6v7 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds] forcePull should affect pulling builder images
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:23:37.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:63.566 seconds]
[Feature:Builds] forcePull should affect pulling builder images
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:62
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:66
ForcePull test case execution s2i [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:105
------------------------------
[Feature:Builds][Conformance] build without output image building from templates should create an
image from a S2i template without an output image reference defined [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:51
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance] build without output image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:23:20.909: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance] build without output image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:23:23.226: INFO: configPath is now "/tmp/e2e-test-build-no-outputname-rnrpr-user.kubeconfig"
Jul 9 19:23:23.226: INFO: The user is now "e2e-test-build-no-outputname-rnrpr-user"
Jul 9 19:23:23.226: INFO: Creating project "e2e-test-build-no-outputname-rnrpr"
Jul 9 19:23:23.372: INFO: Waiting on permissions in project "e2e-test-build-no-outputname-rnrpr" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:22
Jul 9 19:23:23.421: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[It] should create an image from a S2i template without an output image reference defined [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:51
Jul 9 19:23:23.421: INFO: Running 'oc create --config=/tmp/e2e-test-build-no-outputname-rnrpr-user.kubeconfig --namespace=e2e-test-build-no-outputname-rnrpr -f /tmp/fixture-testdata-dir574852015/test/extended/testdata/builds/test-s2i-no-outputname.json'
buildconfig.build.openshift.io "test-sti" created
STEP: expecting build to pass without an output image reference specified
Jul 9 19:23:23.702: INFO: Running 'oc start-build
--config=/tmp/e2e-test-build-no-outputname-rnrpr-user.kubeconfig --namespace=e2e-test-build-no-outputname-rnrpr test-sti -o=name'
Jul 9 19:23:24.013: INFO: start-build output with args [test-sti -o=name]: Error> StdOut> build/test-sti-1 StdErr>
Jul 9 19:23:24.013: INFO: Waiting for test-sti-1 to complete
Jul 9 19:23:50.099: INFO: Done waiting for test-sti-1: util.BuildResult{BuildPath:"build/test-sti-1", BuildName:"test-sti-1", StartBuildStdErr:"", StartBuildStdOut:"build/test-sti-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421869200), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004d2c0)} with error:
STEP: verifying the build test-sti-1 output
Jul 9 19:23:50.099: INFO: Running 'oc logs --config=/tmp/e2e-test-build-no-outputname-rnrpr-user.kubeconfig --namespace=e2e-test-build-no-outputname-rnrpr -f build/test-sti-1 --timestamps'
Build log:
2018-07-10T02:23:26.044366726Z I0710 02:23:26.044122 1 builder.go:82] redacted build:
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-sti-1","namespace":"e2e-test-build-no-outputname-rnrpr","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-no-outputname-rnrpr/builds/test-sti-1","uid":"389a007e-83e8-11e8-aa51-0af96768d57e","resourceVersion":"81787","creationTimestamp":"2018-07-10T02:23:24Z","labels":{"buildconfig":"test-sti","name":"test-sti","openshift.io/build-config.name":"test-sti","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test-sti","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test-sti","uid":"38641064-83e8-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/openshift/ruby-hello-world"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"centos/ruby-22-centos7"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","config":{"kind":"BuildConfig","namespace":"e2e-test-build-no-outputname-rnrpr","name":"test-sti"},"output":{}}} 2018-07-10T02:23:26.04486426Z Cloning "https://github.com/openshift/ruby-hello-world" ... 
2018-07-10T02:23:26.044941235Z I0710 02:23:26.044860 1 source.go:207] git ls-remote --heads https://github.com/openshift/ruby-hello-world
2018-07-10T02:23:26.04495094Z I0710 02:23:26.044886 1 repository.go:388] Executing git ls-remote --heads https://github.com/openshift/ruby-hello-world
2018-07-10T02:23:26.292408848Z I0710 02:23:26.292246 1 source.go:207] cf1fa898d2a78685ccde72f14b4922b474f73cd1 refs/heads/beta2
2018-07-10T02:23:26.292441286Z 2602ace61490de0513dfbd7c7de949356cf9bd17 refs/heads/beta3
2018-07-10T02:23:26.292449263Z 394e0f7c0446d65d163ecae9cf5b559ad60de6dd refs/heads/beta4
2018-07-10T02:23:26.292454918Z 11e9bbac1dcf5a06df07f5a6ab893a3cb9448011 refs/heads/blog_part1
2018-07-10T02:23:26.292460478Z 5619f11232c0a623f7da419438539335d49acfa3 refs/heads/config
2018-07-10T02:23:26.29246627Z 7ccd3242c49c3868195ca9400a539fa611111096 refs/heads/master
2018-07-10T02:23:26.292472449Z 9f70e0daf56b57d7f3cc012020df06ba7f914d0f refs/heads/revert-64-feature/fix-for-ruby-2.5-compatibility
2018-07-10T02:23:26.292478367Z ffa3f8596f3f82c0ee224f1b1d0c23102b1ad1f1 refs/heads/revert-66-feature/fix-for-ruby-2.5-compatibility-with-ci
2018-07-10T02:23:26.292484509Z d71bdd56df54d7400e1f72dc0929280e43627138 refs/heads/revert-69-gemfile
2018-07-10T02:23:26.292490163Z faccd39c6857edb7a3015cc6837fb347613f23c3 refs/heads/undo
2018-07-10T02:23:26.292495721Z I0710 02:23:26.292275 1 source.go:64] Cloning source from https://github.com/openshift/ruby-hello-world
2018-07-10T02:23:26.29256901Z I0710 02:23:26.292319 1 repository.go:388] Executing git clone --recursive --depth=1 https://github.com/openshift/ruby-hello-world /tmp/build/inputs
2018-07-10T02:23:26.688201862Z I0710 02:23:26.688082 1 repository.go:388] Executing git rev-parse --abbrev-ref HEAD
2018-07-10T02:23:26.689422175Z I0710 02:23:26.689340 1 repository.go:388] Executing git rev-parse --verify HEAD
2018-07-10T02:23:26.690534188Z I0710 02:23:26.690455 1 repository.go:388] Executing git --no-pager show -s --format=%an HEAD
2018-07-10T02:23:26.691935769Z I0710 02:23:26.691862 1 repository.go:388] Executing git --no-pager show -s --format=%ae HEAD
2018-07-10T02:23:26.693357283Z I0710 02:23:26.693276 1 repository.go:388] Executing git --no-pager show -s --format=%cn HEAD
2018-07-10T02:23:26.694773331Z I0710 02:23:26.694703 1 repository.go:388] Executing git --no-pager show -s --format=%ce HEAD
2018-07-10T02:23:26.696157951Z I0710 02:23:26.696090 1 repository.go:388] Executing git --no-pager show -s --format=%ad HEAD
2018-07-10T02:23:26.697616943Z I0710 02:23:26.697537 1 repository.go:388] Executing git --no-pager show -s --format=%<(80,trunc)%s HEAD
2018-07-10T02:23:26.698954152Z I0710 02:23:26.698874 1 repository.go:388] Executing git config --get remote.origin.url
2018-07-10T02:23:26.699990592Z Commit: 7ccd3242c49c3868195ca9400a539fa611111096 (Merge pull request #71 from bparees/gemfile2)
2018-07-10T02:23:26.700004629Z Author: Ben Parees
2018-07-10T02:23:26.700009376Z Date: Fri Feb 9 18:24:07 2018 -0500
2018-07-10T02:23:26.700040446Z I0710 02:23:26.699941 1 repository.go:388] Executing git rev-parse --abbrev-ref HEAD
2018-07-10T02:23:26.701175398Z I0710 02:23:26.701099 1 repository.go:388] Executing git rev-parse --verify HEAD
2018-07-10T02:23:26.702258422Z I0710 02:23:26.702174 1 repository.go:388] Executing git --no-pager show -s --format=%an HEAD
2018-07-10T02:23:26.703592352Z I0710 02:23:26.703503 1 repository.go:388] Executing git --no-pager show -s --format=%ae HEAD
2018-07-10T02:23:26.704942415Z I0710 02:23:26.704859 1 repository.go:388] Executing git --no-pager show -s --format=%cn HEAD
2018-07-10T02:23:26.706280177Z I0710 02:23:26.706189 1 repository.go:388] Executing git --no-pager show -s --format=%ce HEAD
2018-07-10T02:23:26.707551603Z I0710 02:23:26.707476 1 repository.go:388] Executing git --no-pager show -s --format=%ad HEAD
2018-07-10T02:23:26.708966238Z I0710 02:23:26.708883 1 repository.go:388] Executing git --no-pager show -s
--format=%<(80,trunc)%s HEAD 2018-07-10T02:23:26.710270298Z I0710 02:23:26.710188 1 repository.go:388] Executing git config --get remote.origin.url 2018-07-10T02:23:27.667152723Z I0710 02:23:27.666794 1 builder.go:82] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-sti-1","namespace":"e2e-test-build-no-outputname-rnrpr","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-no-outputname-rnrpr/builds/test-sti-1","uid":"389a007e-83e8-11e8-aa51-0af96768d57e","resourceVersion":"81787","creationTimestamp":"2018-07-10T02:23:24Z","labels":{"buildconfig":"test-sti","name":"test-sti","openshift.io/build-config.name":"test-sti","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test-sti","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test-sti","uid":"38641064-83e8-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/openshift/ruby-hello-world"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"centos/ruby-22-centos7"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","config":{"kind":"BuildConfig","namespace":"e2e-test-build-no-outputname-rnrpr","name":"test-sti"},"output":{}}} 2018-07-10T02:23:27.667451573Z I0710 02:23:27.667357 1 builder.go:289] Checking for presence of a Dockerfile 2018-07-10T02:23:28.558708032Z I0710 02:23:28.558490 1 builder.go:82] redacted build: 
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-sti-1","namespace":"e2e-test-build-no-outputname-rnrpr","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-no-outputname-rnrpr/builds/test-sti-1","uid":"389a007e-83e8-11e8-aa51-0af96768d57e","resourceVersion":"81787","creationTimestamp":"2018-07-10T02:23:24Z","labels":{"buildconfig":"test-sti","name":"test-sti","openshift.io/build-config.name":"test-sti","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test-sti","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test-sti","uid":"38641064-83e8-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/openshift/ruby-hello-world"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"centos/ruby-22-centos7"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","config":{"kind":"BuildConfig","namespace":"e2e-test-build-no-outputname-rnrpr","name":"test-sti"},"output":{}}} 2018-07-10T02:23:28.559170929Z I0710 02:23:28.559108 1 util_linux.go:96] found cgroup parent /kubepods/besteffort/pod38a3941a-83e8-11e8-84c6-0af96768d57e 2018-07-10T02:23:28.559235887Z I0710 02:23:28.559128 1 builder.go:223] Running build with cgroup limits: api.CGroupLimits{MemoryLimitBytes:92233720368547, CPUShares:0, CPUPeriod:0, CPUQuota:0, MemorySwap:92233720368547, Parent:"/kubepods/besteffort/pod38a3941a-83e8-11e8-84c6-0af96768d57e"} 2018-07-10T02:23:28.559321953Z I0710 02:23:28.559271 1 sti.go:154] Found git source info: git.SourceInfo{SourceInfo:git.SourceInfo{Ref:"master", CommitID:"7ccd3242c49c3868195ca9400a539fa611111096", Date:"Fri Feb 9 18:24:07 2018 -0500", AuthorName:"Ben Parees", 
AuthorEmail:"bparees@users.noreply.github.com", CommitterName:"GitHub", CommitterEmail:"noreply@github.com", Message:"Merge pull request #71 from bparees/gemfile2", Location:"https://github.com/openshift/ruby-hello-world", ContextDir:""}} 2018-07-10T02:23:28.559540743Z I0710 02:23:28.559485 1 sti.go:192] container type= 2018-07-10T02:23:28.559597421Z I0710 02:23:28.559538 1 builder.go:247] With force pull false, setting policies to if-not-present 2018-07-10T02:23:28.559607369Z I0710 02:23:28.559552 1 builder.go:247] The value of ALLOWED_UIDS is [1-] 2018-07-10T02:23:28.559613876Z I0710 02:23:28.559567 1 builder.go:247] The value of DROP_CAPS is [KILL,MKNOD,SETGID,SETUID] 2018-07-10T02:23:28.559645594Z I0710 02:23:28.559581 1 cfg.go:39] Locating docker auth for image centos/ruby-22-centos7 and type PULL_DOCKERCFG_PATH 2018-07-10T02:23:28.559654602Z I0710 02:23:28.559590 1 cfg.go:49] Getting docker auth in paths : [] 2018-07-10T02:23:28.559660336Z I0710 02:23:28.559617 1 config.go:131] looking for config.json at /config.json 2018-07-10T02:23:28.55969053Z I0710 02:23:28.559655 1 config.go:131] looking for config.json at /config.json 2018-07-10T02:23:28.559699674Z I0710 02:23:28.559668 1 config.go:131] looking for config.json at /root/.docker/config.json 2018-07-10T02:23:28.559773737Z I0710 02:23:28.559686 1 config.go:131] looking for config.json at /.docker/config.json 2018-07-10T02:23:28.559784016Z I0710 02:23:28.559718 1 cfg.go:39] Locating docker auth for image test-sti-1 and type PUSH_DOCKERCFG_PATH 2018-07-10T02:23:28.559790177Z I0710 02:23:28.559727 1 cfg.go:49] Getting docker auth in paths : [] 2018-07-10T02:23:28.559796014Z I0710 02:23:28.559743 1 config.go:131] looking for config.json at /config.json 2018-07-10T02:23:28.559833596Z I0710 02:23:28.559760 1 config.go:131] looking for config.json at /config.json 2018-07-10T02:23:28.559844488Z I0710 02:23:28.559772 1 config.go:131] looking for config.json at /root/.docker/config.json 
2018-07-10T02:23:28.559850413Z I0710 02:23:28.559783 1 config.go:131] looking for config.json at /.docker/config.json 2018-07-10T02:23:28.562633383Z I0710 02:23:28.562543 1 docker.go:510] Using locally available image "centos/ruby-22-centos7:latest" 2018-07-10T02:23:28.564867496Z I0710 02:23:28.564770 1 builder.go:247] Creating a new S2I builder with config: "Builder Name:\t\t\tRuby 2.2\nBuilder Image:\t\t\tcentos/ruby-22-centos7\nBuilder Image Version:\t\t\"c159276\"\nSource:\t\t\t\t/tmp/build/inputs\nOutput Image Tag:\t\ttemp.builder.openshift.io/e2e-test-build-no-outputname-rnrpr/test-sti-1:ecd51cb5\nEnvironment:\t\t\tOPENSHIFT_BUILD_NAME=test-sti-1,OPENSHIFT_BUILD_NAMESPACE=e2e-test-build-no-outputname-rnrpr,OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world,OPENSHIFT_BUILD_COMMIT=7ccd3242c49c3868195ca9400a539fa611111096,BUILD_LOGLEVEL=5\nLabels:\t\t\t\tio.openshift.build.commit.date=\"Fri Feb 9 18:24:07 2018 -0500\",io.openshift.build.commit.id=\"7ccd3242c49c3868195ca9400a539fa611111096\",io.openshift.build.commit.ref=\"master\",io.openshift.build.commit.message=\"Merge pull request #71 from bparees/gemfile2\",io.openshift.build.source-location=\"https://github.com/openshift/ruby-hello-world\",io.openshift.build.commit.author=\"Ben Parees \"\nIncremental Build:\t\tdisabled\nRemove Old Build:\t\tdisabled\nBuilder Pull Policy:\t\tif-not-present\nPrevious Image Pull Policy:\talways\nQuiet:\t\t\t\tdisabled\nLayered Build:\t\t\tdisabled\nWorkdir:\t\t\t/tmp\nDocker NetworkMode:\t\tcontainer:6ac87d961b3020fbe3d86b90d6579b829470b51709b44d8ac79a8e89ad99fbad\nDocker Endpoint:\t\tunix:///var/run/docker.sock\n" 2018-07-10T02:23:28.567199324Z I0710 02:23:28.567118 1 docker.go:510] Using locally available image "centos/ruby-22-centos7:latest" 2018-07-10T02:23:28.577754675Z I0710 02:23:28.577607 1 docker.go:510] Using locally available image "centos/ruby-22-centos7:latest" 2018-07-10T02:23:28.577769638Z I0710 02:23:28.577630 1 docker.go:741] Image 
sha256:e42d0dccf073123561d83ea8bbc9f0cc5e491cfd07130a464a416cdb99ced387 contains io.openshift.s2i.scripts-url set to "image:///usr/libexec/s2i" 2018-07-10T02:23:28.577776766Z I0710 02:23:28.577648 1 scm.go:20] DownloadForSource /tmp/build/inputs 2018-07-10T02:23:28.577782236Z I0710 02:23:28.577669 1 builder.go:247] Starting S2I build from e2e-test-build-no-outputname-rnrpr/test-sti-1 BuildConfig ... 2018-07-10T02:23:28.577787767Z I0710 02:23:28.577684 1 sti.go:198] Preparing to build temp.builder.openshift.io/e2e-test-build-no-outputname-rnrpr/test-sti-1:ecd51cb5 2018-07-10T02:23:28.578081343Z I0710 02:23:28.577988 1 download.go:30] Copying sources from "/tmp/build/inputs" to "/tmp/upload/src" 2018-07-10T02:23:28.578336019Z I0710 02:23:28.578235 1 fs.go:236] F "/tmp/build/inputs/README.md" -> "/tmp/upload/src/README.md" 2018-07-10T02:23:28.578452561Z I0710 02:23:28.578385 1 fs.go:236] F "/tmp/build/inputs/Dockerfile" -> "/tmp/upload/src/Dockerfile" 2018-07-10T02:23:28.578576451Z I0710 02:23:28.578510 1 fs.go:223] D "/tmp/build/inputs/.git" -> "/tmp/upload/src/.git" 2018-07-10T02:23:28.578803032Z I0710 02:23:28.578710 1 fs.go:223] D "/tmp/build/inputs/.git/objects" -> "/tmp/upload/src/.git/objects" 2018-07-10T02:23:28.578997818Z I0710 02:23:28.578930 1 fs.go:223] D "/tmp/build/inputs/.git/objects/bc" -> "/tmp/upload/src/.git/objects/bc" 2018-07-10T02:23:28.579276513Z I0710 02:23:28.579178 1 fs.go:236] F "/tmp/build/inputs/.git/objects/bc/1356d49e0dc5f1688c6d91dd0bfca270b1d2dc" -> "/tmp/upload/src/.git/objects/bc/1356d49e0dc5f1688c6d91dd0bfca270b1d2dc" 2018-07-10T02:23:28.579414257Z I0710 02:23:28.579332 1 fs.go:236] F "/tmp/build/inputs/.git/objects/bc/0cb8f548e62100af9f815e72b1dafe9ba1974d" -> "/tmp/upload/src/.git/objects/bc/0cb8f548e62100af9f815e72b1dafe9ba1974d" 2018-07-10T02:23:28.579581381Z I0710 02:23:28.579481 1 fs.go:223] D "/tmp/build/inputs/.git/objects/7c" -> "/tmp/upload/src/.git/objects/7c" 2018-07-10T02:23:28.579734628Z I0710 02:23:28.579660 1 
fs.go:236] F "/tmp/build/inputs/.git/objects/7c/50358fe010152557f70cddc69d751fc1e559af" -> "/tmp/upload/src/.git/objects/7c/50358fe010152557f70cddc69d751fc1e559af" 2018-07-10T02:23:28.579924618Z I0710 02:23:28.579827 1 fs.go:236] F "/tmp/build/inputs/.git/objects/7c/cd3242c49c3868195ca9400a539fa611111096" -> "/tmp/upload/src/.git/objects/7c/cd3242c49c3868195ca9400a539fa611111096" 2018-07-10T02:23:28.580001492Z I0710 02:23:28.579917 1 fs.go:223] D "/tmp/build/inputs/.git/objects/6c" -> "/tmp/upload/src/.git/objects/6c" 2018-07-10T02:23:28.580312948Z I0710 02:23:28.580163 1 fs.go:236] F "/tmp/build/inputs/.git/objects/6c/d67a5b26558948520bfd7e803dcdedce0e7f92" -> "/tmp/upload/src/.git/objects/6c/d67a5b26558948520bfd7e803dcdedce0e7f92" 2018-07-10T02:23:28.580326364Z I0710 02:23:28.580254 1 fs.go:223] D "/tmp/build/inputs/.git/objects/57" -> "/tmp/upload/src/.git/objects/57" 2018-07-10T02:23:28.580504567Z I0710 02:23:28.580430 1 fs.go:236] F "/tmp/build/inputs/.git/objects/57/d3a71d22204c36e16f951d7317dcda004af5b0" -> "/tmp/upload/src/.git/objects/57/d3a71d22204c36e16f951d7317dcda004af5b0" 2018-07-10T02:23:28.58064698Z I0710 02:23:28.580579 1 fs.go:236] F "/tmp/build/inputs/.git/objects/57/0bd16c41745891a5aabc60399d1a743c231236" -> "/tmp/upload/src/.git/objects/57/0bd16c41745891a5aabc60399d1a743c231236" 2018-07-10T02:23:28.580784187Z I0710 02:23:28.580709 1 fs.go:223] D "/tmp/build/inputs/.git/objects/f0" -> "/tmp/upload/src/.git/objects/f0" 2018-07-10T02:23:28.580993036Z I0710 02:23:28.580915 1 fs.go:236] F "/tmp/build/inputs/.git/objects/f0/3b50e4f9ec339f7a75ec5fd4f3af255e3e74ec" -> "/tmp/upload/src/.git/objects/f0/3b50e4f9ec339f7a75ec5fd4f3af255e3e74ec" 2018-07-10T02:23:28.58113137Z I0710 02:23:28.581065 1 fs.go:223] D "/tmp/build/inputs/.git/objects/e6" -> "/tmp/upload/src/.git/objects/e6" 2018-07-10T02:23:28.581429076Z I0710 02:23:28.581276 1 fs.go:236] F "/tmp/build/inputs/.git/objects/e6/2d5f04ada6459f2ccd61eb6a1f37c99077a919" -> 
"/tmp/upload/src/.git/objects/e6/2d5f04ada6459f2ccd61eb6a1f37c99077a919" 2018-07-10T02:23:28.581442802Z I0710 02:23:28.581373 1 fs.go:223] D "/tmp/build/inputs/.git/objects/2b" -> "/tmp/upload/src/.git/objects/2b" 2018-07-10T02:23:28.581621897Z I0710 02:23:28.581544 1 fs.go:236] F "/tmp/build/inputs/.git/objects/2b/b6c76c29870c9b4a9cff52cfc41f7e6bf44329" -> "/tmp/upload/src/.git/objects/2b/b6c76c29870c9b4a9cff52cfc41f7e6bf44329" 2018-07-10T02:23:28.581726235Z I0710 02:23:28.581665 1 fs.go:223] D "/tmp/build/inputs/.git/objects/6d" -> "/tmp/upload/src/.git/objects/6d" 2018-07-10T02:23:28.5819415Z I0710 02:23:28.581853 1 fs.go:236] F "/tmp/build/inputs/.git/objects/6d/98b321f0f4d9d69aee86cb71247bdf78a18613" -> "/tmp/upload/src/.git/objects/6d/98b321f0f4d9d69aee86cb71247bdf78a18613" 2018-07-10T02:23:28.582056655Z I0710 02:23:28.581987 1 fs.go:223] D "/tmp/build/inputs/.git/objects/af" -> "/tmp/upload/src/.git/objects/af" 2018-07-10T02:23:28.583365118Z I0710 02:23:28.583283 1 fs.go:236] F "/tmp/build/inputs/.git/objects/af/f947776b1769b6e667d154122d645f7b150a83" -> "/tmp/upload/src/.git/objects/af/f947776b1769b6e667d154122d645f7b150a83" 2018-07-10T02:23:28.583559243Z I0710 02:23:28.583464 1 fs.go:223] D "/tmp/build/inputs/.git/objects/pack" -> "/tmp/upload/src/.git/objects/pack" 2018-07-10T02:23:28.583683769Z I0710 02:23:28.583622 1 fs.go:223] D "/tmp/build/inputs/.git/objects/7d" -> "/tmp/upload/src/.git/objects/7d" 2018-07-10T02:23:28.583948513Z I0710 02:23:28.583849 1 fs.go:236] F "/tmp/build/inputs/.git/objects/7d/6bbc17aa73403f45f7e2b5548a8faf6795ffec" -> "/tmp/upload/src/.git/objects/7d/6bbc17aa73403f45f7e2b5548a8faf6795ffec" 2018-07-10T02:23:28.584027772Z I0710 02:23:28.583939 1 fs.go:223] D "/tmp/build/inputs/.git/objects/d5" -> "/tmp/upload/src/.git/objects/d5" 2018-07-10T02:23:28.584265435Z I0710 02:23:28.584181 1 fs.go:236] F "/tmp/build/inputs/.git/objects/d5/e7cbce6cb4fb52a415fd7d84e211199cb89735" -> 
"/tmp/upload/src/.git/objects/d5/e7cbce6cb4fb52a415fd7d84e211199cb89735" 2018-07-10T02:23:28.584353733Z I0710 02:23:28.584291 1 fs.go:223] D "/tmp/build/inputs/.git/objects/b3" -> "/tmp/upload/src/.git/objects/b3" 2018-07-10T02:23:28.584541816Z I0710 02:23:28.584474 1 fs.go:236] F "/tmp/build/inputs/.git/objects/b3/5478f5ae420bdf313be0050f405a8957b97153" -> "/tmp/upload/src/.git/objects/b3/5478f5ae420bdf313be0050f405a8957b97153" 2018-07-10T02:23:28.584671885Z I0710 02:23:28.584603 1 fs.go:223] D "/tmp/build/inputs/.git/objects/cd" -> "/tmp/upload/src/.git/objects/cd" 2018-07-10T02:23:28.584859513Z I0710 02:23:28.584786 1 fs.go:236] F "/tmp/build/inputs/.git/objects/cd/b36b569837bce573dc3a1d8a951add228feeeb" -> "/tmp/upload/src/.git/objects/cd/b36b569837bce573dc3a1d8a951add228feeeb" 2018-07-10T02:23:28.585001299Z I0710 02:23:28.584922 1 fs.go:223] D "/tmp/build/inputs/.git/objects/9d" -> "/tmp/upload/src/.git/objects/9d" 2018-07-10T02:23:28.585190932Z I0710 02:23:28.585116 1 fs.go:236] F "/tmp/build/inputs/.git/objects/9d/920bc2a425ce2ef02156fce1d79b4d4f091245" -> "/tmp/upload/src/.git/objects/9d/920bc2a425ce2ef02156fce1d79b4d4f091245" 2018-07-10T02:23:28.585390772Z I0710 02:23:28.585261 1 fs.go:223] D "/tmp/build/inputs/.git/objects/8f" -> "/tmp/upload/src/.git/objects/8f" 2018-07-10T02:23:28.585544815Z I0710 02:23:28.585455 1 fs.go:236] F "/tmp/build/inputs/.git/objects/8f/1b5fb76498255f4d2fc2308209116c562324dc" -> "/tmp/upload/src/.git/objects/8f/1b5fb76498255f4d2fc2308209116c562324dc" 2018-07-10T02:23:28.585660659Z I0710 02:23:28.585593 1 fs.go:223] D "/tmp/build/inputs/.git/objects/07" -> "/tmp/upload/src/.git/objects/07" 2018-07-10T02:23:28.585852011Z I0710 02:23:28.585778 1 fs.go:236] F "/tmp/build/inputs/.git/objects/07/ab3a68fa5e6a12d284cae2c05b7ca20b0182d8" -> "/tmp/upload/src/.git/objects/07/ab3a68fa5e6a12d284cae2c05b7ca20b0182d8" 2018-07-10T02:23:28.586026338Z I0710 02:23:28.585932 1 fs.go:223] D "/tmp/build/inputs/.git/objects/f8" -> 
"/tmp/upload/src/.git/objects/f8" 2018-07-10T02:23:28.58619186Z I0710 02:23:28.586121 1 fs.go:236] F "/tmp/build/inputs/.git/objects/f8/25659db8127d255def0af624073151662b09c3" -> "/tmp/upload/src/.git/objects/f8/25659db8127d255def0af624073151662b09c3" 2018-07-10T02:23:28.586367542Z I0710 02:23:28.586276 1 fs.go:223] D "/tmp/build/inputs/.git/objects/03" -> "/tmp/upload/src/.git/objects/03" 2018-07-10T02:23:28.586552869Z I0710 02:23:28.586478 1 fs.go:236] F "/tmp/build/inputs/.git/objects/03/ea8db67ae0f2628e70b93d0ecb7ff6cb8839cc" -> "/tmp/upload/src/.git/objects/03/ea8db67ae0f2628e70b93d0ecb7ff6cb8839cc" 2018-07-10T02:23:28.586762904Z I0710 02:23:28.586630 1 fs.go:223] D "/tmp/build/inputs/.git/objects/ca" -> "/tmp/upload/src/.git/objects/ca" 2018-07-10T02:23:28.586948959Z I0710 02:23:28.586854 1 fs.go:236] F "/tmp/build/inputs/.git/objects/ca/70bdc8c829db40a044fed3133f45cbc58f9c21" -> "/tmp/upload/src/.git/objects/ca/70bdc8c829db40a044fed3133f45cbc58f9c21" 2018-07-10T02:23:28.587086819Z I0710 02:23:28.587007 1 fs.go:223] D "/tmp/build/inputs/.git/objects/db" -> "/tmp/upload/src/.git/objects/db" 2018-07-10T02:23:28.587356142Z I0710 02:23:28.587246 1 fs.go:236] F "/tmp/build/inputs/.git/objects/db/84f883f29fb7c98b4309874a07ab0a91789f44" -> "/tmp/upload/src/.git/objects/db/84f883f29fb7c98b4309874a07ab0a91789f44" 2018-07-10T02:23:28.587415189Z I0710 02:23:28.587344 1 fs.go:223] D "/tmp/build/inputs/.git/objects/31" -> "/tmp/upload/src/.git/objects/31" 2018-07-10T02:23:28.587657549Z I0710 02:23:28.587560 1 fs.go:236] F "/tmp/build/inputs/.git/objects/31/056df85336b9e9ffafad75681f817d9f33c7dd" -> "/tmp/upload/src/.git/objects/31/056df85336b9e9ffafad75681f817d9f33c7dd" 2018-07-10T02:23:28.587686944Z I0710 02:23:28.587645 1 fs.go:223] D "/tmp/build/inputs/.git/objects/3e" -> "/tmp/upload/src/.git/objects/3e" 2018-07-10T02:23:28.587917162Z I0710 02:23:28.587829 1 fs.go:236] F "/tmp/build/inputs/.git/objects/3e/9aa8d62ebbe7f50c8e3aff4dd43d4726389f45" -> 
"/tmp/upload/src/.git/objects/3e/9aa8d62ebbe7f50c8e3aff4dd43d4726389f45" 2018-07-10T02:23:28.58797754Z I0710 02:23:28.587914 1 fs.go:223] D "/tmp/build/inputs/.git/objects/df" -> "/tmp/upload/src/.git/objects/df" 2018-07-10T02:23:28.588199292Z I0710 02:23:28.588101 1 fs.go:236] F "/tmp/build/inputs/.git/objects/df/6078b30a7a52303ee9e0ebe7cfba0c79178347" -> "/tmp/upload/src/.git/objects/df/6078b30a7a52303ee9e0ebe7cfba0c79178347" 2018-07-10T02:23:28.588228723Z I0710 02:23:28.588188 1 fs.go:223] D "/tmp/build/inputs/.git/objects/info" -> "/tmp/upload/src/.git/objects/info" 2018-07-10T02:23:28.588428702Z I0710 02:23:28.588318 1 fs.go:223] D "/tmp/build/inputs/.git/objects/0a" -> "/tmp/upload/src/.git/objects/0a" 2018-07-10T02:23:28.588595969Z I0710 02:23:28.588490 1 fs.go:236] F "/tmp/build/inputs/.git/objects/0a/5a0d2c143de3244295b7750eacdfff7b94d546" -> "/tmp/upload/src/.git/objects/0a/5a0d2c143de3244295b7750eacdfff7b94d546" 2018-07-10T02:23:28.588644345Z I0710 02:23:28.588593 1 fs.go:223] D "/tmp/build/inputs/.git/refs" -> "/tmp/upload/src/.git/refs" 2018-07-10T02:23:28.588814739Z I0710 02:23:28.588746 1 fs.go:223] D "/tmp/build/inputs/.git/refs/heads" -> "/tmp/upload/src/.git/refs/heads" 2018-07-10T02:23:28.589026665Z I0710 02:23:28.588919 1 fs.go:236] F "/tmp/build/inputs/.git/refs/heads/master" -> "/tmp/upload/src/.git/refs/heads/master" 2018-07-10T02:23:28.589082189Z I0710 02:23:28.589029 1 fs.go:223] D "/tmp/build/inputs/.git/refs/remotes" -> "/tmp/upload/src/.git/refs/remotes" 2018-07-10T02:23:28.589259786Z I0710 02:23:28.589182 1 fs.go:223] D "/tmp/build/inputs/.git/refs/remotes/origin" -> "/tmp/upload/src/.git/refs/remotes/origin" 2018-07-10T02:23:28.589450998Z I0710 02:23:28.589349 1 fs.go:236] F "/tmp/build/inputs/.git/refs/remotes/origin/HEAD" -> "/tmp/upload/src/.git/refs/remotes/origin/HEAD" 2018-07-10T02:23:28.589481137Z I0710 02:23:28.589436 1 fs.go:223] D "/tmp/build/inputs/.git/refs/tags" -> "/tmp/upload/src/.git/refs/tags" 
2018-07-10T02:23:28.589716145Z I0710 02:23:28.589614 1 fs.go:236] F "/tmp/build/inputs/.git/packed-refs" -> "/tmp/upload/src/.git/packed-refs" 2018-07-10T02:23:28.589736995Z I0710 02:23:28.589712 1 fs.go:223] D "/tmp/build/inputs/.git/logs" -> "/tmp/upload/src/.git/logs" 2018-07-10T02:23:28.589908855Z I0710 02:23:28.589838 1 fs.go:223] D "/tmp/build/inputs/.git/logs/refs" -> "/tmp/upload/src/.git/logs/refs" 2018-07-10T02:23:28.590087411Z I0710 02:23:28.589984 1 fs.go:223] D "/tmp/build/inputs/.git/logs/refs/heads" -> "/tmp/upload/src/.git/logs/refs/heads" 2018-07-10T02:23:28.590286973Z I0710 02:23:28.590166 1 fs.go:236] F "/tmp/build/inputs/.git/logs/refs/heads/master" -> "/tmp/upload/src/.git/logs/refs/heads/master" 2018-07-10T02:23:28.590299739Z I0710 02:23:28.590254 1 fs.go:223] D "/tmp/build/inputs/.git/logs/refs/remotes" -> "/tmp/upload/src/.git/logs/refs/remotes" 2018-07-10T02:23:28.590494157Z I0710 02:23:28.590377 1 fs.go:223] D "/tmp/build/inputs/.git/logs/refs/remotes/origin" -> "/tmp/upload/src/.git/logs/refs/remotes/origin" 2018-07-10T02:23:28.590661207Z I0710 02:23:28.590557 1 fs.go:236] F "/tmp/build/inputs/.git/logs/refs/remotes/origin/HEAD" -> "/tmp/upload/src/.git/logs/refs/remotes/origin/HEAD" 2018-07-10T02:23:28.590795461Z I0710 02:23:28.590687 1 fs.go:236] F "/tmp/build/inputs/.git/logs/HEAD" -> "/tmp/upload/src/.git/logs/HEAD" 2018-07-10T02:23:28.590889818Z I0710 02:23:28.590815 1 fs.go:236] F "/tmp/build/inputs/.git/config" -> "/tmp/upload/src/.git/config" 2018-07-10T02:23:28.591034826Z I0710 02:23:28.590930 1 fs.go:236] F "/tmp/build/inputs/.git/HEAD" -> "/tmp/upload/src/.git/HEAD" 2018-07-10T02:23:28.591134609Z I0710 02:23:28.591072 1 fs.go:236] F "/tmp/build/inputs/.git/description" -> "/tmp/upload/src/.git/description" 2018-07-10T02:23:28.591192056Z I0710 02:23:28.591149 1 fs.go:223] D "/tmp/build/inputs/.git/branches" -> "/tmp/upload/src/.git/branches" 2018-07-10T02:23:28.591376453Z I0710 02:23:28.591279 1 fs.go:223] D 
"/tmp/build/inputs/.git/hooks" -> "/tmp/upload/src/.git/hooks" 2018-07-10T02:23:28.591566727Z I0710 02:23:28.591452 1 fs.go:236] F "/tmp/build/inputs/.git/hooks/update.sample" -> "/tmp/upload/src/.git/hooks/update.sample" 2018-07-10T02:23:28.591631181Z I0710 02:23:28.591576 1 fs.go:236] F "/tmp/build/inputs/.git/hooks/post-update.sample" -> "/tmp/upload/src/.git/hooks/post-update.sample" 2018-07-10T02:23:28.591833909Z I0710 02:23:28.591733 1 fs.go:236] F "/tmp/build/inputs/.git/hooks/commit-msg.sample" -> "/tmp/upload/src/.git/hooks/commit-msg.sample" 2018-07-10T02:23:28.591917039Z I0710 02:23:28.591854 1 fs.go:236] F "/tmp/build/inputs/.git/hooks/pre-applypatch.sample" -> "/tmp/upload/src/.git/hooks/pre-applypatch.sample" 2018-07-10T02:23:28.59227005Z I0710 02:23:28.592155 1 fs.go:236] F "/tmp/build/inputs/.git/hooks/prepare-commit-msg.sample" -> "/tmp/upload/src/.git/hooks/prepare-commit-msg.sample" 2018-07-10T02:23:28.592334138Z I0710 02:23:28.592281 1 fs.go:236] F "/tmp/build/inputs/.git/hooks/pre-commit.sample" -> "/tmp/upload/src/.git/hooks/pre-commit.sample" 2018-07-10T02:23:28.592537954Z I0710 02:23:28.592440 1 fs.go:236] F "/tmp/build/inputs/.git/hooks/pre-rebase.sample" -> "/tmp/upload/src/.git/hooks/pre-rebase.sample" 2018-07-10T02:23:28.592703877Z I0710 02:23:28.592598 1 fs.go:236] F "/tmp/build/inputs/.git/hooks/applypatch-msg.sample" -> "/tmp/upload/src/.git/hooks/applypatch-msg.sample" 2018-07-10T02:23:28.592767897Z I0710 02:23:28.592715 1 fs.go:236] F "/tmp/build/inputs/.git/hooks/pre-push.sample" -> "/tmp/upload/src/.git/hooks/pre-push.sample" 2018-07-10T02:23:28.592987039Z I0710 02:23:28.592876 1 fs.go:236] F "/tmp/build/inputs/.git/index" -> "/tmp/upload/src/.git/index" 2018-07-10T02:23:28.592999784Z I0710 02:23:28.592952 1 fs.go:223] D "/tmp/build/inputs/.git/info" -> "/tmp/upload/src/.git/info" 2018-07-10T02:23:28.593193274Z I0710 02:23:28.593112 1 fs.go:236] F "/tmp/build/inputs/.git/info/exclude" -> "/tmp/upload/src/.git/info/exclude" 
2018-07-10T02:23:28.593296346Z I0710 02:23:28.593235 1 fs.go:236] F "/tmp/build/inputs/.git/shallow" -> "/tmp/upload/src/.git/shallow" 2018-07-10T02:23:28.593488884Z I0710 02:23:28.593382 1 fs.go:236] F "/tmp/build/inputs/.gitignore" -> "/tmp/upload/src/.gitignore" 2018-07-10T02:23:28.59351635Z I0710 02:23:28.593488 1 fs.go:236] F "/tmp/build/inputs/Rakefile" -> "/tmp/upload/src/Rakefile" 2018-07-10T02:23:28.593691044Z I0710 02:23:28.593572 1 fs.go:223] D "/tmp/build/inputs/config" -> "/tmp/upload/src/config" 2018-07-10T02:23:28.593820755Z I0710 02:23:28.593728 1 fs.go:236] F "/tmp/build/inputs/config/database.rb" -> "/tmp/upload/src/config/database.rb" 2018-07-10T02:23:28.593939597Z I0710 02:23:28.593862 1 fs.go:236] F "/tmp/build/inputs/config/database.yml" -> "/tmp/upload/src/config/database.yml" 2018-07-10T02:23:28.593988886Z I0710 02:23:28.593940 1 fs.go:223] D "/tmp/build/inputs/test" -> "/tmp/upload/src/test" 2018-07-10T02:23:28.594257002Z I0710 02:23:28.594135 1 fs.go:236] F "/tmp/build/inputs/test/sample_test.rb" -> "/tmp/upload/src/test/sample_test.rb" 2018-07-10T02:23:28.594279663Z I0710 02:23:28.594253 1 fs.go:236] F "/tmp/build/inputs/app.rb" -> "/tmp/upload/src/app.rb" 2018-07-10T02:23:28.594429458Z I0710 02:23:28.594365 1 fs.go:236] F "/tmp/build/inputs/Gemfile" -> "/tmp/upload/src/Gemfile" 2018-07-10T02:23:28.594609923Z I0710 02:23:28.594504 1 fs.go:236] F "/tmp/build/inputs/config.ru" -> "/tmp/upload/src/config.ru" 2018-07-10T02:23:28.594668873Z I0710 02:23:28.594615 1 fs.go:236] F "/tmp/build/inputs/models.rb" -> "/tmp/upload/src/models.rb" 2018-07-10T02:23:28.594882188Z I0710 02:23:28.594778 1 fs.go:236] F "/tmp/build/inputs/Gemfile.lock" -> "/tmp/upload/src/Gemfile.lock" 2018-07-10T02:23:28.594912679Z I0710 02:23:28.594868 1 fs.go:223] D "/tmp/build/inputs/.s2i" -> "/tmp/upload/src/.s2i" 2018-07-10T02:23:28.595130441Z I0710 02:23:28.595009 1 fs.go:223] D "/tmp/build/inputs/.s2i/bin" -> "/tmp/upload/src/.s2i/bin" 2018-07-10T02:23:28.595287977Z 
I0710 02:23:28.595179 1 fs.go:236] F "/tmp/build/inputs/.s2i/bin/README" -> "/tmp/upload/src/.s2i/bin/README" 2018-07-10T02:23:28.595411326Z I0710 02:23:28.595334 1 fs.go:236] F "/tmp/build/inputs/.s2i/environment" -> "/tmp/upload/src/.s2i/environment" 2018-07-10T02:23:28.595524081Z I0710 02:23:28.595462 1 fs.go:236] F "/tmp/build/inputs/.travis.yml" -> "/tmp/upload/src/.travis.yml" 2018-07-10T02:23:28.5956869Z I0710 02:23:28.595568 1 fs.go:236] F "/tmp/build/inputs/run.sh" -> "/tmp/upload/src/run.sh" 2018-07-10T02:23:28.595699562Z I0710 02:23:28.595644 1 fs.go:223] D "/tmp/build/inputs/db" -> "/tmp/upload/src/db" 2018-07-10T02:23:28.595857764Z I0710 02:23:28.595789 1 fs.go:223] D "/tmp/build/inputs/db/migrate" -> "/tmp/upload/src/db/migrate" 2018-07-10T02:23:28.596077991Z I0710 02:23:28.595958 1 fs.go:236] F "/tmp/build/inputs/db/migrate/20141102191902_create_key_pair.rb" -> "/tmp/upload/src/db/migrate/20141102191902_create_key_pair.rb" 2018-07-10T02:23:28.596109102Z I0710 02:23:28.596064 1 fs.go:223] D "/tmp/build/inputs/views" -> "/tmp/upload/src/views" 2018-07-10T02:23:28.596336387Z I0710 02:23:28.596237 1 fs.go:236] F "/tmp/build/inputs/views/main.erb" -> "/tmp/upload/src/views/main.erb" 2018-07-10T02:23:28.596365857Z I0710 02:23:28.596338 1 install.go:249] Using "assemble" installed from "image:///usr/libexec/s2i/assemble" 2018-07-10T02:23:28.596497019Z I0710 02:23:28.596398 1 install.go:249] Using "run" installed from "image:///usr/libexec/s2i/run" 2018-07-10T02:23:28.596509354Z I0710 02:23:28.596439 1 install.go:249] Using "save-artifacts" installed from "image:///usr/libexec/s2i/save-artifacts" 2018-07-10T02:23:28.596516199Z I0710 02:23:28.596469 1 ignore.go:63] .s2iignore file does not exist 2018-07-10T02:23:28.596537721Z I0710 02:23:28.596501 1 sti.go:207] Clean build will be performed 2018-07-10T02:23:28.596588632Z I0710 02:23:28.596530 1 sti.go:210] Performing source build from /tmp/build/inputs 2018-07-10T02:23:28.596648207Z I0710 02:23:28.596594 1 
sti.go:221] Running "assemble" in "temp.builder.openshift.io/e2e-test-build-no-outputname-rnrpr/test-sti-1:ecd51cb5" 2018-07-10T02:23:28.596736219Z I0710 02:23:28.596626 1 sti.go:559] Using image name centos/ruby-22-centos7 2018-07-10T02:23:28.598936859Z I0710 02:23:28.598813 1 docker.go:510] Using locally available image "centos/ruby-22-centos7:latest" 2018-07-10T02:23:28.598953504Z I0710 02:23:28.598869 1 environment.go:45] Setting 1 environment variables provided by environment file in sources 2018-07-10T02:23:28.599256103Z I0710 02:23:28.599004 1 sti.go:673] starting the source uploading ... 2018-07-10T02:23:28.599270034Z I0710 02:23:28.599103 1 tar.go:217] Adding "/tmp/upload" to tar ... 2018-07-10T02:23:28.599444646Z I0710 02:23:28.599382 1 tar.go:312] Adding to tar: /tmp/upload/scripts as scripts 2018-07-10T02:23:28.603374388Z I0710 02:23:28.603276 1 docker.go:741] Image sha256:e42d0dccf073123561d83ea8bbc9f0cc5e491cfd07130a464a416cdb99ced387 contains io.openshift.s2i.scripts-url set to "image:///usr/libexec/s2i" 2018-07-10T02:23:28.603390523Z I0710 02:23:28.603298 1 docker.go:815] Base directory for S2I scripts is '/usr/libexec/s2i'. Untarring destination is '/tmp'. 2018-07-10T02:23:28.603397267Z I0710 02:23:28.603315 1 docker.go:972] Setting "/bin/sh -c tar -C /tmp -xf - && /usr/libexec/s2i/assemble" command for container ... 
2018-07-10T02:23:28.603585228Z I0710 02:23:28.603486 1 docker.go:981] Creating container with options {Name:"s2i_centos_ruby_22_centos7_2af28100" Config:{Hostname: Domainname: User: AttachStdin:false AttachStdout:true AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:true StdinOnce:true Env:[RACK_ENV=production OPENSHIFT_BUILD_NAME=test-sti-1 OPENSHIFT_BUILD_NAMESPACE=e2e-test-build-no-outputname-rnrpr OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world OPENSHIFT_BUILD_COMMIT=7ccd3242c49c3868195ca9400a539fa611111096 BUILD_LOGLEVEL=5] Cmd:[/bin/sh -c tar -C /tmp -xf - && /usr/libexec/s2i/assemble] Healthcheck: ArgsEscaped:false Image:centos/ruby-22-centos7:latest Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[] StopSignal: StopTimeout: Shell:[]} HostConfig:&{Binds:[] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode:container:6ac87d961b3020fbe3d86b90d6579b829470b51709b44d8ac79a8e89ad99fbad PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[KILL MKNOD SETGID SETUID] DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:false PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[] UTSMode: UsernsMode: ShmSize:67108864 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:92233720368547 NanoCPUs:0 CgroupParent:/kubepods/besteffort/pod38a3941a-83e8-11e8-84c6-0af96768d57e BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DiskQuota:0 KernelMemory:0 MemoryReservation:0 MemorySwap:92233720368547 MemorySwappiness: OomKillDisable: PidsLimit:0 Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 
IOMaximumBandwidth:0} Mounts:[] Init:}} ... 2018-07-10T02:23:28.639527823Z I0710 02:23:28.639453 1 docker.go:1013] Attaching to container "7044d59fddc02bfd52dfa812204c29a2620086c0cdaf04ead96f8753d8d88fa9" ... 2018-07-10T02:23:28.642632019Z I0710 02:23:28.639963 1 docker.go:1024] Starting container "7044d59fddc02bfd52dfa812204c29a2620086c0cdaf04ead96f8753d8d88fa9" ... 2018-07-10T02:23:28.767361437Z I0710 02:23:28.766825 1 tar.go:312] Adding to tar: /tmp/upload/src as src 2018-07-10T02:23:28.767381549Z I0710 02:23:28.766989 1 tar.go:312] Adding to tar: /tmp/upload/src/.git as src/.git 2018-07-10T02:23:28.767389077Z I0710 02:23:28.767101 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/HEAD as src/.git/HEAD 2018-07-10T02:23:28.767394886Z I0710 02:23:28.767225 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/branches as src/.git/branches 2018-07-10T02:23:28.767401387Z I0710 02:23:28.767309 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/config as src/.git/config 2018-07-10T02:23:28.767839204Z I0710 02:23:28.767425 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/description as src/.git/description 2018-07-10T02:23:28.767852816Z I0710 02:23:28.767556 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/hooks as src/.git/hooks 2018-07-10T02:23:28.76785992Z I0710 02:23:28.767637 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/hooks/applypatch-msg.sample as src/.git/hooks/applypatch-msg.sample 2018-07-10T02:23:28.767866051Z I0710 02:23:28.767730 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/hooks/commit-msg.sample as src/.git/hooks/commit-msg.sample 2018-07-10T02:23:28.767891235Z I0710 02:23:28.767822 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/hooks/post-update.sample as src/.git/hooks/post-update.sample 2018-07-10T02:23:28.770102251Z I0710 02:23:28.768122 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/hooks/pre-applypatch.sample as src/.git/hooks/pre-applypatch.sample 2018-07-10T02:23:28.770118008Z I0710 02:23:28.768327 1 tar.go:312] Adding to tar: 
/tmp/upload/src/.git/hooks/pre-commit.sample as src/.git/hooks/pre-commit.sample 2018-07-10T02:23:28.770128234Z I0710 02:23:28.768459 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/hooks/pre-push.sample as src/.git/hooks/pre-push.sample 2018-07-10T02:23:28.770135071Z I0710 02:23:28.768562 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/hooks/pre-rebase.sample as src/.git/hooks/pre-rebase.sample 2018-07-10T02:23:28.770149038Z I0710 02:23:28.768720 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/hooks/prepare-commit-msg.sample as src/.git/hooks/prepare-commit-msg.sample 2018-07-10T02:23:28.770155945Z I0710 02:23:28.768852 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/hooks/update.sample as src/.git/hooks/update.sample 2018-07-10T02:23:28.770161948Z I0710 02:23:28.768987 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/index as src/.git/index 2018-07-10T02:23:28.770167626Z I0710 02:23:28.769151 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/info as src/.git/info 2018-07-10T02:23:28.770173149Z I0710 02:23:28.769253 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/info/exclude as src/.git/info/exclude 2018-07-10T02:23:28.770178608Z I0710 02:23:28.769393 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/logs as src/.git/logs 2018-07-10T02:23:28.770184327Z I0710 02:23:28.769511 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/logs/HEAD as src/.git/logs/HEAD 2018-07-10T02:23:28.770190484Z I0710 02:23:28.769677 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/logs/refs as src/.git/logs/refs 2018-07-10T02:23:28.770196225Z I0710 02:23:28.769801 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/logs/refs/heads as src/.git/logs/refs/heads 2018-07-10T02:23:28.770201986Z I0710 02:23:28.769908 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/logs/refs/heads/master as src/.git/logs/refs/heads/master 2018-07-10T02:23:28.770969163Z I0710 02:23:28.770058 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/logs/refs/remotes as src/.git/logs/refs/remotes 
2018-07-10T02:23:28.7763259Z I0710 02:23:28.776242 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/logs/refs/remotes/origin as src/.git/logs/refs/remotes/origin 2018-07-10T02:23:28.776710772Z I0710 02:23:28.776660 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/logs/refs/remotes/origin/HEAD as src/.git/logs/refs/remotes/origin/HEAD 2018-07-10T02:23:28.777227389Z I0710 02:23:28.777168 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects as src/.git/objects 2018-07-10T02:23:28.777663645Z I0710 02:23:28.777619 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/03 as src/.git/objects/03 2018-07-10T02:23:28.777912869Z I0710 02:23:28.777842 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/03/ea8db67ae0f2628e70b93d0ecb7ff6cb8839cc as src/.git/objects/03/ea8db67ae0f2628e70b93d0ecb7ff6cb8839cc 2018-07-10T02:23:28.778966098Z I0710 02:23:28.778893 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/07 as src/.git/objects/07 2018-07-10T02:23:28.780372324Z I0710 02:23:28.780318 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/07/ab3a68fa5e6a12d284cae2c05b7ca20b0182d8 as src/.git/objects/07/ab3a68fa5e6a12d284cae2c05b7ca20b0182d8 2018-07-10T02:23:28.780725063Z I0710 02:23:28.780680 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/0a as src/.git/objects/0a 2018-07-10T02:23:28.780959676Z I0710 02:23:28.780917 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/0a/5a0d2c143de3244295b7750eacdfff7b94d546 as src/.git/objects/0a/5a0d2c143de3244295b7750eacdfff7b94d546 2018-07-10T02:23:28.781730812Z I0710 02:23:28.781665 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/2b as src/.git/objects/2b 2018-07-10T02:23:28.782071068Z I0710 02:23:28.781998 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/2b/b6c76c29870c9b4a9cff52cfc41f7e6bf44329 as src/.git/objects/2b/b6c76c29870c9b4a9cff52cfc41f7e6bf44329 2018-07-10T02:23:28.782451292Z I0710 02:23:28.782395 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/31 as 
src/.git/objects/31 2018-07-10T02:23:28.782774709Z I0710 02:23:28.782722 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/31/056df85336b9e9ffafad75681f817d9f33c7dd as src/.git/objects/31/056df85336b9e9ffafad75681f817d9f33c7dd 2018-07-10T02:23:28.783207867Z I0710 02:23:28.783146 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/3e as src/.git/objects/3e 2018-07-10T02:23:28.783558224Z I0710 02:23:28.783506 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/3e/9aa8d62ebbe7f50c8e3aff4dd43d4726389f45 as src/.git/objects/3e/9aa8d62ebbe7f50c8e3aff4dd43d4726389f45 2018-07-10T02:23:28.783935816Z I0710 02:23:28.783882 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/57 as src/.git/objects/57 2018-07-10T02:23:28.784309957Z I0710 02:23:28.784257 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/57/0bd16c41745891a5aabc60399d1a743c231236 as src/.git/objects/57/0bd16c41745891a5aabc60399d1a743c231236 2018-07-10T02:23:28.784675843Z I0710 02:23:28.784627 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/57/d3a71d22204c36e16f951d7317dcda004af5b0 as src/.git/objects/57/d3a71d22204c36e16f951d7317dcda004af5b0 2018-07-10T02:23:28.785175186Z I0710 02:23:28.785112 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/6c as src/.git/objects/6c 2018-07-10T02:23:28.785525442Z I0710 02:23:28.785475 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/6c/d67a5b26558948520bfd7e803dcdedce0e7f92 as src/.git/objects/6c/d67a5b26558948520bfd7e803dcdedce0e7f92 2018-07-10T02:23:28.785906654Z I0710 02:23:28.785855 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/6d as src/.git/objects/6d 2018-07-10T02:23:28.786288957Z I0710 02:23:28.786231 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/6d/98b321f0f4d9d69aee86cb71247bdf78a18613 as src/.git/objects/6d/98b321f0f4d9d69aee86cb71247bdf78a18613 2018-07-10T02:23:28.78763058Z I0710 02:23:28.787563 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/7c as src/.git/objects/7c 
2018-07-10T02:23:28.78798896Z I0710 02:23:28.787938 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/7c/50358fe010152557f70cddc69d751fc1e559af as src/.git/objects/7c/50358fe010152557f70cddc69d751fc1e559af 2018-07-10T02:23:28.788411961Z I0710 02:23:28.788351 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/7c/cd3242c49c3868195ca9400a539fa611111096 as src/.git/objects/7c/cd3242c49c3868195ca9400a539fa611111096 2018-07-10T02:23:28.788900998Z I0710 02:23:28.788856 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/7d as src/.git/objects/7d 2018-07-10T02:23:28.789410992Z I0710 02:23:28.789315 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/7d/6bbc17aa73403f45f7e2b5548a8faf6795ffec as src/.git/objects/7d/6bbc17aa73403f45f7e2b5548a8faf6795ffec 2018-07-10T02:23:28.78967323Z I0710 02:23:28.789617 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/8f as src/.git/objects/8f 2018-07-10T02:23:28.789932904Z I0710 02:23:28.789833 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/8f/1b5fb76498255f4d2fc2308209116c562324dc as src/.git/objects/8f/1b5fb76498255f4d2fc2308209116c562324dc 2018-07-10T02:23:28.790331614Z I0710 02:23:28.790259 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/9d as src/.git/objects/9d 2018-07-10T02:23:28.790612717Z I0710 02:23:28.790526 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/9d/920bc2a425ce2ef02156fce1d79b4d4f091245 as src/.git/objects/9d/920bc2a425ce2ef02156fce1d79b4d4f091245 2018-07-10T02:23:28.790784703Z I0710 02:23:28.790732 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/af as src/.git/objects/af 2018-07-10T02:23:28.790927046Z I0710 02:23:28.790876 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/af/f947776b1769b6e667d154122d645f7b150a83 as src/.git/objects/af/f947776b1769b6e667d154122d645f7b150a83 2018-07-10T02:23:28.79138465Z I0710 02:23:28.791230 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/b3 as src/.git/objects/b3 
2018-07-10T02:23:28.791517571Z I0710 02:23:28.791454 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/b3/5478f5ae420bdf313be0050f405a8957b97153 as src/.git/objects/b3/5478f5ae420bdf313be0050f405a8957b97153 2018-07-10T02:23:28.79188411Z I0710 02:23:28.791820 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/bc as src/.git/objects/bc 2018-07-10T02:23:28.792219885Z I0710 02:23:28.792168 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/bc/0cb8f548e62100af9f815e72b1dafe9ba1974d as src/.git/objects/bc/0cb8f548e62100af9f815e72b1dafe9ba1974d 2018-07-10T02:23:28.792559545Z I0710 02:23:28.792490 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/bc/1356d49e0dc5f1688c6d91dd0bfca270b1d2dc as src/.git/objects/bc/1356d49e0dc5f1688c6d91dd0bfca270b1d2dc 2018-07-10T02:23:28.793201149Z I0710 02:23:28.793120 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/ca as src/.git/objects/ca 2018-07-10T02:23:28.793365671Z I0710 02:23:28.793299 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/ca/70bdc8c829db40a044fed3133f45cbc58f9c21 as src/.git/objects/ca/70bdc8c829db40a044fed3133f45cbc58f9c21 2018-07-10T02:23:28.793598292Z I0710 02:23:28.793500 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/cd as src/.git/objects/cd 2018-07-10T02:23:28.793960092Z I0710 02:23:28.793868 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/cd/b36b569837bce573dc3a1d8a951add228feeeb as src/.git/objects/cd/b36b569837bce573dc3a1d8a951add228feeeb 2018-07-10T02:23:28.794304385Z I0710 02:23:28.794240 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/d5 as src/.git/objects/d5 2018-07-10T02:23:28.794577222Z I0710 02:23:28.794514 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/d5/e7cbce6cb4fb52a415fd7d84e211199cb89735 as src/.git/objects/d5/e7cbce6cb4fb52a415fd7d84e211199cb89735 2018-07-10T02:23:28.794954646Z I0710 02:23:28.794819 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/db as src/.git/objects/db 
2018-07-10T02:23:28.795182309Z I0710 02:23:28.795080 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/db/84f883f29fb7c98b4309874a07ab0a91789f44 as src/.git/objects/db/84f883f29fb7c98b4309874a07ab0a91789f44
2018-07-10T02:23:28.795441554Z I0710 02:23:28.795392 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/df as src/.git/objects/df
2018-07-10T02:23:28.7963312Z I0710 02:23:28.796218 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/df/6078b30a7a52303ee9e0ebe7cfba0c79178347 as src/.git/objects/df/6078b30a7a52303ee9e0ebe7cfba0c79178347
2018-07-10T02:23:28.796598584Z I0710 02:23:28.796527 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/e6 as src/.git/objects/e6
2018-07-10T02:23:28.796895316Z I0710 02:23:28.796808 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/e6/2d5f04ada6459f2ccd61eb6a1f37c99077a919 as src/.git/objects/e6/2d5f04ada6459f2ccd61eb6a1f37c99077a919
2018-07-10T02:23:28.797398426Z I0710 02:23:28.797260 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/f0 as src/.git/objects/f0
2018-07-10T02:23:28.797459984Z I0710 02:23:28.797422 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/f0/3b50e4f9ec339f7a75ec5fd4f3af255e3e74ec as src/.git/objects/f0/3b50e4f9ec339f7a75ec5fd4f3af255e3e74ec
2018-07-10T02:23:28.797853727Z I0710 02:23:28.797791 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/f8 as src/.git/objects/f8
2018-07-10T02:23:28.798058607Z I0710 02:23:28.797936 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/f8/25659db8127d255def0af624073151662b09c3 as src/.git/objects/f8/25659db8127d255def0af624073151662b09c3
2018-07-10T02:23:28.798408875Z I0710 02:23:28.798247 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/info as src/.git/objects/info
2018-07-10T02:23:28.798548017Z I0710 02:23:28.798477 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/objects/pack as src/.git/objects/pack
2018-07-10T02:23:28.798760027Z I0710 02:23:28.798687 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/packed-refs as src/.git/packed-refs
2018-07-10T02:23:28.799068637Z I0710 02:23:28.798981 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/refs as src/.git/refs
2018-07-10T02:23:28.799433753Z I0710 02:23:28.799335 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/refs/heads as src/.git/refs/heads
2018-07-10T02:23:28.800291661Z I0710 02:23:28.800225 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/refs/heads/master as src/.git/refs/heads/master
2018-07-10T02:23:28.800571057Z I0710 02:23:28.800502 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/refs/remotes as src/.git/refs/remotes
2018-07-10T02:23:28.800835086Z I0710 02:23:28.800750 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/refs/remotes/origin as src/.git/refs/remotes/origin
2018-07-10T02:23:28.800999373Z I0710 02:23:28.800907 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/refs/remotes/origin/HEAD as src/.git/refs/remotes/origin/HEAD
2018-07-10T02:23:28.801221434Z I0710 02:23:28.801166 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/refs/tags as src/.git/refs/tags
2018-07-10T02:23:28.801463067Z I0710 02:23:28.801369 1 tar.go:312] Adding to tar: /tmp/upload/src/.git/shallow as src/.git/shallow
2018-07-10T02:23:28.801684299Z I0710 02:23:28.801629 1 tar.go:312] Adding to tar: /tmp/upload/src/.gitignore as src/.gitignore
2018-07-10T02:23:28.802061183Z I0710 02:23:28.801973 1 tar.go:312] Adding to tar: /tmp/upload/src/.s2i as src/.s2i
2018-07-10T02:23:28.802323829Z I0710 02:23:28.802260 1 tar.go:312] Adding to tar: /tmp/upload/src/.s2i/bin as src/.s2i/bin
2018-07-10T02:23:28.802482642Z I0710 02:23:28.802422 1 tar.go:312] Adding to tar: /tmp/upload/src/.s2i/bin/README as src/.s2i/bin/README
2018-07-10T02:23:28.802743437Z I0710 02:23:28.802688 1 tar.go:312] Adding to tar: /tmp/upload/src/.s2i/environment as src/.s2i/environment
2018-07-10T02:23:28.80298795Z I0710 02:23:28.802923 1 tar.go:312] Adding to tar: /tmp/upload/src/.travis.yml as src/.travis.yml
2018-07-10T02:23:28.803410596Z I0710 02:23:28.803351 1 tar.go:312] Adding to tar: /tmp/upload/src/Dockerfile as src/Dockerfile
2018-07-10T02:23:28.804184367Z I0710 02:23:28.804114 1 tar.go:312] Adding to tar: /tmp/upload/src/Gemfile as src/Gemfile
2018-07-10T02:23:28.804306606Z I0710 02:23:28.804255 1 tar.go:312] Adding to tar: /tmp/upload/src/Gemfile.lock as src/Gemfile.lock
2018-07-10T02:23:28.804451907Z I0710 02:23:28.804417 1 tar.go:312] Adding to tar: /tmp/upload/src/README.md as src/README.md
2018-07-10T02:23:28.804630676Z I0710 02:23:28.804584 1 tar.go:312] Adding to tar: /tmp/upload/src/Rakefile as src/Rakefile
2018-07-10T02:23:28.804830967Z I0710 02:23:28.804733 1 tar.go:312] Adding to tar: /tmp/upload/src/app.rb as src/app.rb
2018-07-10T02:23:28.805113964Z I0710 02:23:28.804991 1 tar.go:312] Adding to tar: /tmp/upload/src/config as src/config
2018-07-10T02:23:28.805312299Z I0710 02:23:28.805251 1 tar.go:312] Adding to tar: /tmp/upload/src/config/database.rb as src/config/database.rb
2018-07-10T02:23:28.805559151Z I0710 02:23:28.805491 1 tar.go:312] Adding to tar: /tmp/upload/src/config/database.yml as src/config/database.yml
2018-07-10T02:23:28.805837244Z I0710 02:23:28.805773 1 tar.go:312] Adding to tar: /tmp/upload/src/config.ru as src/config.ru
2018-07-10T02:23:28.806173949Z I0710 02:23:28.806101 1 tar.go:312] Adding to tar: /tmp/upload/src/db as src/db
2018-07-10T02:23:28.806440645Z I0710 02:23:28.806346 1 tar.go:312] Adding to tar: /tmp/upload/src/db/migrate as src/db/migrate
2018-07-10T02:23:28.80659621Z I0710 02:23:28.806535 1 tar.go:312] Adding to tar: /tmp/upload/src/db/migrate/20141102191902_create_key_pair.rb as src/db/migrate/20141102191902_create_key_pair.rb
2018-07-10T02:23:28.806841314Z I0710 02:23:28.806779 1 tar.go:312] Adding to tar: /tmp/upload/src/models.rb as src/models.rb
2018-07-10T02:23:28.807192458Z I0710 02:23:28.807139 1 tar.go:312] Adding to tar: /tmp/upload/src/run.sh as src/run.sh
2018-07-10T02:23:28.807594886Z I0710 02:23:28.807520 1 tar.go:312] Adding to tar: /tmp/upload/src/test as src/test
2018-07-10T02:23:28.807782984Z I0710 02:23:28.807718 1 tar.go:312] Adding to tar: /tmp/upload/src/test/sample_test.rb as src/test/sample_test.rb
2018-07-10T02:23:28.80796374Z I0710 02:23:28.807913 1 tar.go:312] Adding to tar: /tmp/upload/src/views as src/views
2018-07-10T02:23:28.808136702Z I0710 02:23:28.808089 1 tar.go:312] Adding to tar: /tmp/upload/src/views/main.erb as src/views/main.erb
2018-07-10T02:23:28.812180249Z I0710 02:23:28.812106 1 sti.go:681] ---> Installing application source ...
2018-07-10T02:23:28.813609117Z I0710 02:23:28.813531 1 sti.go:681] ---> Building your Ruby application from source ...
2018-07-10T02:23:28.814004558Z I0710 02:23:28.813904 1 sti.go:681] ---> Running 'bundle install --retry 2 --deployment --without development:test' ...
2018-07-10T02:23:31.429093863Z I0710 02:23:31.428973 1 sti.go:681] Fetching gem metadata from https://rubygems.org/..........
2018-07-10T02:23:31.609758096Z I0710 02:23:31.609654 1 sti.go:681] Installing rake 12.3.0
2018-07-10T02:23:31.741507081Z I0710 02:23:31.741390 1 sti.go:681] Installing concurrent-ruby 1.0.5
2018-07-10T02:23:31.84834377Z I0710 02:23:31.848257 1 sti.go:681] Installing i18n 0.9.3
2018-07-10T02:23:31.937750811Z I0710 02:23:31.937658 1 sti.go:681] Installing minitest 5.11.3
2018-07-10T02:23:32.042143199Z I0710 02:23:32.042064 1 sti.go:681] Installing thread_safe 0.3.6
2018-07-10T02:23:32.220171451Z I0710 02:23:32.220063 1 sti.go:681] Installing tzinfo 1.2.5
2018-07-10T02:23:32.42891076Z I0710 02:23:32.428801 1 sti.go:681] Installing activesupport 5.1.4
2018-07-10T02:23:32.58361543Z I0710 02:23:32.583522 1 sti.go:681] Installing activemodel 5.1.4
2018-07-10T02:23:32.704710987Z I0710 02:23:32.704640 1 sti.go:681] Installing arel 8.0.0
2018-07-10T02:23:32.91909294Z I0710 02:23:32.918945 1 sti.go:681] Installing activerecord 5.1.4
2018-07-10T02:23:33.018208051Z I0710 02:23:33.018108 1 sti.go:681] Installing mustermann 1.0.1
2018-07-10T02:23:40.217473896Z I0710 02:23:40.217364 1 sti.go:681] Installing mysql2 0.4.10
2018-07-10T02:23:40.420911022Z I0710 02:23:40.420830 1 sti.go:681] Installing rack 2.0.4
2018-07-10T02:23:40.519545325Z I0710 02:23:40.519422 1 sti.go:681] Installing rack-protection 2.0.0
2018-07-10T02:23:40.623086242Z I0710 02:23:40.622978 1 sti.go:681] Installing tilt 2.0.8
2018-07-10T02:23:40.772941325Z I0710 02:23:40.772848 1 sti.go:681] Installing sinatra 2.0.0
2018-07-10T02:23:40.84038866Z I0710 02:23:40.840308 1 sti.go:681] Installing sinatra-activerecord 2.0.13
2018-07-10T02:23:40.840752755Z I0710 02:23:40.840709 1 sti.go:681] Using bundler 1.7.8
2018-07-10T02:23:40.840918434Z I0710 02:23:40.840872 1 sti.go:681] Your bundle is complete!
2018-07-10T02:23:40.841791789Z I0710 02:23:40.841748 1 sti.go:681] Gems in the groups development and test were not installed.
2018-07-10T02:23:40.841914728Z I0710 02:23:40.841875 1 sti.go:681] It was installed into ./bundle
2018-07-10T02:23:40.91923436Z I0710 02:23:40.919097 1 sti.go:681] ---> Cleaning up unused ruby gems ...
2018-07-10T02:23:44.264230916Z I0710 02:23:44.264127 1 docker.go:1055] Waiting for container "7044d59fddc02bfd52dfa812204c29a2620086c0cdaf04ead96f8753d8d88fa9" to stop ...
2018-07-10T02:23:44.320114865Z I0710 02:23:44.320001 1 docker.go:1080] Invoking PostExecute function
2018-07-10T02:23:44.320139094Z I0710 02:23:44.320029 1 postexecutorstep.go:67] Skipping step: store previous image
2018-07-10T02:23:44.320145702Z I0710 02:23:44.320039 1 postexecutorstep.go:116] Executing step: commit image
2018-07-10T02:23:44.322496118Z I0710 02:23:44.322430 1 postexecutorstep.go:521] Checking for new Labels to apply...
2018-07-10T02:23:44.322511749Z I0710 02:23:44.322454 1 postexecutorstep.go:529] Creating the download path '/tmp/metadata'
2018-07-10T02:23:44.322638335Z I0710 02:23:44.322586 1 postexecutorstep.go:463] Downloading file "/tmp/.s2i/image_metadata.json"
2018-07-10T02:23:44.396344206Z I0710 02:23:44.396234 1 postexecutorstep.go:537] unable to download and extract 'image_metadata.json' ... continuing
2018-07-10T02:23:44.400614634Z I0710 02:23:44.400523 1 docker.go:1114] Committing container with dockerOpts: {Reference:temp.builder.openshift.io/e2e-test-build-no-outputname-rnrpr/test-sti-1:ecd51cb5 Comment: Author: Changes:[] Pause:false Config:0xc4204ed180}, config: {Hostname: Domainname: User:1001 AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[RACK_ENV=production OPENSHIFT_BUILD_NAME=test-sti-1 OPENSHIFT_BUILD_NAMESPACE=e2e-test-build-no-outputname-rnrpr OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world OPENSHIFT_BUILD_COMMIT=7ccd3242c49c3868195ca9400a539fa611111096 BUILD_LOGLEVEL=5] Cmd:[/usr/libexec/s2i/run] Healthcheck: ArgsEscaped:false Image: Volumes:map[] WorkingDir: Entrypoint:[container-entrypoint] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[io.openshift.build.commit.id:7ccd3242c49c3868195ca9400a539fa611111096 io.openshift.expose-services:8080:http io.k8s.display-name:temp.builder.openshift.io/e2e-test-build-no-outputname-rnrpr/test-sti-1:ecd51cb5 release:1 io.s2i.scripts-url:image:///usr/libexec/s2i name:centos/ruby-22-centos7 com.redhat.component:rh-ruby22-docker io.openshift.build.image:centos/ruby-22-centos7 io.openshift.build.commit.message:Merge pull request #71 from bparees/gemfile2 usage:s2i build https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.4/test/puma-test-app/ centos/ruby-22-centos7 ruby-sample-app io.openshift.build.source-location:https://github.com/openshift/ruby-hello-world io.openshift.build.commit.ref:master io.openshift.build.commit.author:Ben Parees io.openshift.tags:builder,ruby,ruby22 maintainer:SoftwareCollections.org version:2.2 org.label-schema.schema-version:= 1.0 org.label-schema.name=CentOS Base Image org.label-schema.vendor=CentOS org.label-schema.license=GPLv2 org.label-schema.build-date=20180402 io.k8s.description:Ruby 2.2 available as container is a base platform for building and running various Ruby 2.2 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). It is simple, straight-forward, and extensible. description:Ruby 2.2 available as container is a base platform for building and running various Ruby 2.2 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). It is simple, straight-forward, and extensible. io.openshift.s2i.scripts-url:image:///usr/libexec/s2i io.openshift.build.commit.date:Fri Feb 9 18:24:07 2018 -0500 io.openshift.builder-version:"c159276" summary:Platform for building and running Ruby 2.2 applications] StopSignal: StopTimeout: Shell:[]}
2018-07-10T02:23:45.052241679Z I0710 02:23:45.051950 1 postexecutorstep.go:391] Executing step: report success
2018-07-10T02:23:45.052289616Z I0710 02:23:45.051980 1 postexecutorstep.go:396] Successfully built temp.builder.openshift.io/e2e-test-build-no-outputname-rnrpr/test-sti-1:ecd51cb5
2018-07-10T02:23:45.05229786Z I0710 02:23:45.051990 1 postexecutorstep.go:92] Skipping step: remove previous image
2018-07-10T02:23:45.05230377Z I0710 02:23:45.052033 1 docker.go:991] Removing container "7044d59fddc02bfd52dfa812204c29a2620086c0cdaf04ead96f8753d8d88fa9" ...
2018-07-10T02:23:45.107883226Z I0710 02:23:45.107691 1 docker.go:1001] Removed container "7044d59fddc02bfd52dfa812204c29a2620086c0cdaf04ead96f8753d8d88fa9"
2018-07-10T02:23:45.107908319Z I0710 02:23:45.107830 1 cleanup.go:31] Temporary directory "/tmp" will be saved, not deleted
2018-07-10T02:23:45.217852637Z Build complete, no image push requested
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:26
[AfterEach] [Feature:Builds][Conformance] build without output image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:23:50.815: INFO: namespace : e2e-test-build-no-outputname-rnrpr api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance] build without output image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:23:57.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:36.226 seconds]
[Feature:Builds][Conformance] build without output image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:12
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:20
building from templates
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:33
should create an image from a S2i template without an output image reference defined [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:51
------------------------------
S
------------------------------
[Feature:DeploymentConfig] deploymentconfigs keep the deployer pod invariant valid [Conformance] should deal with cancellation of running deployment [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1240
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:22:56.449: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:22:58.409: INFO: configPath is now "/tmp/e2e-test-cli-deployment-wk7z8-user.kubeconfig"
Jul 9 19:22:58.409: INFO: The user is now "e2e-test-cli-deployment-wk7z8-user"
Jul 9 19:22:58.409: INFO: Creating project "e2e-test-cli-deployment-wk7z8"
Jul 9 19:22:58.546: INFO: Waiting on permissions in project "e2e-test-cli-deployment-wk7z8" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should deal with cancellation of running deployment [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1240
STEP: creating DC
STEP: waiting for RC to be created
STEP: waiting for deployer pod to be running
STEP: canceling the deployment
STEP: redeploying immediately by config change
[AfterEach] keep the deployer pod invariant valid [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1236
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:23:15.336: INFO: namespace : e2e-test-cli-deployment-wk7z8 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:23:57.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:60.952 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
keep the deployer pod invariant valid [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1233
should deal with cancellation of running deployment [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1240
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:23:37.570: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:23:39.489: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-c7psx
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 9 19:23:40.263: INFO: Waiting up to 5m0s for pod "pod-4242c449-83e8-11e8-992b-28d244b00276" in namespace "e2e-tests-emptydir-c7psx" to be "success or failure"
Jul 9 19:23:40.302: INFO: Pod "pod-4242c449-83e8-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 38.36652ms
Jul 9 19:23:42.349: INFO: Pod "pod-4242c449-83e8-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085890633s
Jul 9 19:23:44.445: INFO: Pod "pod-4242c449-83e8-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182106095s
Jul 9 19:23:46.482: INFO: Pod "pod-4242c449-83e8-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.219190138s
STEP: Saw pod success
Jul 9 19:23:46.482: INFO: Pod "pod-4242c449-83e8-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:23:46.528: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-4242c449-83e8-11e8-992b-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:23:46.618: INFO: Waiting for pod pod-4242c449-83e8-11e8-992b-28d244b00276 to disappear
Jul 9 19:23:46.666: INFO: Pod pod-4242c449-83e8-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:23:46.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-c7psx" for this suite.
Jul 9 19:23:52.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:23:54.552: INFO: namespace: e2e-tests-emptydir-c7psx, resource: bindings, ignored listing per whitelist
Jul 9 19:23:57.530: INFO: namespace e2e-tests-emptydir-c7psx deletion completed in 10.815915021s
• [SLOW TEST:19.960 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:23:57.532: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:23:59.608: INFO: configPath is now "/tmp/e2e-test-router-metrics-pb4z5-user.kubeconfig"
Jul 9 19:23:59.608: INFO: The user is now "e2e-test-router-metrics-pb4z5-user"
Jul 9 19:23:59.608: INFO: Creating project "e2e-test-router-metrics-pb4z5"
Jul 9 19:23:59.704: INFO: Waiting on permissions in project "e2e-test-router-metrics-pb4z5" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:23:59.876: INFO: namespace : e2e-test-router-metrics-pb4z5 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:24:05.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76
S [SKIPPING] in Spec Setup (BeforeEach) [8.457 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26
The HAProxy router [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82
should expose the profiling endpoints [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:206
no router installed on the cluster
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified new files should be created with FSGroup ownership when container is root [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:23:57.140: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:23:59.347: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-jzvtw
STEP: Waiting for a default service account to be provisioned in namespace
[It] new files should be created with FSGroup ownership when container is root [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 9 19:24:00.288: INFO: Waiting up to 5m0s for pod "pod-4e32695f-83e8-11e8-881a-28d244b00276" in namespace "e2e-tests-emptydir-jzvtw" to be "success or failure"
Jul 9 19:24:00.341: INFO: Pod "pod-4e32695f-83e8-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 52.72666ms
Jul 9 19:24:02.401: INFO: Pod "pod-4e32695f-83e8-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112683299s
Jul 9 19:24:04.441: INFO: Pod "pod-4e32695f-83e8-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152853369s
STEP: Saw pod success
Jul 9 19:24:04.441: INFO: Pod "pod-4e32695f-83e8-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:24:04.491: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-4e32695f-83e8-11e8-881a-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:24:04.613: INFO: Waiting for pod pod-4e32695f-83e8-11e8-881a-28d244b00276 to disappear
Jul 9 19:24:04.660: INFO: Pod pod-4e32695f-83e8-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:24:04.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jzvtw" for this suite.
Jul 9 19:24:10.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:24:14.948: INFO: namespace: e2e-tests-emptydir-jzvtw, resource: bindings, ignored listing per whitelist
Jul 9 19:24:16.347: INFO: namespace e2e-tests-emptydir-jzvtw deletion completed in 11.627994965s
• [SLOW TEST:19.207 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
when FSGroup is specified
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
new files should be created with FSGroup ownership when container is root [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
------------------------------
[Feature:Builds][Conformance] remove all builds when build configuration is removed oc delete buildconfig should start builds and delete the buildconfig [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/remove_buildconfig.go:41
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance] remove all builds when build configuration is removed
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:24:05.990: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance] remove all builds when build configuration is removed
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:24:08.557: INFO: configPath is now "/tmp/e2e-test-cli-remove-build-p7nww-user.kubeconfig"
Jul 9 19:24:08.557: INFO: The user is now "e2e-test-cli-remove-build-p7nww-user"
Jul 9 19:24:08.557: INFO: Creating project "e2e-test-cli-remove-build-p7nww"
Jul 9 19:24:08.728: INFO: Waiting on permissions in project "e2e-test-cli-remove-build-p7nww" ...
[BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/remove_buildconfig.go:22 Jul 9 19:24:08.787: INFO: docker info output: Containers: 4 Running: 0 Paused: 0 Stopped: 4 Images: 4 Server Version: 1.13.1 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 20 Dirperm1 Supported: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1) runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f) init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574) Security Options: apparmor seccomp Profile: default Kernel Version: 4.4.0-128-generic Operating System: Ubuntu 16.04.4 LTS OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.495 GiB Name: yifan-coreos ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Username: yifan Registry: https://index.docker.io/v1/ Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false [JustBeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/remove_buildconfig.go:26 STEP: waiting for builder service account Jul 9 19:24:08.921: INFO: Running 'oc create --config=/tmp/e2e-test-cli-remove-build-p7nww-user.kubeconfig --namespace=e2e-test-cli-remove-build-p7nww -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/builds/test-build.yaml' imagestream.image.openshift.io "origin-ruby-sample" created secret "webhooksecret" created buildconfig.build.openshift.io "sample-build" created buildconfig.build.openshift.io "sample-verbose-build" created buildconfig.build.openshift.io "sample-build-binary" 
created buildconfig.build.openshift.io "sample-build-github-archive" created buildconfig.build.openshift.io "sample-build-binary-invalidnodeselector" created buildconfig.build.openshift.io "sample-build-docker-args" created buildconfig.build.openshift.io "sample-build-docker-args-preset" created [It] should start builds and delete the buildconfig [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/remove_buildconfig.go:41 STEP: starting multiple builds Jul 9 19:24:09.681: INFO: Running 'oc start-build --config=/tmp/e2e-test-cli-remove-build-p7nww-user.kubeconfig --namespace=e2e-test-cli-remove-build-p7nww sample-build -o=name' Jul 9 19:24:10.171: INFO: start-build output with args [sample-build -o=name]: Error> StdOut> build/sample-build-1 StdErr> Jul 9 19:24:10.171: INFO: Running 'oc start-build --config=/tmp/e2e-test-cli-remove-build-p7nww-user.kubeconfig --namespace=e2e-test-cli-remove-build-p7nww sample-build -o=name' Jul 9 19:24:10.476: INFO: start-build output with args [sample-build -o=name]: Error> StdOut> build/sample-build-2 StdErr> Jul 9 19:24:10.476: INFO: Running 'oc start-build --config=/tmp/e2e-test-cli-remove-build-p7nww-user.kubeconfig --namespace=e2e-test-cli-remove-build-p7nww sample-build -o=name' Jul 9 19:24:10.821: INFO: start-build output with args [sample-build -o=name]: Error> StdOut> build/sample-build-3 StdErr> Jul 9 19:24:10.821: INFO: Running 'oc start-build --config=/tmp/e2e-test-cli-remove-build-p7nww-user.kubeconfig --namespace=e2e-test-cli-remove-build-p7nww sample-build -o=name' Jul 9 19:24:11.230: INFO: start-build output with args [sample-build -o=name]: Error> StdOut> build/sample-build-4 StdErr> STEP: deleting the buildconfig Jul 9 19:24:11.230: INFO: Running 'oc delete --config=/tmp/e2e-test-cli-remove-build-p7nww-user.kubeconfig --namespace=e2e-test-cli-remove-build-p7nww bc/sample-build' buildconfig.build.openshift.io 
"sample-build" deleted STEP: waiting for builds to clear Jul 9 19:24:14.518: INFO: Running 'oc get --config=/tmp/e2e-test-cli-remove-build-p7nww-user.kubeconfig --namespace=e2e-test-cli-remove-build-p7nww builds' [AfterEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/remove_buildconfig.go:33 [AfterEach] [Feature:Builds][Conformance] remove all builds when build configuration is removed /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:24:14.905: INFO: namespace : e2e-test-cli-remove-build-p7nww api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:Builds][Conformance] remove all builds when build configuration is removed /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:24:20.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:15.049 seconds] [Feature:Builds][Conformance] remove all builds when build configuration is removed /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/remove_buildconfig.go:14 /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/remove_buildconfig.go:21 oc delete buildconfig /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/remove_buildconfig.go:40 should start builds and delete the buildconfig [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/remove_buildconfig.go:41 ------------------------------ SSS ------------------------------ [BeforeEach] [Top Level] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:24:16.349: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:24:18.482: INFO: configPath is now "/tmp/e2e-test-router-headers-zpsnr-user.kubeconfig" Jul 9 19:24:18.482: INFO: The user is now "e2e-test-router-headers-zpsnr-user" Jul 9 19:24:18.482: INFO: Creating project "e2e-test-router-headers-zpsnr" Jul 9 19:24:18.603: INFO: Waiting on permissions in project "e2e-test-router-headers-zpsnr" ... 
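The remove-buildconfig test earlier in the log drives `oc start-build ... -o=name` four times and records each build it created from the command's output (`build/sample-build-1`, `build/sample-build-2`, ...). A minimal sketch of parsing that `-o=name` output in a shell wrapper (the helper name is hypothetical, not part of `oc`):

```shell
#!/bin/sh
# parse_build_name strips the "build/" resource prefix that
# `oc start-build -o=name` prints, leaving only the build's name.
# (Helper name is an assumption for illustration.)
parse_build_name() {
  printf '%s\n' "${1#build/}"
}

parse_build_name "build/sample-build-1"   # prints "sample-build-1"
```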
[BeforeEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:30 [AfterEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:24:18.768: INFO: namespace : e2e-test-router-headers-zpsnr api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:24:24.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [8.519 seconds] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:21 The HAProxy router [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:41 should set Forwarded headers appropriately [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:42 no router installed on the cluster /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:33 ------------------------------ [Feature:DeploymentConfig] deploymentconfigs ignores deployer and lets the config with a NewReplicationControllerCreated reason [Conformance] should let the deployment config with a NewReplicationControllerCreated reason [Suite:openshift/conformance/parallel] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1094 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:24:24.871: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:24:27.198: INFO: configPath is now "/tmp/e2e-test-cli-deployment-pphw9-user.kubeconfig" Jul 9 19:24:27.198: INFO: The user is now "e2e-test-cli-deployment-pphw9-user" Jul 9 19:24:27.198: INFO: Creating project "e2e-test-cli-deployment-pphw9" Jul 9 19:24:27.309: INFO: Waiting on permissions in project "e2e-test-cli-deployment-pphw9" ... 
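Each spec in this log provisions a throwaway user kubeconfig and project, then polls repeatedly ("Waiting on permissions...", "Waiting for namespaces to vanish", "Waiting up to 3m0s for all (but 0) nodes to be ready") until the cluster converges. That wait-and-retry pattern can be sketched generically; the function name and the one-second interval are assumptions:

```shell
#!/bin/sh
# wait_until retries a command once per second until it succeeds,
# or returns 1 once `timeout` seconds have elapsed.
wait_until() {
  timeout=$1; shift
  elapsed=0
  until "$@"; do
    elapsed=$((elapsed + 1))
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 1
  done
}

# Illustrative use (command is an example, not the framework's exact check):
# wait_until 180 sh -c '! oc get namespace e2e-test-cli-deployment-pphw9'
```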
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should let the deployment config with a NewReplicationControllerCreated reason [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1094 STEP: verifying that the deployment config is bumped to the first version STEP: verifying that the deployment config has the desired condition and reason [AfterEach] ignores deployer and lets the config with a NewReplicationControllerCreated reason [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1090 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:24:30.367: INFO: namespace : e2e-test-cli-deployment-pphw9 api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:24:36.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:11.620 seconds] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37 ignores deployer and lets the config with a 
NewReplicationControllerCreated reason [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1088 should let the deployment config with a NewReplicationControllerCreated reason [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1094 ------------------------------ [job][Conformance] openshift can execute jobs controller should create and run a job in user project [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:20 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [job][Conformance] openshift can execute jobs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:24:36.494: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [job][Conformance] openshift can execute jobs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:24:38.836: INFO: configPath is now "/tmp/e2e-test-job-controller-t8h74-user.kubeconfig" Jul 9 19:24:38.836: INFO: The user is now "e2e-test-job-controller-t8h74-user" Jul 9 19:24:38.836: INFO: Creating project "e2e-test-job-controller-t8h74" Jul 9 19:24:38.989: INFO: Waiting on permissions in project "e2e-test-job-controller-t8h74" ... 
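The deploymentconfig spec above verifies that the config gains a condition whose reason is `NewReplicationControllerCreated`. Against a live cluster such a check usually reads the condition reasons out of the object's status; here a canned value stands in for the cluster response, and the helper name is an assumption:

```shell
#!/bin/sh
# has_reason checks a whitespace-separated list of condition reasons
# for an exact match. On a live cluster the list could come from e.g.:
#   oc get dc/<name> -o=jsonpath='{.status.conditions[*].reason}'
has_reason() {
  reasons=$1; want=$2
  case " $reasons " in
    *" $want "*) return 0 ;;
    *) return 1 ;;
  esac
}

has_reason "NewReplicationControllerCreated" "NewReplicationControllerCreated" \
  && echo "condition reason present"
```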
[It] should create and run a job in user project [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:20 STEP: creating a job from "/tmp/fixture-testdata-dir574852015/test/extended/testdata/jobs/v1.yaml"... Jul 9 19:24:39.019: INFO: Running 'oc create --config=/tmp/e2e-test-job-controller-t8h74-user.kubeconfig --namespace=e2e-test-job-controller-t8h74 -f /tmp/fixture-testdata-dir574852015/test/extended/testdata/jobs/v1.yaml' job.batch "simplev1" created STEP: waiting for a pod... STEP: waiting for a job... STEP: checking job status... STEP: removing a job... Jul 9 19:24:42.409: INFO: Running 'oc delete --config=/tmp/e2e-test-job-controller-t8h74-user.kubeconfig --namespace=e2e-test-job-controller-t8h74 job/simplev1' job.batch "simplev1" deleted [AfterEach] [job][Conformance] openshift can execute jobs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:24:43.019: INFO: namespace : e2e-test-job-controller-t8h74 api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [job][Conformance] openshift can execute jobs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:24:49.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:12.678 seconds] [job][Conformance] openshift can execute jobs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:15 controller /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:19 should create and run a job in user project [Suite:openshift/conformance/parallel] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:20 ------------------------------ [Area:Networking] services basic functionality should allow connections to another pod on the same node via a service IP [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:14 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] basic functionality /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:24:21.041: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-services1-4hdwx STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to another pod on the same node via a service IP [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:14 Jul 9 19:24:23.334: INFO: Using ip-10-0-130-54.us-west-2.compute.internal for test ([ip-10-0-130-54.us-west-2.compute.internal] out of [ip-10-0-130-54.us-west-2.compute.internal]) Jul 9 19:24:25.500: INFO: Target pod IP:port is 10.2.2.164:8080 Jul 9 19:24:25.816: INFO: Target service IP:port is 10.3.254.189:8080 Jul 9 19:24:25.816: INFO: Creating an exec pod on node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:24:25.816: INFO: Creating new exec pod Jul 9 19:24:29.999: INFO: Waiting up to 10s to wget 10.3.254.189:8080 Jul 9 
19:24:29.999: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-tests-net-services1-4hdwx execpod-sourceip-ip-10-0-130-54.us-west-2.compute.internal85l24 -- /bin/sh -c wget -T 30 -qO- 10.3.254.189:8080' Jul 9 19:24:30.586: INFO: stderr: "" Jul 9 19:24:30.586: INFO: Cleaning up the exec pod [AfterEach] basic functionality /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:24:30.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-4hdwx" for this suite. Jul 9 19:24:44.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:24:49.309: INFO: namespace: e2e-tests-net-services1-4hdwx, resource: bindings, ignored listing per whitelist Jul 9 19:24:49.426: INFO: namespace e2e-tests-net-services1-4hdwx deletion completed in 18.671607676s • [SLOW TEST:28.385 seconds] [Area:Networking] services /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 basic functionality /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:11 should allow connections to another pod on the same node via a service IP [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:14 ------------------------------ SSSS ------------------------------ [Feature:DeploymentConfig] deploymentconfigs with test deployments [Conformance] should run a deployment to 
completion and then scale to zero [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:316 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:23:35.814: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:23:37.558: INFO: configPath is now "/tmp/e2e-test-cli-deployment-qkhzd-user.kubeconfig" Jul 9 19:23:37.558: INFO: The user is now "e2e-test-cli-deployment-qkhzd-user" Jul 9 19:23:37.558: INFO: Creating project "e2e-test-cli-deployment-qkhzd" Jul 9 19:23:37.694: INFO: Waiting on permissions in project "e2e-test-cli-deployment-qkhzd" ... [JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should run a deployment to completion and then scale to zero [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:316 Jul 9 19:23:41.877: INFO: Running 'oc logs --config=/tmp/e2e-test-cli-deployment-qkhzd-user.kubeconfig --namespace=e2e-test-cli-deployment-qkhzd -f dc/deployment-test' STEP: checking the logs for substrings --> pre: Running hook pod ... 
test pre hook executed --> pre: Success --> Scaling deployment-test-1 to 2 --> Success STEP: verifying the deployment is marked complete and scaled to zero Jul 9 19:23:58.800: INFO: Latest rollout of dc/deployment-test (rc/deployment-test-1) is complete. STEP: verifying that scaling does not result in new pods Jul 9 19:23:58.800: INFO: Running 'oc scale --config=/tmp/e2e-test-cli-deployment-qkhzd-user.kubeconfig --namespace=e2e-test-cli-deployment-qkhzd dc/deployment-test --replicas=1' STEP: ensuring no scale up of the deployment happens STEP: verifying the scale is updated on the deployment config STEP: deploying a few more times Jul 9 19:24:09.454: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-qkhzd-user.kubeconfig --namespace=e2e-test-cli-deployment-qkhzd latest deployment-test' STEP: waiting for the rollout #2 to finish Jul 9 19:24:12.708: INFO: Running 'oc logs --config=/tmp/e2e-test-cli-deployment-qkhzd-user.kubeconfig --namespace=e2e-test-cli-deployment-qkhzd -f pods/deployment-test-2-deploy' Jul 9 19:24:25.823: INFO: Latest rollout of dc/deployment-test (rc/deployment-test-2) is complete. STEP: checking the logs for substrings --> pre: Running hook pod ... test pre hook executed --> pre: Success --> Scaling up deployment-test-2 from 0 to 1, scaling down deployment-test-1 from 0 to 0 (keep 1 pods available, don't exceed 2 pods) Scaling deployment-test-2 up to 1 --> Success Jul 9 19:24:25.823: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-qkhzd-user.kubeconfig --namespace=e2e-test-cli-deployment-qkhzd latest deployment-test' STEP: waiting for the rollout #3 to finish Jul 9 19:24:28.568: INFO: Running 'oc logs --config=/tmp/e2e-test-cli-deployment-qkhzd-user.kubeconfig --namespace=e2e-test-cli-deployment-qkhzd -f pods/deployment-test-3-deploy' Jul 9 19:24:38.265: INFO: Latest rollout of dc/deployment-test (rc/deployment-test-3) is complete. STEP: checking the logs for substrings --> pre: Running hook pod ... 
test pre hook executed --> pre: Success --> Scaling up deployment-test-3 from 0 to 1, scaling down deployment-test-2 from 0 to 0 (keep 1 pods available, don't exceed 2 pods) Scaling deployment-test-3 up to 1 --> Success Jul 9 19:24:38.265: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-qkhzd-user.kubeconfig --namespace=e2e-test-cli-deployment-qkhzd latest deployment-test' STEP: waiting for the rollout #4 to finish Jul 9 19:24:39.828: INFO: Running 'oc logs --config=/tmp/e2e-test-cli-deployment-qkhzd-user.kubeconfig --namespace=e2e-test-cli-deployment-qkhzd -f pods/deployment-test-4-deploy' Jul 9 19:24:51.697: INFO: Latest rollout of dc/deployment-test (rc/deployment-test-4) is complete. STEP: checking the logs for substrings --> pre: Running hook pod ... test pre hook executed --> pre: Success --> Scaling up deployment-test-4 from 0 to 1, scaling down deployment-test-3 from 0 to 0 (keep 1 pods available, don't exceed 2 pods) Scaling deployment-test-4 up to 1 --> Success [AfterEach] with test deployments [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:312 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:24:53.762: INFO: namespace : e2e-test-cli-deployment-qkhzd api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:24:59.794: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:84.016 seconds] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37 with test deployments [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:310 should run a deployment to completion and then scale to zero [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:316 ------------------------------ SSS ------------------------------ [Feature:DeploymentConfig] deploymentconfigs with revision history limits [Conformance] should never persist more old deployments than acceptable after being observed by the controller [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:947 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:22:11.424: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:22:13.223: INFO: configPath is now "/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig" Jul 9 19:22:13.223: INFO: The user is now 
"e2e-test-cli-deployment-n425w-user" Jul 9 19:22:13.223: INFO: Creating project "e2e-test-cli-deployment-n425w" Jul 9 19:22:13.407: INFO: Waiting on permissions in project "e2e-test-cli-deployment-n425w" ... [JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should never persist more old deployments than acceptable after being observed by the controller [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:947 Jul 9 19:22:17.425: INFO: Latest rollout of dc/history-limit (rc/history-limit-1) is complete. Jul 9 19:22:17.425: INFO: 00: triggering a new deployment with config change Jul 9 19:22:17.425: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig --namespace=e2e-test-cli-deployment-n425w dc/history-limit A=0' Jul 9 19:22:17.979: INFO: Latest rollout of dc/history-limit (rc/history-limit-1) is complete. Jul 9 19:22:17.979: INFO: 01: triggering a new deployment with config change Jul 9 19:22:17.979: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig --namespace=e2e-test-cli-deployment-n425w dc/history-limit A=1' Jul 9 19:22:39.212: INFO: Latest rollout of dc/history-limit (rc/history-limit-3) is complete. Jul 9 19:22:39.212: INFO: 02: triggering a new deployment with config change Jul 9 19:22:39.212: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig --namespace=e2e-test-cli-deployment-n425w dc/history-limit A=2' Jul 9 19:22:50.090: INFO: Latest rollout of dc/history-limit (rc/history-limit-4) is complete. 
Jul 9 19:22:50.090: INFO: 03: triggering a new deployment with config change Jul 9 19:22:50.090: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig --namespace=e2e-test-cli-deployment-n425w dc/history-limit A=3' Jul 9 19:22:50.560: INFO: Latest rollout of dc/history-limit (rc/history-limit-4) is complete. Jul 9 19:22:50.560: INFO: 04: triggering a new deployment with config change Jul 9 19:22:50.560: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig --namespace=e2e-test-cli-deployment-n425w dc/history-limit A=4' Jul 9 19:23:20.469: INFO: Latest rollout of dc/history-limit (rc/history-limit-6) is complete. Jul 9 19:23:20.469: INFO: 05: triggering a new deployment with config change Jul 9 19:23:20.469: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig --namespace=e2e-test-cli-deployment-n425w dc/history-limit A=5' Jul 9 19:23:20.891: INFO: Latest rollout of dc/history-limit (rc/history-limit-6) is complete. Jul 9 19:23:20.891: INFO: 06: triggering a new deployment with config change Jul 9 19:23:20.891: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig --namespace=e2e-test-cli-deployment-n425w dc/history-limit A=6' Jul 9 19:23:47.533: INFO: Latest rollout of dc/history-limit (rc/history-limit-8) is complete. Jul 9 19:23:47.533: INFO: 07: triggering a new deployment with config change Jul 9 19:23:47.533: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig --namespace=e2e-test-cli-deployment-n425w dc/history-limit A=7' Jul 9 19:23:48.050: INFO: Latest rollout of dc/history-limit (rc/history-limit-8) is complete. 
Jul 9 19:23:48.050: INFO: 08: triggering a new deployment with config change Jul 9 19:23:48.050: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig --namespace=e2e-test-cli-deployment-n425w dc/history-limit A=8' Jul 9 19:24:06.266: INFO: Latest rollout of dc/history-limit (rc/history-limit-10) is complete. Jul 9 19:24:06.266: INFO: 09: triggering a new deployment with config change Jul 9 19:24:06.266: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-n425w-user.kubeconfig --namespace=e2e-test-cli-deployment-n425w dc/history-limit A=9' STEP: waiting for the deployment to complete Jul 9 19:24:15.463: INFO: Latest rollout of dc/history-limit (rc/history-limit-11) is complete. [AfterEach] with revision history limits [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:943 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:24:17.890: INFO: namespace : e2e-test-cli-deployment-n425w api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:25:03.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:172.538 seconds] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37 
with revision history limits [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:941 should never persist more old deployments than acceptable after being observed by the controller [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:947 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:24:49.173: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:24:51.238: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-zztz4 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 9 19:24:52.116: INFO: Waiting up to 5m0s for pod 
"pod-6d167779-83e8-11e8-881a-28d244b00276" in namespace "e2e-tests-emptydir-zztz4" to be "success or failure" Jul 9 19:24:52.166: INFO: Pod "pod-6d167779-83e8-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 49.898283ms Jul 9 19:24:54.212: INFO: Pod "pod-6d167779-83e8-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.096065221s STEP: Saw pod success Jul 9 19:24:54.212: INFO: Pod "pod-6d167779-83e8-11e8-881a-28d244b00276" satisfied condition "success or failure" Jul 9 19:24:54.252: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-6d167779-83e8-11e8-881a-28d244b00276 container test-container: STEP: delete the pod Jul 9 19:24:54.393: INFO: Waiting for pod pod-6d167779-83e8-11e8-881a-28d244b00276 to disappear Jul 9 19:24:54.432: INFO: Pod pod-6d167779-83e8-11e8-881a-28d244b00276 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:24:54.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zztz4" for this suite. 
Jul 9 19:25:00.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:25:04.482: INFO: namespace: e2e-tests-emptydir-zztz4, resource: bindings, ignored listing per whitelist
Jul 9 19:25:05.494: INFO: namespace e2e-tests-emptydir-zztz4 deletion completed in 11.01467866s
• [SLOW TEST:16.321 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:68
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:24:49.442: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:24:51.451: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-d6whk
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:68
STEP: Creating configMap with name configmap-test-volume-6d2ba492-83e8-11e8-992b-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:24:52.289: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d319e2c-83e8-11e8-992b-28d244b00276" in namespace "e2e-tests-configmap-d6whk" to be "success or failure"
Jul 9 19:24:52.329: INFO: Pod "pod-configmaps-6d319e2c-83e8-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 39.501192ms
Jul 9 19:24:54.385: INFO: Pod "pod-configmaps-6d319e2c-83e8-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095965558s
Jul 9 19:24:56.425: INFO: Pod "pod-configmaps-6d319e2c-83e8-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136116685s
STEP: Saw pod success
Jul 9 19:24:56.425: INFO: Pod "pod-configmaps-6d319e2c-83e8-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:24:56.466: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-6d319e2c-83e8-11e8-992b-28d244b00276 container configmap-volume-test:
STEP: delete the pod
Jul 9 19:24:56.559: INFO: Waiting for pod pod-configmaps-6d319e2c-83e8-11e8-992b-28d244b00276 to disappear
Jul 9 19:24:56.596: INFO: Pod pod-configmaps-6d319e2c-83e8-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:24:56.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-d6whk" for this suite.
Jul 9 19:25:02.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:25:04.982: INFO: namespace: e2e-tests-configmap-d6whk, resource: bindings, ignored listing per whitelist
Jul 9 19:25:07.029: INFO: namespace e2e-tests-configmap-d6whk deletion completed in 10.389688049s
• [SLOW TEST:17.586 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:68
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:25:07.031: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:25:07.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:25:07.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[Area:Networking] network isolation
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10
  when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
    should allow communication from non-default to default namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:49

    Jul 9 19:25:07.031: This plugin does not isolate namespaces by default.
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:25:05.498: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:25:07.579: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-node-pools-rh9lm
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/gke_node_pools.go:32
Jul 9 19:25:08.351: INFO: Only supported for providers [gke] (not )
[AfterEach] [k8s.io] GKE node pools [Feature:GKENodePool]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:25:08.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-node-pools-rh9lm" for this suite.
Jul 9 19:25:14.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:25:17.652: INFO: namespace: e2e-tests-node-pools-rh9lm, resource: bindings, ignored listing per whitelist
Jul 9 19:25:19.268: INFO: namespace e2e-tests-node-pools-rh9lm deletion completed in 10.871479565s
S [SKIPPING] in Spec Setup (BeforeEach) [13.771 seconds]
[k8s.io] GKE node pools [Feature:GKENodePool]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should create a cluster with multiple node pools [Feature:GKENodePool] [Suite:openshift/conformance/parallel] [Suite:k8s] [BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/gke_node_pools.go:36

  Jul 9 19:25:08.351: Only supported for providers [gke] (not )

  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[Feature:Builds] Multi-stage image builds should succeed [Conformance] [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/multistage.go:47
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds] Multi-stage image builds
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:24:59.832: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds] Multi-stage image builds
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:25:01.602: INFO: configPath is now "/tmp/e2e-test-build-multistage-6nv4h-user.kubeconfig"
Jul 9 19:25:01.602: INFO: The user is now "e2e-test-build-multistage-6nv4h-user"
Jul 9 19:25:01.602: INFO: Creating project "e2e-test-build-multistage-6nv4h"
Jul 9 19:25:01.819: INFO: Waiting on permissions in project "e2e-test-build-multistage-6nv4h" ...
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/multistage.go:33
STEP: waiting for builder service account
[It] should succeed [Conformance] [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/multistage.go:47
STEP: creating a build directly
Jul 9 19:25:02.138: INFO: Waiting for multi-stage to complete
Jul 9 19:25:13.218: INFO: Done waiting for multi-stage: util.BuildResult{BuildPath:"builds/multi-stage", BuildName:"multi-stage", StartBuildStdErr:"", StartBuildStdOut:"", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421e52600), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4210490e0)} with error:
Jul 9 19:25:13.268: INFO: Running 'oc logs --config=/tmp/e2e-test-build-multistage-6nv4h-user.kubeconfig --namespace=e2e-test-build-multistage-6nv4h -f builds/multi-stage --timestamps'
Jul 9 19:25:13.606: INFO: Build logs: &{builds/multi-stage multi-stage %!s(*build.Build=&{{ } {multi-stage e2e-test-build-multistage-6nv4h /apis/build.openshift.io/v1/namespaces/e2e-test-build-multistage-6nv4h/builds/multi-stage 73135c69-83e8-11e8-aa51-0af96768d57e 83822 0 {{0 63666786302 0x6b11480}} map[] map[openshift.io/build.pod-name:multi-stage-build] [] [] } {{ { 0xc4213202d0 [{{DockerImage centos:7 } [scratch] [] }] []} {0xc420499f80 } {0xc420400000 0xc4213203d0 []} {map[] map[]} {[] [] } map[]} []} {Complete false 0xc42210b860 0xc42210b8c0 6000000000 docker-registry.default.svc:5000/e2e-test-build-multistage-6nv4h/multi-stage:v1 {0xc421320310} [{Build {{0 63666786305 0x6b11480}} 1119 [{DockerBuild {{0 63666786305 0x6b11480}} 1119}]} {PushImage {{0 63666786306 0x6b11480}} 494 [{PushDockerImage {{0 63666786306 0x6b11480}} 494}]}] }}) %!s(bool=true) %!s(bool=true) %!s(bool=false) %!s(bool=false) %!s(bool=false) %!s(util.LogDumperFunc=) %!s(*util.CLI=&{oc /tmp/e2e-test-build-multistage-6nv4h-user.kubeconfig /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig e2e-test-build-multistage-6nv4h-user [] [] [] [e2e-test-build-multistage-6nv4h] false false 0xc421317cc0})}
Jul 9 19:25:13.650: INFO: Waiting up to 5m0s for pod "test" in namespace "e2e-test-build-multistage-6nv4h" to be "success or failure"
Jul 9 19:25:13.692: INFO: Pod "test": Phase="Pending", Reason="", readiness=false. Elapsed: 42.629278ms
Jul 9 19:25:15.723: INFO: Pod "test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.072974037s
Jul 9 19:25:15.723: INFO: Pod "test" satisfied condition "success or failure"
Jul 9 19:25:15.757: INFO: Pod logs:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  4020    0  4020    0     0  24331      0 --:--:-- --:--:-- --:--:-- 24363
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps.openshift.io",
    "/apis/apps.openshift.io/v1",
    "/apis/apps/v1",
    "/apis/apps/v1beta1",
    "/apis/apps/v1beta2",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/authorization.openshift.io",
    "/apis/authorization.openshift.io/v1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2beta1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/build.openshift.io",
    "/apis/build.openshift.io/v1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/events.k8s.io",
    "/apis/events.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/image.openshift.io",
    "/apis/image.openshift.io/v1",
    "/apis/kvo.coreos.com",
    "/apis/kvo.coreos.com/v1",
    "/apis/metrics.k8s.io",
    "/apis/metrics.k8s.io/v1beta1",
    "/apis/multicluster.coreos.com",
    "/apis/multicluster.coreos.com/v1",
    "/apis/ncg.coreos.com",
    "/apis/ncg.coreos.com/v1beta1",
    "/apis/network.openshift.io",
    "/apis/network.openshift.io/v1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/oauth.openshift.io",
    "/apis/oauth.openshift.io/v1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/project.openshift.io",
    "/apis/project.openshift.io/v1",
    "/apis/quota.openshift.io",
    "/apis/quota.openshift.io/v1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/route.openshift.io",
    "/apis/route.openshift.io/v1",
    "/apis/scheduling.k8s.io",
    "/apis/scheduling.k8s.io/v1beta1",
    "/apis/security.openshift.io",
    "/apis/security.openshift.io/v1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/apis/tco.coreos.com",
    "/apis/tco.coreos.com/v1",
    "/apis/template.openshift.io",
    "/apis/template.openshift.io/v1",
    "/apis/user.openshift.io",
    "/apis/user.openshift.io/v1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/etcd",
    "/healthz/log",
    "/healthz/ping",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/authorization.openshift.io-bootstrapclusterroles",
    "/healthz/poststarthook/authorization.openshift.io-ensureopenshift-infra",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/oauth.openshift.io-StartOAuthClientsBootstrapping",
    "/healthz/poststarthook/openshift.io-RESTMapper",
    "/healthz/poststarthook/openshift.io-StartInformers",
    "/healthz/poststarthook/quota.openshift.io-clusterquotamapping",
    "/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/metrics",
    "/oapi",
    "/openapi/v2",
    "/swagger-2.0.0.json",
    "/swagger-2.0.0.pb-v1",
    "/swagger-2.0.0.pb-v1.gz",
    "/swagger.json",
    "/swaggerapi",
    "/version",
    "/version/openshift"
  ]
}
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/multistage.go:40
[AfterEach] [Feature:Builds] Multi-stage image builds
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:25:15.821: INFO: namespace : e2e-test-build-multistage-6nv4h api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds] Multi-stage image builds
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:25:21.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:22.063 seconds]
[Feature:Builds] Multi-stage image builds
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/multistage.go:19
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/multistage.go:31
    should succeed [Conformance] [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/multistage.go:47
------------------------------
S
------------------------------
[sig-storage] Projected should provide container's memory limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:25:19.270: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:25:21.280: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-lg9fx
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide container's memory limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:25:22.150: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7efd72ba-83e8-11e8-881a-28d244b00276" in namespace "e2e-tests-projected-lg9fx" to be "success or failure"
Jul 9 19:25:22.191: INFO: Pod "downwardapi-volume-7efd72ba-83e8-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 41.125326ms
Jul 9 19:25:24.234: INFO: Pod "downwardapi-volume-7efd72ba-83e8-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.084018882s
STEP: Saw pod success
Jul 9 19:25:24.234: INFO: Pod "downwardapi-volume-7efd72ba-83e8-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:25:24.275: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-7efd72ba-83e8-11e8-881a-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:25:24.366: INFO: Waiting for pod downwardapi-volume-7efd72ba-83e8-11e8-881a-28d244b00276 to disappear
Jul 9 19:25:24.410: INFO: Pod downwardapi-volume-7efd72ba-83e8-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:25:24.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lg9fx" for this suite.
Jul 9 19:25:30.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:25:33.066: INFO: namespace: e2e-tests-projected-lg9fx, resource: bindings, ignored listing per whitelist
Jul 9 19:25:35.286: INFO: namespace e2e-tests-projected-lg9fx deletion completed in 10.823441243s
• [SLOW TEST:16.017 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide container's memory limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] Projected should set DefaultMode on files [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:25:21.897: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:25:23.401: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-8tgct
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should set DefaultMode on files [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:25:24.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-802dace2-83e8-11e8-bd2e-28d244b00276" in namespace "e2e-tests-projected-8tgct" to be "success or failure"
Jul 9 19:25:24.164: INFO: Pod "downwardapi-volume-802dace2-83e8-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.998629ms
Jul 9 19:25:26.194: INFO: Pod "downwardapi-volume-802dace2-83e8-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063255683s
STEP: Saw pod success
Jul 9 19:25:26.194: INFO: Pod "downwardapi-volume-802dace2-83e8-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:25:26.230: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-802dace2-83e8-11e8-bd2e-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:25:26.307: INFO: Waiting for pod downwardapi-volume-802dace2-83e8-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:25:26.337: INFO: Pod downwardapi-volume-802dace2-83e8-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:25:26.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8tgct" for this suite.
Jul 9 19:25:32.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:25:35.309: INFO: namespace: e2e-tests-projected-8tgct, resource: bindings, ignored listing per whitelist
Jul 9 19:25:35.858: INFO: namespace e2e-tests-projected-8tgct deletion completed in 9.489501649s
• [SLOW TEST:13.961 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should set DefaultMode on files [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[Conformance][templates] templateservicebroker end-to-end test should pass an end-to-end test [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:367
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateservicebroker end-to-end test
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:25:35.288: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateservicebroker end-to-end test
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:25:37.488: INFO: configPath is now "/tmp/e2e-test-templates-prx6s-user.kubeconfig"
Jul 9 19:25:37.488: INFO: The user is now "e2e-test-templates-prx6s-user"
Jul 9 19:25:37.488: INFO: Creating project "e2e-test-templates-prx6s"
Jul 9 19:25:37.663: INFO: Waiting on permissions in project "e2e-test-templates-prx6s" ...
[BeforeEach] [Conformance][templates] templateservicebroker end-to-end test
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:56
[AfterEach] [Conformance][templates] templateservicebroker end-to-end test
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:25:37.961: INFO: namespace : e2e-test-templates-prx6s api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][templates] templateservicebroker end-to-end test
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:25:44.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [Conformance][templates] templateservicebroker end-to-end test
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:104
• Failure in Spec Setup (BeforeEach) [8.868 seconds]
[Conformance][templates] templateservicebroker end-to-end test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:39
  [BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:353
    should pass an end-to-end test [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:367

    Expected error:
        <*errors.StatusError | 0xc4217ba7e0>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "services \"apiserver\" not found",
                Reason: "NotFound",
                Details: {Name: "apiserver", Group: "", Kind: "services", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 404,
            },
        }
        services "apiserver" not found
    not to have occurred

    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:63
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:25:35.860: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:25:37.521: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-qnzmb
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 9 19:25:38.227: INFO: Waiting up to 5m0s for pod "pod-88936e8f-83e8-11e8-bd2e-28d244b00276" in namespace "e2e-tests-emptydir-qnzmb" to be "success or failure"
Jul 9 19:25:38.255: INFO: Pod "pod-88936e8f-83e8-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 28.240958ms
Jul 9 19:25:40.285: INFO: Pod "pod-88936e8f-83e8-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057852081s
STEP: Saw pod success
Jul 9 19:25:40.285: INFO: Pod "pod-88936e8f-83e8-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:25:40.323: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-88936e8f-83e8-11e8-bd2e-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:25:40.400: INFO: Waiting for pod pod-88936e8f-83e8-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:25:40.430: INFO: Pod pod-88936e8f-83e8-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:25:40.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qnzmb" for this suite.
Jul 9 19:25:46.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:25:48.230: INFO: namespace: e2e-tests-emptydir-qnzmb, resource: bindings, ignored listing per whitelist Jul 9 19:25:50.192: INFO: namespace e2e-tests-emptydir-qnzmb deletion completed in 9.728805842s • [SLOW TEST:14.332 seconds] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ S ------------------------------ [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:444 Jul 9 19:25:50.438: INFO: Could not check network plugin name: exit status 1. 
Assuming a non-OpenShift plugin Jul 9 19:25:50.438: INFO: Not using one of the specified plugins [AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 [AfterEach] when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:25:50.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.246 seconds] [Area:Networking] multicast /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21 when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:442 should block multicast traffic in namespaces where it is disabled [Suite:openshift/conformance/parallel] [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:42 Jul 9 19:25:50.438: Not using one of the specified plugins /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296 ------------------------------ [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:184 [BeforeEach] [Top Level] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Sysctls /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:25:03.963: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:25:05.477: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-sysctl-4742m STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:56 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:184 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [k8s.io] Sysctls /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 STEP: Collecting events from namespace "e2e-tests-sysctl-4742m". STEP: Found 5 events. 
Jul 9 19:25:10.176: INFO: At 2018-07-09 19:25:06 -0700 PDT - event for sysctl-756e41b3-83e8-11e8-8401-28d244b00276: {default-scheduler } Scheduled: Successfully assigned e2e-tests-sysctl-4742m/sysctl-756e41b3-83e8-11e8-8401-28d244b00276 to ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:25:10.176: INFO: At 2018-07-09 19:25:06 -0700 PDT - event for sysctl-756e41b3-83e8-11e8-8401-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulling: pulling image "busybox" Jul 9 19:25:10.176: INFO: At 2018-07-09 19:25:08 -0700 PDT - event for sysctl-756e41b3-83e8-11e8-8401-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Successfully pulled image "busybox" Jul 9 19:25:10.176: INFO: At 2018-07-09 19:25:08 -0700 PDT - event for sysctl-756e41b3-83e8-11e8-8401-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container Jul 9 19:25:10.176: INFO: At 2018-07-09 19:25:08 -0700 PDT - event for sysctl-756e41b3-83e8-11e8-8401-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container Jul 9 19:25:10.307: INFO: POD NODE PHASE GRACE CONDITIONS Jul 9 19:25:10.307: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:10.307: INFO: multi-stage-build ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:05 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:08 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2018-07-09 19:25:02 -0700 PDT }] Jul 9 19:25:10.307: INFO: execpodts6g4 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:24:14 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:24:16 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:24:14 -0700 PDT }] Jul 9 19:25:10.307: INFO: dns-test-776b08cb-83e8-11e8-992b-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:09 -0700 PDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:09 -0700 PDT ContainersNotReady containers with unready status: [querier]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [querier]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:09 -0700 PDT }] Jul 9 19:25:10.307: INFO: sysctl-756e41b3-83e8-11e8-8401-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:06 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:06 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:06 -0700 PDT }] Jul 9 19:25:10.307: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }] Jul 9 19:25:10.307: INFO: kube-controller-manager-558dc6fb98-q6vr5 
ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:25:10.307: INFO: kube-core-operator-75d546fbbb-c7ctx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }] Jul 9 19:25:10.307: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:25:10.307: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }] Jul 9 19:25:10.307: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }] Jul 9 19:25:10.307: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }] Jul 9 19:25:10.307: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }] Jul 9 19:25:10.307: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }] Jul 9 19:25:10.308: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }] Jul 9 19:25:10.308: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:25:10.308: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:10.308: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }] Jul 9 19:25:10.308: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:25:10.308: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }] Jul 9 19:25:10.308: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal 
ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT }] Jul 9 19:25:10.308: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }] Jul 9 19:25:10.308: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }] Jul 9 19:25:10.308: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }] Jul 9 19:25:10.308: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 
UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:25:10.308: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:10.308: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:10.308: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:10.308: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }] Jul 9 19:25:10.308: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 
-0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:10.308: INFO: kube-addon-operator-675f99d7f8-c6pdt ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:25:10.308: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:25:10.308: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:25:10.308: INFO: tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }] Jul 9 
19:25:10.308: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }] Jul 9 19:25:10.308: INFO: tectonic-node-agent-rrwlg ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }] Jul 9 19:25:10.308: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }] Jul 9 19:25:10.309: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }] Jul 9 19:25:10.309: INFO: Jul 9 19:25:10.342: INFO: Logging node info for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:25:10.373: INFO: Node Info: 
&Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:83816,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365150208 0} {} 8169092Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260292608 0} {} 8066692Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:25:08 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} 
{MemoryPressure False 2018-07-09 19:25:08 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:25:08 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:25:08 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:25:08 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-frlkw/test@sha256:ee11e7c7dbb2d609aaa42c8806ef1bf5663df95dd925e6ab424b4439dbaf75fd docker-registry.default.svc:5000/e2e-test-build-valuefrom-frlkw/test:latest] 613134548} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 
613134454} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 613133841} {[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} {[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a 
docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} {[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} {[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} {[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:91955c14f978a0f48918eecc8b3772faf1615e943daccf9bb051a51cba30422f] 465041680} 
{[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} {[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 openshift/origin-haproxy-router:latest] 394965919} {[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} {[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 
docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-r2mt4/image1@sha256:16dcb9524a1c672adffa862499aabbb0d97d5c996120b2934c1ab382355ec4ea docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-r2mt4/image1:latest] 199678471} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/directory-sync@sha256:e5e7fe901868853d89c2c0697cc88f0686c6ba1178ca045ec57bfd18e7000048 quay.io/coreos/directory-sync:v0.0.2] 38433928} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[k8s.gcr.io/addon-resizer@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb k8s.gcr.io/addon-resizer:2.1] 26450138} 
{[quay.io/coreos/tectonic-error-server@sha256:aefa0a012e103bee299c17e798e5830128588b6ef5d4d1f6bc8ae5804bc4d8cd quay.io/coreos/tectonic-error-server:1.1] 12714516} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:25:10.374: INFO: Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:25:10.402: INFO: Logging pods the kubelet thinks is on node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:25:10.487: INFO: webconsole-6698d4fbbc-rgsw2 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container webconsole ready: true, restart count 0
Jul 9 19:25:10.487: INFO: dns-test-776b08cb-83e8-11e8-992b-28d244b00276 started at 2018-07-09 19:25:09 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container querier ready: false, restart count 0
Jul 9 19:25:10.487: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container node-agent ready: true, restart count 3
Jul 9 19:25:10.487: INFO: directory-sync-d84d84d9f-j7pr6 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container directory-sync ready: true, restart count 0
Jul 9 19:25:10.487: INFO: multi-stage-build started at 2018-07-09 19:25:02 -0700 PDT (2+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Init container extract-image-content ready: true, restart count 0
Jul 9 19:25:10.487: INFO: Init container manage-dockerfile ready: true, restart count 0
Jul 9 19:25:10.487: INFO: Container docker-build ready: false, restart count 0
Jul 9 19:25:10.487: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container default-http-backend ready: true, restart count 0
Jul 9 19:25:10.487: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container router ready: true, restart count 0
Jul 9 19:25:10.487: INFO: sysctl-756e41b3-83e8-11e8-8401-28d244b00276 started at 2018-07-09 19:25:06 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container test-container ready: false, restart count 0
Jul 9 19:25:10.487: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container alert-buffer ready: true, restart count 0
Jul 9 19:25:10.487: INFO: Container alertmanager ready: true, restart count 0
Jul 9 19:25:10.487: INFO: Container alertmanager-proxy ready: true, restart count 0
Jul 9 19:25:10.487: INFO: Container alerts-proxy ready: true, restart count 0
Jul 9 19:25:10.487: INFO: Container prom-proxy ready: true, restart count 0
Jul 9 19:25:10.487: INFO: Container prometheus ready: true, restart count 0
Jul 9 19:25:10.487: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:25:10.487: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container registry ready: true, restart count 0
Jul 9 19:25:10.487: INFO: execpodts6g4 started at 2018-07-09 19:24:14 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container exec ready: true, restart count 0
Jul 9 19:25:10.487: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:25:10.487: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:25:10.487: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:25:10.487: INFO: Container metrics-server ready: true, restart count 0
Jul 9 19:25:10.487: INFO: Container metrics-server-nanny ready: true, restart count 0
W0709 19:25:10.528685 11714 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:25:10.740: INFO: Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:25:10.740: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.293292s}
Jul 9 19:25:10.740: INFO: Logging node info for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:25:10.795: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:83634,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {}
2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:25:01 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:25:01 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:25:01 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:25:01 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:25:01 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS ip-10-0-141-201.us-west-2.compute.internal} {Hostname ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} 
{[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:25:10.795: INFO: Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:25:10.833: INFO: Logging pods the kubelet thinks is on node ip-10-0-141-201.us-west-2.compute.internal Jul 9 19:25:40.862: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes ip-10-0-141-201.us-west-2.compute.internal:10250) Jul 9 19:25:40.862: INFO: Logging node info for node ip-10-0-35-213.us-west-2.compute.internal Jul 9 19:25:40.895: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:84225,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: 
ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig: master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:25:40 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:25:40 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:25:40 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:25:40 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:25:40 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} 
{InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} {[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} 
{[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} {[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} {[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b 
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} {[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:25:40.895: INFO: Logging kubelet events 
for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:25:40.928: INFO: Logging pods the kubelet thinks is on node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:25:41.033: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.033: INFO: Container kube-apiserver ready: true, restart count 4
Jul 9 19:25:41.033: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.033: INFO: Container tectonic-node-controller ready: true, restart count 0
Jul 9 19:25:41.033: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.033: INFO: Container tectonic-alm-operator ready: true, restart count 0
Jul 9 19:25:41.033: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.033: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0
Jul 9 19:25:41.033: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.033: INFO: Container node-agent ready: true, restart count 4
Jul 9 19:25:41.033: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.033: INFO: Container tectonic-utility-operator ready: true, restart count 0
Jul 9 19:25:41.033: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.033: INFO: Container pod-checkpointer ready: true, restart count 0
Jul 9 19:25:41.033: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:25:41.033: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:25:41.033: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:25:41.033: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container openshift-controller-manager ready: true, restart count 3
Jul 9 19:25:41.034: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container tectonic-node-controller-operator ready: true, restart count 0
Jul 9 19:25:41.034: INFO: kube-core-operator-75d546fbbb-c7ctx started at 2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container kube-core-operator ready: true, restart count 0
Jul 9 19:25:41.034: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container kube-addon-operator ready: true, restart count 0
Jul 9 19:25:41.034: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at (0+0 container statuses recorded)
Jul 9 19:25:41.034: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container kube-controller-manager ready: true, restart count 1
Jul 9 19:25:41.034: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container kube-scheduler ready: true, restart count 0
Jul 9 19:25:41.034: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container tectonic-clu ready: true, restart count 0
Jul 9 19:25:41.034: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded)
Jul 9 19:25:41.034: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0
Jul 9 19:25:41.034: INFO: Container tectonic-stats-emitter ready: true, restart count 0
Jul 9 19:25:41.034: INFO: Container tectonic-stats-extender ready: true, restart count 0
Jul 9 19:25:41.034: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container tectonic-channel-operator ready: true, restart count 0
Jul 9 19:25:41.034: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:25:41.034: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container openshift-apiserver ready: true, restart count 0
Jul 9 19:25:41.034: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container tectonic-network-operator ready: true, restart count 0
Jul 9 19:25:41.034: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded)
Jul 9 19:25:41.034: INFO: Container dnsmasq ready: true, restart count 0
Jul 9 19:25:41.034: INFO: Container kubedns ready: true, restart count 0
Jul 9 19:25:41.034: INFO: Container sidecar ready: true, restart count 0
W0709 19:25:41.085957 11714 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:25:41.173: INFO: Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:25:41.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sysctl-4742m" for this suite.
Jul 9 19:25:47.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:25:50.338: INFO: namespace: e2e-tests-sysctl-4742m, resource: bindings, ignored listing per whitelist
Jul 9 19:25:51.219: INFO: namespace e2e-tests-sysctl-4742m deletion completed in 9.58853252s

• Failure [47.256 seconds]
[k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should not launch unsafe, but not explicitly enabled sysctls on the node [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:184

  Expected
      <*v1.Event | 0x0>: nil
  not to be nil

  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:207
------------------------------
[sig-storage] Projected should be consumable from pods in volume as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:25:44.157: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:25:46.184: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-s9zfd
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name projected-configmap-test-volume-8dc8f544-83e8-11e8-881a-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:25:47.107: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8dd40555-83e8-11e8-881a-28d244b00276" in namespace "e2e-tests-projected-s9zfd" to be "success or failure"
Jul 9 19:25:47.157: INFO: Pod "pod-projected-configmaps-8dd40555-83e8-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 50.419119ms
Jul 9 19:25:49.201: INFO: Pod "pod-projected-configmaps-8dd40555-83e8-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.093801976s
STEP: Saw pod success
Jul 9 19:25:49.201: INFO: Pod "pod-projected-configmaps-8dd40555-83e8-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:25:49.267: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-8dd40555-83e8-11e8-881a-28d244b00276 container projected-configmap-volume-test:
STEP: delete the pod
Jul 9 19:25:49.376: INFO: Waiting for pod pod-projected-configmaps-8dd40555-83e8-11e8-881a-28d244b00276 to disappear
Jul 9 19:25:49.416: INFO: Pod pod-projected-configmaps-8dd40555-83e8-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:25:49.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s9zfd" for this suite.
Jul 9 19:25:55.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:26:00.689: INFO: namespace: e2e-tests-projected-s9zfd, resource: bindings, ignored listing per whitelist
Jul 9 19:26:01.043: INFO: namespace e2e-tests-projected-s9zfd deletion completed in 11.577940392s

• [SLOW TEST:16.885 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] Projected should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:25:51.222: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:25:52.792: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-rl6f9
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-91ae2b4a-83e8-11e8-8401-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:25:53.551: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-91b43db2-83e8-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-rl6f9" to be "success or failure"
Jul 9 19:25:53.580: INFO: Pod "pod-projected-secrets-91b43db2-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.325217ms
Jul 9 19:25:55.609: INFO: Pod "pod-projected-secrets-91b43db2-83e8-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057917823s
STEP: Saw pod success
Jul 9 19:25:55.609: INFO: Pod "pod-projected-secrets-91b43db2-83e8-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:25:55.639: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-91b43db2-83e8-11e8-8401-28d244b00276 container projected-secret-volume-test:
STEP: delete the pod
Jul 9 19:25:55.737: INFO: Waiting for pod pod-projected-secrets-91b43db2-83e8-11e8-8401-28d244b00276 to disappear
Jul 9 19:25:55.766: INFO: Pod pod-projected-secrets-91b43db2-83e8-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:25:55.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rl6f9" for this suite.
Jul 9 19:26:01.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:26:04.687: INFO: namespace: e2e-tests-projected-rl6f9, resource: bindings, ignored listing per whitelist
Jul 9 19:26:05.474: INFO: namespace e2e-tests-projected-rl6f9 deletion completed in 9.676280144s

• [SLOW TEST:14.252 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
DNS should answer endpoint and wildcard queries for the cluster [Conformance] [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/dns/dns.go:298
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] DNS
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:25:07.033: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-dns-fvkds
STEP: Waiting for a default service account to be provisioned in namespace
[It] should answer endpoint and wildcard queries for the cluster [Conformance] [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/dns/dns.go:298
STEP: Running these commands:
for i in `seq 1 10`; do
test -n "$$(dig +notcp +noall +answer +search prefix.kubernetes.default A)" && echo "test_udp@prefix.kubernetes.default";
test -n "$$(dig +tcp +noall +answer +search prefix.kubernetes.default A)" && echo "test_tcp@prefix.kubernetes.default";
test -n "$$(dig +notcp +noall +answer +search prefix.kubernetes.default.svc A)" && echo "test_udp@prefix.kubernetes.default.svc";
test -n "$$(dig +tcp +noall +answer +search prefix.kubernetes.default.svc A)" && echo "test_tcp@prefix.kubernetes.default.svc";
test -n "$$(dig +notcp +noall +answer +search prefix.kubernetes.default.svc.cluster.local A)" && echo "test_udp@prefix.kubernetes.default.svc.cluster.local";
test -n "$$(dig +tcp +noall +answer +search prefix.kubernetes.default.svc.cluster.local A)" && echo "test_tcp@prefix.kubernetes.default.svc.cluster.local";
test -n "$$(dig +notcp +noall +answer +search prefix.clusterip.e2e-tests-dns-fvkds A)" && echo "test_udp@prefix.clusterip.e2e-tests-dns-fvkds";
test -n "$$(dig +tcp +noall +answer +search prefix.clusterip.e2e-tests-dns-fvkds A)" && echo "test_tcp@prefix.clusterip.e2e-tests-dns-fvkds";
test -n "$$(dig +notcp +noall +additional +search _http._tcp.externalname.e2e-tests-dns-fvkds.svc SRV)" && echo "test_udp@_http._tcp.externalname.e2e-tests-dns-fvkds.svc";
test -n "$$(dig +tcp +noall +additional +search _http._tcp.externalname.e2e-tests-dns-fvkds.svc SRV)" && echo "test_tcp@_http._tcp.externalname.e2e-tests-dns-fvkds.svc";
test -n "$$(dig +notcp +noall +answer +search externalname.e2e-tests-dns-fvkds.svc CNAME)" && echo "test_udp@externalname.e2e-tests-dns-fvkds.svc";
test -n "$$(dig +tcp +noall +answer +search externalname.e2e-tests-dns-fvkds.svc CNAME)" && echo "test_tcp@externalname.e2e-tests-dns-fvkds.svc";
[ "$$(dig +short +notcp +noall +answer +search headless.e2e-tests-dns-fvkds.svc A | sort | xargs echo)" = "1.1.1.1 1.1.1.2" ] && echo "test_endpoints@headless.e2e-tests-dns-fvkds.svc";
[ "$$(dig +short +notcp +noall +answer +search headless.e2e-tests-dns-fvkds.endpoints A | sort | xargs echo)" = "1.1.1.1 1.1.1.2" ] && echo "test_endpoints@headless.e2e-tests-dns-fvkds.endpoints";
[ "$$(dig +short +notcp +noall +answer +search clusterip.e2e-tests-dns-fvkds.endpoints A | sort | xargs echo)" = "1.1.1.1 1.1.1.2" ] && echo "test_endpoints@clusterip.e2e-tests-dns-fvkds.endpoints";
[ "$$(dig +short +notcp +noall +answer +search endpoint1.headless.e2e-tests-dns-fvkds.endpoints A | sort | xargs echo)" = "1.1.1.1" ] && echo "test_endpoints@endpoint1.headless.e2e-tests-dns-fvkds.endpoints";
[ "$$(dig +short +notcp +noall +answer +search endpoint1.clusterip.e2e-tests-dns-fvkds.endpoints A | sort | xargs echo)" = "1.1.1.1" ] && echo "test_endpoints@endpoint1.clusterip.e2e-tests-dns-fvkds.endpoints";
[ "$$(dig +short +notcp +noall +answer +search kubernetes.default.endpoints A | sort | xargs echo)" = "10.0.35.213" ] && echo "test_endpoints@kubernetes.default.endpoints";
[ "$(dig +short +notcp +noall +answer +search 2.1.1.1.in-addr.arpa PTR)" = "" ] && echo "test_ptr@1.1.1.2";
[ "$(dig +short +notcp +noall +answer +search 1.1.1.2.in-addr.arpa PTR)" = "" ] && echo "test_ptr@2.1.1.1";
[ "$(dig +short +notcp +noall +answer +search 1.1.1.1.in-addr.arpa PTR)" = "endpoint1.headless.e2e-tests-dns-fvkds.svc.cluster.local." ] && echo "test_ptr@1.1.1.1";
podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fvkds.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo "test_udp@PodARecord";test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo "test_tcp@PodARecord";sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod logs STEP: looking for the results for each expected name from probiers Jul 9 19:25:25.677: INFO: Got results from pod: test_udp@externalname.e2e-tests-dns-fvkds.svc test_tcp@externalname.e2e-tests-dns-fvkds.svc test_endpoints@headless.e2e-tests-dns-fvkds.svc test_ptr@1.1.1.2 test_ptr@2.1.1.1 test_ptr@1.1.1.1 test_udp@PodARecord test_tcp@PodARecord test_udp@externalname.e2e-tests-dns-fvkds.svc test_tcp@externalname.e2e-tests-dns-fvkds.svc test_endpoints@headless.e2e-tests-dns-fvkds.svc test_ptr@1.1.1.2 test_ptr@2.1.1.1 test_ptr@1.1.1.1 test_udp@PodARecord test_tcp@PodARecord test_udp@externalname.e2e-tests-dns-fvkds.svc test_tcp@externalname.e2e-tests-dns-fvkds.svc test_endpoints@headless.e2e-tests-dns-fvkds.svc test_ptr@1.1.1.2 test_ptr@2.1.1.1 test_ptr@1.1.1.1 test_udp@PodARecord test_tcp@PodARecord test_udp@externalname.e2e-tests-dns-fvkds.svc test_tcp@externalname.e2e-tests-dns-fvkds.svc test_endpoints@headless.e2e-tests-dns-fvkds.svc test_ptr@1.1.1.2 test_ptr@2.1.1.1 test_ptr@1.1.1.1 test_udp@PodARecord test_tcp@PodARecord test_udp@externalname.e2e-tests-dns-fvkds.svc test_tcp@externalname.e2e-tests-dns-fvkds.svc test_endpoints@headless.e2e-tests-dns-fvkds.svc test_ptr@1.1.1.2 test_ptr@2.1.1.1 test_ptr@1.1.1.1 test_udp@PodARecord test_tcp@PodARecord test_udp@externalname.e2e-tests-dns-fvkds.svc test_tcp@externalname.e2e-tests-dns-fvkds.svc test_endpoints@headless.e2e-tests-dns-fvkds.svc test_ptr@1.1.1.2 test_ptr@2.1.1.1 test_ptr@1.1.1.1 test_udp@PodARecord test_tcp@PodARecord test_udp@externalname.e2e-tests-dns-fvkds.svc test_tcp@externalname.e2e-tests-dns-fvkds.svc 
test_endpoints@headless.e2e-tests-dns-fvkds.svc test_ptr@1.1.1.2 test_ptr@2.1.1.1 test_ptr@1.1.1.1 test_udp@PodARecord test_tcp@PodARecord test_udp@externalname.e2e-tests-dns-fvkds.svc test_tcp@externalname.e2e-tests-dns-fvkds.svc test_endpoints@headless.e2e-tests-dns-fvkds.svc test_ptr@1.1.1.2 test_ptr@2.1.1.1 test_ptr@1.1.1.1 test_udp@PodARecord test_tcp@PodARecord test_udp@externalname.e2e-tests-dns-fvkds.svc test_tcp@externalname.e2e-tests-dns-fvkds.svc test_endpoints@headless.e2e-tests-dns-fvkds.svc test_ptr@1.1.1.2 test_ptr@2.1.1.1 test_ptr@1.1.1.1 test_udp@PodARecord test_tcp@PodARecord test_udp@externalname.e2e-tests-dns-fvkds.svc test_tcp@externalname.e2e-tests-dns-fvkds.svc test_endpoints@headless.e2e-tests-dns-fvkds.svc test_ptr@1.1.1.2 test_ptr@2.1.1.1 test_ptr@1.1.1.1 test_udp@PodARecord test_tcp@PodARecord Jul 9 19:25:25.677: INFO: Unexpected results: unexpected count 0/10 for "test_udp@prefix.kubernetes.default": map[] STEP: deleting the pod [AfterEach] DNS /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 STEP: Collecting events from namespace "e2e-tests-dns-fvkds". STEP: Found 5 events. 
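[Editor's note on the failure summary above: the probe script also checks the pod's own A record ("test_udp@PodARecord" / "test_tcp@PodARecord"), whose name is derived from the pod IP by replacing dots with dashes under <namespace>.pod.cluster.local. A minimal, self-contained sketch of that derivation follows; the pod IP 10.2.2.5 is a hypothetical value consistent with this node's 10.2.2.0/24 pod CIDR, not taken from the log, and the doubled "$$" in the logged script is template escaping for a single "$".]

```shell
# Derive the pod A-record name queried by the probe script above,
# mirroring its awk expression (with "$$" unescaped to "$").
# 10.2.2.5 is a hypothetical pod IP for illustration.
pod_ip="10.2.2.5"
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".e2e-tests-dns-fvkds.pod.cluster.local"}')
echo "$podARec"
# → 10-2-2-5.e2e-tests-dns-fvkds.pod.cluster.local
```

[In the real probe, `dig +notcp +noall +answer +search "$podARec" A` must return a non-empty answer for the PodARecord lines to appear in the pod output; here they do, while the "prefix.kubernetes.default" wildcard queries returned 0/10, which is what fails the test.]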
Jul 9 19:25:25.781: INFO: At 2018-07-09 19:25:09 -0700 PDT - event for dns-test-776b08cb-83e8-11e8-992b-28d244b00276: {default-scheduler } Scheduled: Successfully assigned e2e-tests-dns-fvkds/dns-test-776b08cb-83e8-11e8-992b-28d244b00276 to ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:25:25.781: INFO: At 2018-07-09 19:25:10 -0700 PDT - event for dns-test-776b08cb-83e8-11e8-992b-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Container image "gcr.io/google_containers/dnsutils:e2e" already present on machine Jul 9 19:25:25.781: INFO: At 2018-07-09 19:25:10 -0700 PDT - event for dns-test-776b08cb-83e8-11e8-992b-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container Jul 9 19:25:25.781: INFO: At 2018-07-09 19:25:10 -0700 PDT - event for dns-test-776b08cb-83e8-11e8-992b-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container Jul 9 19:25:25.781: INFO: At 2018-07-09 19:25:25 -0700 PDT - event for dns-test-776b08cb-83e8-11e8-992b-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Killing: Killing container with id docker://querier:Need to kill Pod Jul 9 19:25:25.950: INFO: POD NODE PHASE GRACE CONDITIONS Jul 9 19:25:25.950: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: execpodts6g4 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:24:14 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:24:16 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:24:14 -0700 PDT }] Jul 9 19:25:25.950: INFO: downwardapi-volume-802dace2-83e8-11e8-bd2e-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:24 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:24 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:24 -0700 PDT }] Jul 9 19:25:25.950: INFO: sysctl-756e41b3-83e8-11e8-8401-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:06 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:06 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:25:06 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-controller-manager-558dc6fb98-q6vr5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-core-operator-75d546fbbb-c7ctx 
ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2018-07-09 13:32:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:25:25.950: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 
0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:25:25.950: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }] Jul 9 19:25:25.950: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT }] Jul 9 19:25:25.950: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:25:25.950: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }] Jul 9 19:25:25.950: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: kube-addon-operator-675f99d7f8-c6pdt ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 
UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-node-agent-rrwlg 
ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }] Jul 9 19:25:25.950: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }] Jul 9 19:25:25.950: INFO: Jul 9 19:25:26.007: INFO: Logging node info for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:25:26.049: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:83921,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: 
us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365150208 0} {} 8169092Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260292608 0} {} 8066692Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:25:18 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:25:18 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:25:18 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:25:18 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:25:18 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready 
status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-frlkw/test@sha256:ee11e7c7dbb2d609aaa42c8806ef1bf5663df95dd925e6ab424b4439dbaf75fd docker-registry.default.svc:5000/e2e-test-build-valuefrom-frlkw/test:latest] 613134548} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 613134454} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 613133841} {[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} 
{[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} {[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} 
{[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} {[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} {[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:91955c14f978a0f48918eecc8b3772faf1615e943daccf9bb051a51cba30422f] 465041680} {[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} {[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 openshift/origin-haproxy-router:latest] 394965919} 
{[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} {[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[docker-registry.default.svc:5000/e2e-test-build-multistage-6nv4h/multi-stage@sha256:fae21b55071abd175d4207707eccd5b5aedf3e20e34714cba2ccfacfd394587a docker-registry.default.svc:5000/e2e-test-build-multistage-6nv4h/multi-stage:v1] 199835207} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 
docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-r2mt4/image1@sha256:16dcb9524a1c672adffa862499aabbb0d97d5c996120b2934c1ab382355ec4ea docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-r2mt4/image1:latest] 199678471} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/directory-sync@sha256:e5e7fe901868853d89c2c0697cc88f0686c6ba1178ca045ec57bfd18e7000048 quay.io/coreos/directory-sync:v0.0.2] 38433928} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[k8s.gcr.io/addon-resizer@sha256:d00afd42fc267fa3275a541083cfe67d160f966c788174b44597434760e1e1eb k8s.gcr.io/addon-resizer:2.1] 26450138} {[quay.io/coreos/tectonic-error-server@sha256:aefa0a012e103bee299c17e798e5830128588b6ef5d4d1f6bc8ae5804bc4d8cd quay.io/coreos/tectonic-error-server:1.1] 12714516}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:25:26.049: INFO: Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:25:26.089: INFO: Logging pods the kubelet thinks is on node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:25:26.171: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container default-http-backend ready: true, restart count 0
Jul 9 19:25:26.171: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container router ready: true, restart count 0
Jul 9 19:25:26.171: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:25:26.171: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container registry ready: true, restart count 0
Jul 9 19:25:26.171: INFO: sysctl-756e41b3-83e8-11e8-8401-28d244b00276 started at 2018-07-09 19:25:06 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container test-container ready: false, restart count 0
Jul 9 19:25:26.171: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container alert-buffer ready: true, restart count 0
Jul 9 19:25:26.171: INFO: Container alertmanager ready: true, restart count 0
Jul 9 19:25:26.171: INFO: Container alertmanager-proxy ready: true, restart count 0
Jul 9 19:25:26.171: INFO: Container alerts-proxy ready: true, restart count 0
Jul 9 19:25:26.171: INFO: Container prom-proxy ready: true, restart count 0
Jul 9 19:25:26.171: INFO: Container prometheus ready: true, restart count 0
Jul 9 19:25:26.171: INFO: downwardapi-volume-802dace2-83e8-11e8-bd2e-28d244b00276 started at 2018-07-09 19:25:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container client-container ready: false, restart count 0
Jul 9 19:25:26.171: INFO: execpodts6g4 started at 2018-07-09 19:24:14 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container exec ready: true, restart count 0
Jul 9 19:25:26.171: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:25:26.171: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:25:26.171: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container metrics-server ready: true, restart count 0
Jul 9 19:25:26.171: INFO: Container metrics-server-nanny ready: true, restart count 0
Jul 9 19:25:26.171: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container node-agent ready: true, restart count 3
Jul 9 19:25:26.171: INFO: directory-sync-d84d84d9f-j7pr6 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container directory-sync ready: true, restart count 0
Jul 9 19:25:26.171: INFO: webconsole-6698d4fbbc-rgsw2 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:26.171: INFO: Container webconsole ready: true, restart count 0
W0709 19:25:26.229287   11713 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:25:26.329: INFO: Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:25:26.329: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.364515s}
Jul 9 19:25:26.329: INFO: Logging node info for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:25:26.369: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:83980,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {}
110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:25:21 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:25:21 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:25:21 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:25:21 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:25:21 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS ip-10-0-141-201.us-west-2.compute.internal} {Hostname ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} 
{[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:25:26.369: INFO: Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:25:26.412: INFO: Logging pods the kubelet thinks is on node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:25:56.470: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes ip-10-0-141-201.us-west-2.compute.internal:10250)
Jul 9 19:25:56.470: INFO: Logging node info for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:25:56.629: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:84360,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig:
master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:25:50 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:25:50 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:25:50 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:25:50 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:25:50 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} {InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS 
Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} {[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} 
{[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} {[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e 
quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} {[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:25:56.629: INFO: Logging kubelet events for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:25:56.704: INFO: Logging pods the kubelet thinks is on node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:25:57.190: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container pod-checkpointer ready: true, restart count 0
Jul 9 19:25:57.191: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:25:57.191: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:25:57.191: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container openshift-controller-manager ready: true, restart count 3
Jul 9 19:25:57.191: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container tectonic-node-controller-operator ready: true, restart count 0
Jul 9 19:25:57.191: INFO: kube-core-operator-75d546fbbb-c7ctx started at 2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container kube-core-operator ready: true, restart count 0
Jul 9 19:25:57.191: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container tectonic-utility-operator ready: true, restart count 0
Jul 9 19:25:57.191: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container kube-addon-operator ready: true, restart count 0
Jul 9 19:25:57.191: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at <nil> (0+0 container statuses recorded)
Jul 9 19:25:57.191: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container kube-controller-manager ready: true, restart count 1
Jul 9 19:25:57.191: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded)
Jul 9 19:25:57.191: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0
Jul 9 19:25:57.191: INFO: Container tectonic-stats-emitter ready: true, restart count 0
Jul 9 19:25:57.191: INFO: Container tectonic-stats-extender ready: true, restart count 0
Jul 9 19:25:57.191: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container tectonic-channel-operator ready: true, restart count 0
Jul 9 19:25:57.191: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:25:57.191: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container openshift-apiserver ready: true, restart count 0
Jul 9 19:25:57.191: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container tectonic-network-operator ready: true, restart count 0
Jul 9 19:25:57.191: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container dnsmasq ready: true, restart count 0
Jul 9 19:25:57.191: INFO: Container kubedns ready: true, restart count 0
Jul 9 19:25:57.191: INFO: Container sidecar ready: true, restart count 0
Jul 9 19:25:57.191: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container kube-scheduler ready: true, restart count 0
Jul 9 19:25:57.191: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container tectonic-clu ready: true, restart count 0
Jul 9 19:25:57.191: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container kube-apiserver ready: true, restart count 4
Jul 9 19:25:57.191: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container tectonic-node-controller ready: true, restart count 0
Jul 9 19:25:57.191: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container tectonic-alm-operator ready: true, restart count 0
Jul 9 19:25:57.191: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0
Jul 9 19:25:57.191: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:25:57.191: INFO: Container node-agent ready: true, restart count 4
W0709 19:25:57.230591   11713 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:25:57.365: INFO: Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:25:57.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-fvkds" for this suite.
Jul 9 19:26:03.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:26:06.688: INFO: namespace: e2e-tests-dns-fvkds, resource: bindings, ignored listing per whitelist Jul 9 19:26:08.306: INFO: namespace e2e-tests-dns-fvkds deletion completed in 10.855064602s • Failure [61.273 seconds] DNS /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/dns/dns.go:295 should answer endpoint and wildcard queries for the cluster [Conformance] [Suite:openshift/conformance/parallel] [It] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/dns/dns.go:298 Jul 9 19:25:25.677: Unexpected results: unexpected count 0/10 for "test_udp@prefix.kubernetes.default": map[] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/dns/dns.go:233 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Docker Containers /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:26:01.045: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:26:03.227: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp 
PodSecurityPolicy to the default service account in e2e-tests-containers-tq6s2 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test override command Jul 9 19:26:04.068: INFO: Waiting up to 5m0s for pod "client-containers-97fa215a-83e8-11e8-881a-28d244b00276" in namespace "e2e-tests-containers-tq6s2" to be "success or failure" Jul 9 19:26:04.107: INFO: Pod "client-containers-97fa215a-83e8-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 39.482891ms Jul 9 19:26:06.162: INFO: Pod "client-containers-97fa215a-83e8-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.093839147s STEP: Saw pod success Jul 9 19:26:06.162: INFO: Pod "client-containers-97fa215a-83e8-11e8-881a-28d244b00276" satisfied condition "success or failure" Jul 9 19:26:06.201: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod client-containers-97fa215a-83e8-11e8-881a-28d244b00276 container test-container: STEP: delete the pod Jul 9 19:26:06.310: INFO: Waiting for pod client-containers-97fa215a-83e8-11e8-881a-28d244b00276 to disappear Jul 9 19:26:06.355: INFO: Pod client-containers-97fa215a-83e8-11e8-881a-28d244b00276 no longer exists [AfterEach] [k8s.io] Docker Containers /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:26:06.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-tq6s2" for this suite. 
Jul 9 19:26:12.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:26:16.598: INFO: namespace: e2e-tests-containers-tq6s2, resource: bindings, ignored listing per whitelist
Jul 9 19:26:17.110: INFO: namespace e2e-tests-containers-tq6s2 deletion completed in 10.689648747s

• [SLOW TEST:16.065 seconds]
[k8s.io] Docker Containers
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should be able to override the image's default command (docker entrypoint) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:26:05.475: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:26:07.249: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-localssd-87sxs
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:36
Jul 9 19:26:07.872: INFO: Only supported for providers [gke] (not )
[AfterEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:26:07.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-localssd-87sxs" for this suite.
Jul 9 19:26:14.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:26:15.881: INFO: namespace: e2e-tests-localssd-87sxs, resource: bindings, ignored listing per whitelist
Jul 9 19:26:17.378: INFO: namespace e2e-tests-localssd-87sxs deletion completed in 9.474268497s

S [SKIPPING] in Spec Setup (BeforeEach) [11.903 seconds]
[k8s.io] GKE local SSD [Feature:GKELocalSSD]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should write and read from node local SSD [Feature:GKELocalSSD] [Suite:openshift/conformance/parallel] [Suite:k8s] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:40

Jul 9 19:26:07.872: Only supported for providers [gke] (not )

/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:26:17.379: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:26:18.922: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-47z7s
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 9 19:26:19.596: INFO: Waiting up to 5m0s for pod "pod-a13c203d-83e8-11e8-8401-28d244b00276" in namespace "e2e-tests-emptydir-47z7s" to be "success or failure"
Jul 9 19:26:19.628: INFO: Pod "pod-a13c203d-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.742763ms
Jul 9 19:26:21.658: INFO: Pod "pod-a13c203d-83e8-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062012036s
STEP: Saw pod success
Jul 9 19:26:21.658: INFO: Pod "pod-a13c203d-83e8-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:26:21.703: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-a13c203d-83e8-11e8-8401-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:26:21.779: INFO: Waiting for pod pod-a13c203d-83e8-11e8-8401-28d244b00276 to disappear
Jul 9 19:26:21.808: INFO: Pod pod-a13c203d-83e8-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:26:21.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-47z7s" for this suite.
Jul 9 19:26:27.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:26:29.745: INFO: namespace: e2e-tests-emptydir-47z7s, resource: bindings, ignored listing per whitelist
Jul 9 19:26:31.398: INFO: namespace e2e-tests-emptydir-47z7s deletion completed in 9.556618952s

• [SLOW TEST:14.019 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,tmpfs) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:26:31.400: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:26:33.138: INFO: configPath is now "/tmp/e2e-test-router-metrics-hrv8g-user.kubeconfig"
Jul 9 19:26:33.138: INFO: The user is now "e2e-test-router-metrics-hrv8g-user"
Jul 9 19:26:33.138: INFO: Creating project "e2e-test-router-metrics-hrv8g"
Jul 9 19:26:33.281: INFO: Waiting on permissions in project "e2e-test-router-metrics-hrv8g" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:26:33.422: INFO: namespace : e2e-test-router-metrics-hrv8g api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:26:39.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76

S [SKIPPING] in Spec Setup (BeforeEach) [8.117 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26
The HAProxy router [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82
should expose prometheus metrics for a route [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:95

no router installed on the cluster

/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:26:39.518: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:26:39.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:26:39.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Area:Networking] network isolation
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10
when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
should allow communication from default to non-default namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:41

Jul 9 19:26:39.518: This plugin does not isolate namespaces by default.

/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:26:39.520: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:26:40.992: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-xbsvk
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide container's cpu request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:26:41.632: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae5f0f16-83e8-11e8-8401-28d244b00276" in namespace "e2e-tests-downward-api-xbsvk" to be "success or failure"
Jul 9 19:26:41.663: INFO: Pod "downwardapi-volume-ae5f0f16-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.538574ms
Jul 9 19:26:43.697: INFO: Pod "downwardapi-volume-ae5f0f16-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06537022s
Jul 9 19:26:45.732: INFO: Pod "downwardapi-volume-ae5f0f16-83e8-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100403679s
STEP: Saw pod success
Jul 9 19:26:45.732: INFO: Pod "downwardapi-volume-ae5f0f16-83e8-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:26:45.762: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-ae5f0f16-83e8-11e8-8401-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:26:45.835: INFO: Waiting for pod downwardapi-volume-ae5f0f16-83e8-11e8-8401-28d244b00276 to disappear
Jul 9 19:26:45.910: INFO: Pod downwardapi-volume-ae5f0f16-83e8-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:26:45.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xbsvk" for this suite.
Jul 9 19:26:52.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:26:54.102: INFO: namespace: e2e-tests-downward-api-xbsvk, resource: bindings, ignored listing per whitelist
Jul 9 19:26:55.665: INFO: namespace e2e-tests-downward-api-xbsvk deletion completed in 9.689986433s

• [SLOW TEST:16.145 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide container's cpu request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[Feature:DeploymentConfig] deploymentconfigs generation [Conformance] should deploy based on a status version bump [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:653
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:26:08.308: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:26:10.420: INFO: configPath is now "/tmp/e2e-test-cli-deployment-pqd4k-user.kubeconfig"
Jul 9 19:26:10.420: INFO: The user is now "e2e-test-cli-deployment-pqd4k-user"
Jul 9 19:26:10.420: INFO: Creating project "e2e-test-cli-deployment-pqd4k"
Jul 9 19:26:10.542: INFO: Waiting on permissions in project "e2e-test-cli-deployment-pqd4k" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should deploy based on a status version bump [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:653
STEP: verifying that both latestVersion and generation are updated
Jul 9 19:26:10.644: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-pqd4k-user.kubeconfig --namespace=e2e-test-cli-deployment-pqd4k dc/generation-test --output=jsonpath="{.status.latestVersion}"'
STEP: checking the latest version for deployment config "generation-test": 1
Jul 9 19:26:10.861: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-pqd4k-user.kubeconfig --namespace=e2e-test-cli-deployment-pqd4k dc/generation-test --output=jsonpath="{.metadata.generation}"'
STEP: checking the generation for deployment config "generation-test": 1
STEP: verifying the deployment is marked complete
Jul 9 19:26:15.938: INFO: Latest rollout of dc/generation-test (rc/generation-test-1) is complete.
STEP: verifying that scaling updates the generation
Jul 9 19:26:15.938: INFO: Running 'oc scale --config=/tmp/e2e-test-cli-deployment-pqd4k-user.kubeconfig --namespace=e2e-test-cli-deployment-pqd4k dc/generation-test --replicas=2'
Jul 9 19:26:16.242: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-pqd4k-user.kubeconfig --namespace=e2e-test-cli-deployment-pqd4k dc/generation-test --output=jsonpath="{.metadata.generation}"'
STEP: checking the generation for deployment config generation-test: 2
STEP: deploying a second time [new client]
Jul 9 19:26:16.519: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-pqd4k-user.kubeconfig --namespace=e2e-test-cli-deployment-pqd4k latest dc/generation-test'
STEP: verifying that both latestVersion and generation are updated
Jul 9 19:26:16.831: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-pqd4k-user.kubeconfig --namespace=e2e-test-cli-deployment-pqd4k dc/generation-test --output=jsonpath="{.status.latestVersion}"'
STEP: checking the latest version for deployment config "generation-test": 2
Jul 9 19:26:17.094: INFO: Running 'oc get --config=/tmp/e2e-test-cli-deployment-pqd4k-user.kubeconfig --namespace=e2e-test-cli-deployment-pqd4k dc/generation-test --output=jsonpath="{.metadata.generation}"'
STEP: checking the generation for deployment config "generation-test": 3
STEP: verifying that observedGeneration equals generation
[AfterEach] generation [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:649
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:26:19.574: INFO: namespace : e2e-test-cli-deployment-pqd4k api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:27:05.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:57.359 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
generation [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:647
should deploy based on a status version bump [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:653
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:26:55.670: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:26:57.378: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-bckx5
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-test-volume-b831f8bc-83e8-11e8-8401-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:26:58.179: INFO: Waiting up to 5m0s for pod "pod-configmaps-b83701f0-83e8-11e8-8401-28d244b00276" in namespace "e2e-tests-configmap-bckx5" to be "success or failure"
Jul 9 19:26:58.209: INFO: Pod "pod-configmaps-b83701f0-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.821944ms
Jul 9 19:27:00.275: INFO: Pod "pod-configmaps-b83701f0-83e8-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.095996487s
STEP: Saw pod success
Jul 9 19:27:00.275: INFO: Pod "pod-configmaps-b83701f0-83e8-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:27:00.326: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-b83701f0-83e8-11e8-8401-28d244b00276 container configmap-volume-test:
STEP: delete the pod
Jul 9 19:27:00.449: INFO: Waiting for pod pod-configmaps-b83701f0-83e8-11e8-8401-28d244b00276 to disappear
Jul 9 19:27:00.481: INFO: Pod pod-configmaps-b83701f0-83e8-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:27:00.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bckx5" for this suite.
Jul 9 19:27:06.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:27:10.236: INFO: namespace: e2e-tests-configmap-bckx5, resource: bindings, ignored listing per whitelist
Jul 9 19:27:10.491: INFO: namespace e2e-tests-configmap-bckx5 deletion completed in 9.932813903s

• [SLOW TEST:14.821 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[Conformance][templates] templateinstance readiness test should report failed soon after an annotated objects has failed [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:168
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateinstance readiness test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:27:10.493: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateinstance readiness test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:27:12.298: INFO: configPath is now "/tmp/e2e-test-templates-xkkrs-user.kubeconfig"
Jul 9 19:27:12.298: INFO: The user is now "e2e-test-templates-xkkrs-user"
Jul 9 19:27:12.298: INFO: Creating project "e2e-test-templates-xkkrs"
Jul 9 19:27:12.512: INFO: Waiting on permissions in project "e2e-test-templates-xkkrs" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:101
Jul 9 19:27:12.703: INFO: Running 'oc create --config=/tmp/e2e-test-templates-xkkrs-user.kubeconfig --namespace=e2e-test-templates-xkkrs -f /tmp/fixture-testdata-dir180677416/examples/quickstarts/cakephp-mysql.json'
template.template.openshift.io "cakephp-mysql-example" created
[It] should report failed soon after an annotated objects has failed [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:168
STEP: instantiating the templateinstance
STEP: waiting for build and dc to settle
STEP: waiting for the templateinstance to indicate failed
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:112
[AfterEach] [Conformance][templates] templateinstance readiness test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:27:19.462: INFO: namespace : e2e-test-templates-xkkrs api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][templates] templateinstance readiness test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:27:43.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:33.049 seconds]
[Conformance][templates] templateinstance readiness test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:24
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:100
should report failed soon after an annotated objects has failed [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:168
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:27:43.543: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:27:45.082: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-crqtf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-test-volume-map-d4999b53-83e8-11e8-8401-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:27:45.797: INFO: Waiting up to 5m0s for pod "pod-configmaps-d49e49c5-83e8-11e8-8401-28d244b00276" in namespace "e2e-tests-configmap-crqtf" to be "success or failure"
Jul 9 19:27:45.830: INFO: Pod "pod-configmaps-d49e49c5-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.951679ms
Jul 9 19:27:47.863: INFO: Pod "pod-configmaps-d49e49c5-83e8-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065752289s
STEP: Saw pod success
Jul 9 19:27:47.863: INFO: Pod "pod-configmaps-d49e49c5-83e8-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:27:47.895: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-d49e49c5-83e8-11e8-8401-28d244b00276 container configmap-volume-test:
STEP: delete the pod
Jul 9 19:27:47.985: INFO: Waiting for pod pod-configmaps-d49e49c5-83e8-11e8-8401-28d244b00276 to disappear
Jul 9 19:27:48.021: INFO: Pod pod-configmaps-d49e49c5-83e8-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:27:48.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-crqtf" for this suite.
Jul 9 19:27:54.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:27:55.624: INFO: namespace: e2e-tests-configmap-crqtf, resource: bindings, ignored listing per whitelist
Jul 9 19:27:58.189: INFO: namespace e2e-tests-configmap-crqtf deletion completed in 10.1323697s

• [SLOW TEST:14.645 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:27:58.191: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:27:59.770: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-hrg64
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 9 19:28:00.492: INFO: Waiting up to 5m0s for pod "pod-dd5fec0c-83e8-11e8-8401-28d244b00276" in namespace "e2e-tests-emptydir-hrg64" to be "success or failure"
Jul 9 19:28:00.525: INFO: Pod "pod-dd5fec0c-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.036832ms
Jul 9 19:28:02.556: INFO: Pod "pod-dd5fec0c-83e8-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064292391s
STEP: Saw pod success
Jul 9 19:28:02.556: INFO: Pod "pod-dd5fec0c-83e8-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:28:02.590: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-dd5fec0c-83e8-11e8-8401-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:28:02.689: INFO: Waiting for pod pod-dd5fec0c-83e8-11e8-8401-28d244b00276 to disappear
Jul 9 19:28:02.742: INFO: Pod pod-dd5fec0c-83e8-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:28:02.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hrg64" for this suite.
Jul 9 19:28:08.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:28:11.290: INFO: namespace: e2e-tests-emptydir-hrg64, resource: bindings, ignored listing per whitelist Jul 9 19:28:12.322: INFO: namespace e2e-tests-emptydir-hrg64 deletion completed in 9.507135793s • [SLOW TEST:14.131 seconds] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [Feature:DeploymentConfig] deploymentconfigs keep the deployer pod invariant valid [Conformance] should deal with cancellation after deployer pod succeeded [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1370 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:27:05.669: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:27:07.871: INFO: configPath is now "/tmp/e2e-test-cli-deployment-mcflv-user.kubeconfig" Jul 
9 19:27:07.872: INFO: The user is now "e2e-test-cli-deployment-mcflv-user" Jul 9 19:27:07.872: INFO: Creating project "e2e-test-cli-deployment-mcflv" Jul 9 19:27:08.059: INFO: Waiting on permissions in project "e2e-test-cli-deployment-mcflv" ... [JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should deal with cancellation after deployer pod succeeded [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1370 STEP: creating DC STEP: waiting for RC to be created STEP: waiting for deployer to be completed STEP: canceling the deployment STEP: redeploying immediately by config change [AfterEach] keep the deployer pod invariant valid [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1236 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:27:25.055: INFO: namespace : e2e-test-cli-deployment-mcflv api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:28:17.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:71.479 seconds] [Feature:DeploymentConfig] 
deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37 keep the deployer pod invariant valid [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1233 should deal with cancellation after deployer pod succeeded [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1370 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-api-machinery] Secrets /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:28:12.323: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:28:13.904: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-fsmnn STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: creating secret e2e-tests-secrets-fsmnn/secret-test-e5c36c2d-83e8-11e8-8401-28d244b00276 STEP: Creating a pod to test consume secrets Jul 9 19:28:14.600: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5c851fc-83e8-11e8-8401-28d244b00276" in namespace "e2e-tests-secrets-fsmnn" to be "success or failure" Jul 9 19:28:14.633: INFO: Pod "pod-configmaps-e5c851fc-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.087223ms Jul 9 19:28:16.669: INFO: Pod "pod-configmaps-e5c851fc-83e8-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069247938s Jul 9 19:28:18.699: INFO: Pod "pod-configmaps-e5c851fc-83e8-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098868339s STEP: Saw pod success Jul 9 19:28:18.699: INFO: Pod "pod-configmaps-e5c851fc-83e8-11e8-8401-28d244b00276" satisfied condition "success or failure" Jul 9 19:28:18.734: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-e5c851fc-83e8-11e8-8401-28d244b00276 container env-test: STEP: delete the pod Jul 9 19:28:18.801: INFO: Waiting for pod pod-configmaps-e5c851fc-83e8-11e8-8401-28d244b00276 to disappear Jul 9 19:28:18.831: INFO: Pod pod-configmaps-e5c851fc-83e8-11e8-8401-28d244b00276 no longer exists [AfterEach] [sig-api-machinery] Secrets /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:28:18.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-fsmnn" for this suite. 
Jul 9 19:28:24.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:28:28.619: INFO: namespace: e2e-tests-secrets-fsmnn, resource: bindings, ignored listing per whitelist Jul 9 19:28:28.781: INFO: namespace e2e-tests-secrets-fsmnn deletion completed in 9.885192483s • [SLOW TEST:16.459 seconds] [sig-api-machinery] Secrets /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:30 should be consumable via the environment [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ SSS ------------------------------ [Conformance][templates] templateinstance readiness test should report ready soon after all annotated objects are ready [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:119 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Conformance][templates] templateinstance readiness test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:26:17.112: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Conformance][templates] templateinstance readiness test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:26:19.292: INFO: configPath is now 
"/tmp/e2e-test-templates-tjfzt-user.kubeconfig" Jul 9 19:26:19.292: INFO: The user is now "e2e-test-templates-tjfzt-user" Jul 9 19:26:19.292: INFO: Creating project "e2e-test-templates-tjfzt" Jul 9 19:26:19.430: INFO: Waiting on permissions in project "e2e-test-templates-tjfzt" ... [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:101 Jul 9 19:26:19.603: INFO: Running 'oc create --config=/tmp/e2e-test-templates-tjfzt-user.kubeconfig --namespace=e2e-test-templates-tjfzt -f /tmp/fixture-testdata-dir574852015/examples/quickstarts/cakephp-mysql.json' template.template.openshift.io "cakephp-mysql-example" created [It] should report ready soon after all annotated objects are ready [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:119 STEP: instantiating the templateinstance STEP: waiting for build and dc to settle STEP: waiting for the templateinstance to indicate ready [AfterEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:112 [AfterEach] [Conformance][templates] templateinstance readiness test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:28:17.315: INFO: namespace : e2e-test-templates-tjfzt api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Conformance][templates] templateinstance readiness test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:28:39.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:142.341 
seconds] [Conformance][templates] templateinstance readiness test /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:24 /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:100 should report ready soon after all annotated objects are ready [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:119 ------------------------------ SSS ------------------------------ [Feature:Builds] Optimized image builds should succeed [Conformance] [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:49 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:Builds] Optimized image builds /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:28:17.153: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:Builds] Optimized image builds /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:28:19.272: INFO: configPath is now "/tmp/e2e-test-build-dockerfile-env-t559b-user.kubeconfig" Jul 9 19:28:19.272: INFO: The user is now "e2e-test-build-dockerfile-env-t559b-user" Jul 9 19:28:19.272: INFO: Creating project "e2e-test-build-dockerfile-env-t559b" Jul 9 19:28:19.408: INFO: Waiting on permissions in project 
"e2e-test-build-dockerfile-env-t559b" ... [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:31 Jul 9 19:28:19.452: INFO: docker info output: Containers: 4 Running: 0 Paused: 0 Stopped: 4 Images: 4 Server Version: 1.13.1 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 20 Dirperm1 Supported: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1) runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f) init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574) Security Options: apparmor seccomp Profile: default Kernel Version: 4.4.0-128-generic Operating System: Ubuntu 16.04.4 LTS OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.495 GiB Name: yifan-coreos ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Username: yifan Registry: https://index.docker.io/v1/ Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false [JustBeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:35 STEP: waiting for builder service account [It] should succeed [Conformance] [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:49 STEP: creating a build directly Jul 9 19:28:19.647: INFO: Waiting for optimized to complete Jul 9 19:28:30.747: INFO: Done waiting for optimized: util.BuildResult{BuildPath:"builds/optimized", BuildName:"optimized", StartBuildStdErr:"", StartBuildStdOut:"", 
StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421a58600), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e34a0)} with error: Jul 9 19:28:30.799: INFO: Running 'oc logs --config=/tmp/e2e-test-build-dockerfile-env-t559b-user.kubeconfig --namespace=e2e-test-build-dockerfile-env-t559b -f builds/optimized --timestamps' Jul 9 19:28:33.227: INFO: Build logs: &{builds/optimized optimized %!s(*build.Build=&{{ } {optimized e2e-test-build-dockerfile-env-t559b /apis/build.openshift.io/v1/namespaces/e2e-test-build-dockerfile-env-t559b/builds/optimized e8ceca33-83e8-11e8-aa51-0af96768d57e 86437 0 {{0 63666786499 0x6b11480}} map[] map[openshift.io/build.pod-name:optimized-build] [] [] } {{ { 0xc4206adb90 [] []} {0xc420e2be30 } { []} {map[] map[]} {[] [] } map[]} []} {Complete false 0xc421075da0 0xc421075e00 8000000000 {} [{Build {{0 63666786502 0x6b11480}} 2525 [{DockerBuild {{0 63666786502 0x6b11480}} 2525}]}] }}) %!s(bool=true) %!s(bool=true) %!s(bool=false) %!s(bool=false) %!s(bool=false) %!s(util.LogDumperFunc=) %!s(*util.CLI=&{oc /tmp/e2e-test-build-dockerfile-env-t559b-user.kubeconfig /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig e2e-test-build-dockerfile-env-t559b-user [] [] [] [e2e-test-build-dockerfile-env-t559b] false false 0xc421024640})} [AfterEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:42 [AfterEach] [Feature:Builds] Optimized image builds /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:28:33.358: INFO: namespace : e2e-test-build-dockerfile-env-t559b api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:Builds] Optimized image builds 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:28:39.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:22.390 seconds] [Feature:Builds] Optimized image builds /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:17 /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:29 should succeed [Conformance] [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:49 ------------------------------ [Feature:Prometheus][Conformance] Prometheus when installed to the cluster should start and expose a secured proxy and unsecured metrics [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus.go:44 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:Prometheus][Conformance] Prometheus /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:23:57.402: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:Prometheus][Conformance] Prometheus /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:23:59.062: INFO: configPath is now "/tmp/e2e-test-prometheus-9ggxp-user.kubeconfig" Jul 9 19:23:59.062: INFO: The user is now 
"e2e-test-prometheus-9ggxp-user" Jul 9 19:23:59.062: INFO: Creating project "e2e-test-prometheus-9ggxp" Jul 9 19:23:59.275: INFO: Waiting on permissions in project "e2e-test-prometheus-9ggxp" ... [BeforeEach] [Feature:Prometheus][Conformance] Prometheus /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus.go:39 [It] should start and expose a secured proxy and unsecured metrics [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus.go:44 Jul 9 19:24:14.855: INFO: Creating new exec pod STEP: checking the unsecured metrics path Jul 9 19:24:20.982: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k "https://prometheus.kube-system.svc:443/metrics"' Jul 9 19:24:21.834: INFO: stderr: "" STEP: verifying the oauth-proxy reports a 403 on the root URL Jul 9 19:24:21.842: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -k -s -o /dev/null -w '%{http_code}' "https://prometheus.kube-system.svc:443"' Jul 9 19:24:22.659: INFO: stderr: "" STEP: verifying a service account token is able to authenticate Jul 9 19:24:22.659: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl 
--server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -k -s -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' -o /dev/null -w '%{http_code}' "https://prometheus.kube-system.svc:443/graph"' Jul 9 19:24:23.472: INFO: stderr: "" STEP: verifying a service account token is able to access the Prometheus API Jul 9 19:24:23.472: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:24.370: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:25.371: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:34.302: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:35.304: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:36.240: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:37.243: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:38.264: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:39.266: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:40.297: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:41.298: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:42.430: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:43.431: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:44.326: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:45.329: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:46.150: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:47.152: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:47.927: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:48.928: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:49.917: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:50.918: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:51.951: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:52.952: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:53.891: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:54.893: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:55.788: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:56.791: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:57.749: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:24:58.750: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:24:59.885: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:25:00.886: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:25:01.914: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:25:02.916: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:25:03.853: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:25:04.855: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:25:06.191: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:25:07.192: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:25:08.193: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:25:09.194: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:25:10.272: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:25:11.275: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:25:12.276: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:25:13.277: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:25:14.323: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:25:15.325: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:06.157: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:07.158: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:08.177: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:09.178: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:10.227: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:11.229: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:12.408: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:13.410: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:14.472: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:15.473: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:16.586: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:17.587: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:18.541: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:19.543: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:21.077: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:22.078: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:23.455: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:24.457: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:25.743: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:26.745: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:27.939: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:28.940: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:30.118: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:31.119: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:32.604: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:33.606: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:35.633: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:36.635: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:39.306: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:40.307: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:41.231: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:42.233: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:43.892: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:44.894: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:45.980: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:46.983: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:48.134: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:49.135: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:50.159: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:51.161: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:52.127: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:53.129: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:54.122: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:55.124: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:56.241: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:57.243: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:26:58.302: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:26:59.306: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:00.549: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:01.551: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:02.766: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:03.768: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:04.814: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:05.816: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:06.872: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:07.875: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:08.914: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:09.917: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:11.036: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:12.039: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:13.183: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:14.184: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:15.711: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:16.713: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:18.307: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:19.309: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:20.595: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:21.599: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:23.064: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:24.066: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:26.317: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:27.320: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:29.229: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:30.230: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:31.512: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:32.516: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:33.740: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:34.742: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:35.879: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:36.883: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:38.201: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:39.202: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:40.279: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:41.281: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:42.229: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:43.230: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:27:44.116: INFO: stderr: "" STEP: verifying all expected jobs have a working target Jul 9 19:27:45.118: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-prometheus-9ggxp execpodts6g4 -- /bin/sh -c curl -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlci10b2tlbi1nbW16NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLXJlYWRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE3YTRlZGIzLTgzYjktMTFlOC04NGM2LTBhZjk2NzY4ZDU3ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpwcm9tZXRoZXVzLXJlYWRlciJ9.I7Tfj6MfIbWy1XD5yyNl9TA5mIhFW_lAwzupgAqZ_dCqHWzNSoiGQuM1oB521nap-7AxRwuMvYlHPHTKm07z3ntJQB6GFFThIjx965ExnOy9B1fHleads4rm1mtwP1QFmHEmkhMGK98QitmRf0rXJsEmAgk5H47uPh0gh2MZSms8y7ERvjM9OwQKUx5CYFjSwh5uTbGad6IrvVjvDrBcfNweo8cMgT1EHI2bU3fUedjPcj7Y_94rpAOKOQP8QFMZRAl0sX6CcDNe1VVvDuwx9yi7JhPMSbQDNU0HHgNB3hqJO8ov2HFx6ws2e52eCWxjs-oVzFj5Z0Gki2JJaDIZ_w' "https://prometheus.kube-system.svc:443/api/v1/targets"' Jul 9 19:28:36.234: INFO: stderr: "" STEP: verifying all expected jobs have a working target [AfterEach] [Feature:Prometheus][Conformance] Prometheus /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:28:37.350: INFO: namespace : e2e-test-prometheus-9ggxp api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:Prometheus][Conformance] Prometheus /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 STEP: Dumping a list of prepulled images on each node... 
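The repeated `curl` against `/api/v1/targets` above is polling until every expected scrape job has a healthy target. A minimal sketch of that "verifying all expected jobs have a working target" check, assuming the standard Prometheus `/api/v1/targets` response shape (the sample payload below is illustrative, not taken from this run):

```python
import json

def missing_jobs(targets_json, expected_jobs):
    """Return expected job names that have no active target with health 'up'."""
    data = json.loads(targets_json)
    up_jobs = {
        t["labels"].get("job")
        for t in data["data"]["activeTargets"]
        if t.get("health") == "up"
    }
    return [job for job in expected_jobs if job not in up_jobs]

# Illustrative response: one healthy job, one scraped but down.
sample = json.dumps({
    "status": "success",
    "data": {"activeTargets": [
        {"labels": {"job": "kubernetes-service-endpoints"}, "health": "up"},
        {"labels": {"job": "kubernetes-apiservers"}, "health": "down"},
    ]},
})
print(missing_jobs(sample, ["kubernetes-apiservers", "kubernetes-nodes"]))
# ['kubernetes-apiservers', 'kubernetes-nodes']
```

A non-empty result here corresponds to the `no match for map[job:...] with health up` errors the test reports when it gives up.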
Jul 9 19:28:45.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• Failure [288.062 seconds]
[Feature:Prometheus][Conformance] Prometheus
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus.go:30
  when installed to the cluster
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus.go:43
    should start and expose a secured proxy and unsecured metrics [Suite:openshift/conformance/parallel] [It]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus.go:44

    Expected
        <[]error | len:3, cap:4>: [
            { s: "no match for map[job:kubernetes-apiservers] with health up", },
            { s: "no match for map[job:kubernetes-nodes] with health up", },
            { s: "no match for map[job:kubernetes-cadvisor] with health up", },
        ]
    to be empty

    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus.go:117
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:28:39.456: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a
namespace api object Jul 9 19:28:41.700: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-vvlbr STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 9 19:28:42.636: INFO: Waiting up to 5m0s for pod "pod-f67bfcfb-83e8-11e8-881a-28d244b00276" in namespace "e2e-tests-emptydir-vvlbr" to be "success or failure" Jul 9 19:28:42.678: INFO: Pod "pod-f67bfcfb-83e8-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 42.125643ms Jul 9 19:28:44.781: INFO: Pod "pod-f67bfcfb-83e8-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145116153s Jul 9 19:28:46.826: INFO: Pod "pod-f67bfcfb-83e8-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.189465379s STEP: Saw pod success Jul 9 19:28:46.826: INFO: Pod "pod-f67bfcfb-83e8-11e8-881a-28d244b00276" satisfied condition "success or failure" Jul 9 19:28:46.866: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-f67bfcfb-83e8-11e8-881a-28d244b00276 container test-container: STEP: delete the pod Jul 9 19:28:46.967: INFO: Waiting for pod pod-f67bfcfb-83e8-11e8-881a-28d244b00276 to disappear Jul 9 19:28:47.006: INFO: Pod pod-f67bfcfb-83e8-11e8-881a-28d244b00276 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:28:47.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vvlbr" for this suite. Jul 9 19:28:53.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:28:56.171: INFO: namespace: e2e-tests-emptydir-vvlbr, resource: bindings, ignored listing per whitelist Jul 9 19:28:58.132: INFO: namespace e2e-tests-emptydir-vvlbr deletion completed in 11.078647718s • [SLOW TEST:18.676 seconds] [sig-storage] EmptyDir volumes /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ SSSS ------------------------------ [Feature:Builds][Conformance] s2i build with a root user image should create a root build and fail without a privileged SCC [Suite:openshift/conformance/parallel] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:48 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:Builds][Conformance] s2i build with a root user image /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:28:58.137: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:Builds][Conformance] s2i build with a root user image /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:29:00.530: INFO: configPath is now "/tmp/e2e-test-s2i-build-root-8sn6t-user.kubeconfig" Jul 9 19:29:00.530: INFO: The user is now "e2e-test-s2i-build-root-8sn6t-user" Jul 9 19:29:00.530: INFO: Creating project "e2e-test-s2i-build-root-8sn6t" Jul 9 19:29:00.728: INFO: Waiting on permissions in project "e2e-test-s2i-build-root-8sn6t" ... 
[BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:24 Jul 9 19:29:00.840: INFO: docker info output: Containers: 4 Running: 0 Paused: 0 Stopped: 4 Images: 4 Server Version: 1.13.1 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 20 Dirperm1 Supported: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1) runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f) init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574) Security Options: apparmor seccomp Profile: default Kernel Version: 4.4.0-128-generic Operating System: Ubuntu 16.04.4 LTS OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.495 GiB Name: yifan-coreos ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Username: yifan Registry: https://index.docker.io/v1/ Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false [JustBeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:28 STEP: waiting for builder service account STEP: creating a root build container Jul 9 19:29:00.983: INFO: Running 'oc new-build --config=/tmp/e2e-test-s2i-build-root-8sn6t-user.kubeconfig --namespace=e2e-test-s2i-build-root-8sn6t -D FROM centos/nodejs-6-centos7 USER 0 --name nodejsroot' --> Found Docker image 7e95117 (2 weeks old) from Docker Hub for "centos/nodejs-6-centos7" Node.js 6 --------- Node.js 6 available as container is a base platform for building and running various Node.js 6 applications and frameworks. 
Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. Tags: builder, nodejs, nodejs6 * An image stream will be created as "nodejs-6-centos7:latest" that will track the source image * A Docker build using a predefined Dockerfile will be created * The resulting image will be pushed to image stream "nodejsroot:latest" * Every time "nodejs-6-centos7:latest" changes a new build will be triggered --> Creating resources with label build=nodejsroot ... imagestream "nodejs-6-centos7" created imagestream "nodejsroot" created buildconfig "nodejsroot" created --> Success Build configuration "nodejsroot" created and build triggered. Run 'oc logs -f bc/nodejsroot' to stream the build progress. [It] should create a root build and fail without a privileged SCC [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:48 Jul 9 19:29:24.206: INFO: Running 'oc new-app --config=/tmp/e2e-test-s2i-build-root-8sn6t-user.kubeconfig --namespace=e2e-test-s2i-build-root-8sn6t nodejsroot~https://github.com/openshift/nodejs-ex --name nodejsfail' --> Found image 89b21e6 (16 seconds old) in image stream "e2e-test-s2i-build-root-8sn6t/nodejsroot" under tag "latest" for "nodejsroot" Node.js 6 --------- Node.js 6 available as container is a base platform for building and running various Node.js 6 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. 
Tags: builder, nodejs, nodejs6 * A source build using source code from https://github.com/openshift/nodejs-ex will be created * The resulting image will be pushed to image stream "nodejsfail:latest" * Use 'start-build' to trigger a new build * This image will be deployed in deployment config "nodejsfail" * Port 8080/tcp will be load balanced by service "nodejsfail" * Other containers can access this service through the hostname "nodejsfail" * WARNING: Image "e2e-test-s2i-build-root-8sn6t/nodejsroot:latest" runs as the 'root' user which may not be permitted by your cluster administrator --> Creating resources ... imagestream "nodejsfail" created buildconfig "nodejsfail" created deploymentconfig "nodejsfail" created service "nodejsfail" created --> Success Build scheduled, use 'oc logs -f bc/nodejsfail' to track its progress. Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose svc/nodejsfail' Run 'oc status' to view your app. 
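The `oc new-app` above schedules a build that the test expects to end in a `Failed` phase, since the image runs as root and no privileged SCC is granted. A hedged sketch of a WaitForABuild-style terminal-phase check, assuming builds expose a `status.phase` field as in the OpenShift Build API (`get_build` is a stand-in for an API lookup such as `oc get build -o json`):

```python
# Poll a build's status.phase until it reaches a terminal phase, then
# report success (None) or an error string, as WaitForABuild does.
TERMINAL_PHASES = {"Complete", "Failed", "Error", "Cancelled"}

def wait_for_build(get_build, name, attempts=60):
    for _ in range(attempts):
        phase = get_build(name)["status"]["phase"]
        if phase in TERMINAL_PHASES:
            if phase == "Complete":
                return None
            return 'The build "%s" status is "%s"' % (name, phase)
    return 'timed out waiting for build "%s"' % name

# Simulated lookup reporting the same outcome as this run.
err = wait_for_build(lambda n: {"status": {"phase": "Failed"}}, "nodejsfail-1")
print(err)  # The build "nodejsfail-1" status is "Failed"
```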
Jul 9 19:29:30.933: INFO: WaitForABuild returning with error: The build "nodejsfail-1" status is "Failed" [AfterEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:41 [AfterEach] [Feature:Builds][Conformance] s2i build with a root user image /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:29:31.117: INFO: namespace : e2e-test-s2i-build-root-8sn6t api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:Builds][Conformance] s2i build with a root user image /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:29:37.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:39.079 seconds] [Feature:Builds][Conformance] s2i build with a root user image /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:16 /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:23 should create a root build and fail without a privileged SCC [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:48 ------------------------------ [Feature:Builds] forcePull should affect pulling builder images ForcePull test case execution docker [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:114 [BeforeEach] [Top Level] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:Builds] forcePull should affect pulling builder images /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:28:39.544: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:Builds] forcePull should affect pulling builder images /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:28:41.861: INFO: configPath is now "/tmp/e2e-test-forcepull-2v9cq-user.kubeconfig" Jul 9 19:28:41.861: INFO: The user is now "e2e-test-forcepull-2v9cq-user" Jul 9 19:28:41.861: INFO: Creating project "e2e-test-forcepull-2v9cq" Jul 9 19:28:41.993: INFO: Waiting on permissions in project "e2e-test-forcepull-2v9cq" ... 
[BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:68 Jul 9 19:28:42.059: INFO: docker info output: Containers: 4 Running: 0 Paused: 0 Stopped: 4 Images: 4 Server Version: 1.13.1 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 20 Dirperm1 Supported: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1) runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f) init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574) Security Options: apparmor seccomp Profile: default Kernel Version: 4.4.0-128-generic Operating System: Ubuntu 16.04.4 LTS OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.495 GiB Name: yifan-coreos ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Username: yifan Registry: https://index.docker.io/v1/ Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false STEP: granting system:build-strategy-custom Jul 9 19:28:42.060: INFO: Running 'oc create --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-forcepull-2v9cq clusterrolebinding custombuildaccess-e2e-test-forcepull-2v9cq-user --clusterrole system:build-strategy-custom --user e2e-test-forcepull-2v9cq-user' clusterrolebinding.rbac.authorization.k8s.io "custombuildaccess-e2e-test-forcepull-2v9cq-user" created STEP: waiting for openshift/ruby:latest ImageStreamTag STEP: waiting for an is importer to import a tag latest into a stream ruby STEP: create application build configs for 3 strategies Jul 9 19:28:42.390: INFO: Running 'oc create 
--config=/tmp/e2e-test-forcepull-2v9cq-user.kubeconfig --namespace=e2e-test-forcepull-2v9cq -f /tmp/fixture-testdata-dir877664294/test/extended/testdata/forcepull-test.json' buildconfig.build.openshift.io "ruby-sample-build-tc" created buildconfig.build.openshift.io "ruby-sample-build-td" created buildconfig.build.openshift.io "ruby-sample-build-ts" created [JustBeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:99 STEP: waiting for builder service account [It] ForcePull test case execution docker [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:114 STEP: docker when force pull is true Jul 9 19:28:42.960: INFO: Running 'oc start-build --config=/tmp/e2e-test-forcepull-2v9cq-user.kubeconfig --namespace=e2e-test-forcepull-2v9cq ruby-sample-build-td -o=name' Jul 9 19:28:43.330: INFO: start-build output with args [ruby-sample-build-td -o=name]: Error> StdOut> build/ruby-sample-build-td-1 StdErr> Jul 9 19:28:43.331: INFO: Waiting for ruby-sample-build-td-1 to complete Jul 9 19:29:09.405: INFO: Done waiting for ruby-sample-build-td-1: util.BuildResult{BuildPath:"build/ruby-sample-build-td-1", BuildName:"ruby-sample-build-td-1", StartBuildStdErr:"", StartBuildStdOut:"build/ruby-sample-build-td-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc420424600), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e2960)} with error: Jul 9 19:29:09.405: INFO: Running 'oc logs --config=/tmp/e2e-test-forcepull-2v9cq-user.kubeconfig --namespace=e2e-test-forcepull-2v9cq -f build/ruby-sample-build-td-1 --timestamps' found pull image line 2018-07-10T02:28:47.980482324Z Pulling image 
docker-registry.default.svc:5000/openshift/ruby@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae ... Jul 9 19:29:09.818: INFO: Running 'oc start-build --config=/tmp/e2e-test-forcepull-2v9cq-user.kubeconfig --namespace=e2e-test-forcepull-2v9cq ruby-sample-build-td -o=name' Jul 9 19:29:10.142: INFO: start-build output with args [ruby-sample-build-td -o=name]: Error> StdOut> build/ruby-sample-build-td-2 StdErr> Jul 9 19:29:10.143: INFO: Waiting for ruby-sample-build-td-2 to complete Jul 9 19:29:36.217: INFO: Done waiting for ruby-sample-build-td-2: util.BuildResult{BuildPath:"build/ruby-sample-build-td-2", BuildName:"ruby-sample-build-td-2", StartBuildStdErr:"", StartBuildStdOut:"build/ruby-sample-build-td-2", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc4208fbb00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e2960)} with error: Jul 9 19:29:36.217: INFO: Running 'oc logs --config=/tmp/e2e-test-forcepull-2v9cq-user.kubeconfig --namespace=e2e-test-forcepull-2v9cq -f build/ruby-sample-build-td-2 --timestamps' found pull image line 2018-07-10T02:29:15.710460246Z Pulling image docker-registry.default.svc:5000/openshift/ruby@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae ... 
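The "found pull image line" messages above are how the test confirms that with `forcePull: true` the builder image is re-pulled on every build: it streams the timestamped build log and scans for a "Pulling image" entry. A minimal sketch of that scan (the sample log lines below are illustrative, with an abbreviated digest):

```python
import re

# Match "Pulling image <ref>" anywhere in a timestamped log line.
PULL_RE = re.compile(r"Pulling image\s+(\S+)")

def find_pulled_image(log_lines):
    """Return the first pulled image reference found in the logs, else None."""
    for line in log_lines:
        m = PULL_RE.search(line)
        if m:
            return m.group(1)
    return None

logs = [
    "2018-07-10T02:28:47.980482324Z Cloning source ...",
    "2018-07-10T02:28:47.980482324Z Pulling image "
    "docker-registry.default.svc:5000/openshift/ruby@sha256:abc ...",
]
print(find_pulled_image(logs))
# docker-registry.default.svc:5000/openshift/ruby@sha256:abc
```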
[AfterEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:87 Jul 9 19:29:36.697: INFO: Running 'oc delete --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-forcepull-2v9cq clusterrolebinding custombuildaccess-e2e-test-forcepull-2v9cq-user' clusterrolebinding.rbac.authorization.k8s.io "custombuildaccess-e2e-test-forcepull-2v9cq-user" deleted [AfterEach] [Feature:Builds] forcePull should affect pulling builder images /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:29:37.036: INFO: namespace : e2e-test-forcepull-2v9cq api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:Builds] forcePull should affect pulling builder images /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:29:43.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:63.572 seconds] [Feature:Builds] forcePull should affect pulling builder images /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:62 /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:66 ForcePull test case execution docker [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:114 ------------------------------ [Conformance][templates] templateinstance impersonation tests should pass impersonation deletion tests [Suite:openshift/conformance/parallel] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:352 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:29:37.218: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:29:39.619: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-user.kubeconfig" Jul 9 19:29:39.619: INFO: The user is now "e2e-test-templates-t6wbh-user" Jul 9 19:29:39.619: INFO: Creating project "e2e-test-templates-t6wbh" Jul 9 19:29:39.731: INFO: Waiting on permissions in project "e2e-test-templates-t6wbh" ... 
[BeforeEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:57 Jul 9 19:29:40.980: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-adminuser.kubeconfig" Jul 9 19:29:41.224: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-impersonateuser.kubeconfig" Jul 9 19:29:41.493: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-impersonatebygroupuser.kubeconfig" Jul 9 19:29:41.739: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-edituser1.kubeconfig" Jul 9 19:29:42.009: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-edituser2.kubeconfig" Jul 9 19:29:42.277: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-viewuser.kubeconfig" Jul 9 19:29:42.561: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-impersonatebygroupuser.kubeconfig" [It] should pass impersonation deletion tests [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:352 STEP: testing as system:admin user STEP: testing as e2e-test-templates-t6wbh-adminuser user Jul 9 19:29:42.928: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-adminuser.kubeconfig" STEP: testing as e2e-test-templates-t6wbh-impersonateuser user Jul 9 19:29:43.230: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-impersonateuser.kubeconfig" STEP: testing as e2e-test-templates-t6wbh-impersonatebygroupuser user Jul 9 19:29:43.545: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-impersonatebygroupuser.kubeconfig" STEP: testing as e2e-test-templates-t6wbh-edituser1 user Jul 9 19:29:43.867: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-edituser1.kubeconfig" STEP: testing as e2e-test-templates-t6wbh-edituser2 user Jul 9 19:29:44.162: INFO: configPath is now 
"/tmp/e2e-test-templates-t6wbh-edituser2.kubeconfig" STEP: testing as e2e-test-templates-t6wbh-viewuser user Jul 9 19:29:44.506: INFO: configPath is now "/tmp/e2e-test-templates-t6wbh-viewuser.kubeconfig" [AfterEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:29:44.736: INFO: namespace : e2e-test-templates-t6wbh api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:29:50.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:221 • [SLOW TEST:13.966 seconds] [Conformance][templates] templateinstance impersonation tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:27 should pass impersonation deletion tests [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:352 ------------------------------ [Feature:DeploymentConfig] deploymentconfigs when run iteratively [Conformance] should immediately start a new deployment [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:195 [BeforeEach] [Top Level] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:29:43.118: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:29:45.183: INFO: configPath is now "/tmp/e2e-test-cli-deployment-dsvs4-user.kubeconfig" Jul 9 19:29:45.183: INFO: The user is now "e2e-test-cli-deployment-dsvs4-user" Jul 9 19:29:45.183: INFO: Creating project "e2e-test-cli-deployment-dsvs4" Jul 9 19:29:45.333: INFO: Waiting on permissions in project "e2e-test-cli-deployment-dsvs4" ... 
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should immediately start a new deployment [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:195
Jul 9 19:29:45.453: INFO: Running 'oc set env --config=/tmp/e2e-test-cli-deployment-dsvs4-user.kubeconfig --namespace=e2e-test-cli-deployment-dsvs4 dc/deployment-simple TRY=ONCE'
STEP: by checking that the deployment config has the correct version
STEP: by checking that the second deployment exists
STEP: by checking that the first deployer was deleted and the second deployer exists
[AfterEach] when run iteratively [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:102
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:29:49.963: INFO: namespace : e2e-test-cli-deployment-dsvs4 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:04.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:21.000 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  when run iteratively [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:100
    should immediately start a new deployment [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:195
------------------------------
[Feature:Builds][pruning] prune builds based on settings in the buildconfig should prune completed builds based on the successfulBuildsHistoryLimit setting [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:63
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:25:50.441: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:25:52.111: INFO: configPath is now "/tmp/e2e-test-build-pruning-9nhxf-user.kubeconfig"
Jul 9 19:25:52.111: INFO: The user is now "e2e-test-build-pruning-9nhxf-user"
Jul 9 19:25:52.111: INFO: Creating project "e2e-test-build-pruning-9nhxf"
Jul 9 19:25:52.231: INFO: Waiting on permissions in project "e2e-test-build-pruning-9nhxf" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:37
Jul 9 19:25:52.287: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options: apparmor seccomp
 Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:41
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:25:52.424: INFO: Running scan #0
Jul 9 19:25:52.424: INFO: Checking language ruby
Jul 9 19:25:52.467: INFO: Checking tag 2.0
Jul 9 19:25:52.467: INFO: Checking tag 2.2
Jul 9 19:25:52.467: INFO: Checking tag 2.3
Jul 9 19:25:52.467: INFO: Checking tag 2.4
Jul 9 19:25:52.467: INFO: Checking tag 2.5
Jul 9 19:25:52.467: INFO: Checking tag latest
Jul 9 19:25:52.467: INFO: Checking language nodejs
Jul 9 19:25:52.511: INFO: Checking tag 4
Jul 9 19:25:52.511: INFO: Checking tag 6
Jul 9 19:25:52.511: INFO: Checking tag 8
Jul 9 19:25:52.511: INFO: Checking tag latest
Jul 9 19:25:52.511: INFO: Checking tag 0.10
Jul 9 19:25:52.511: INFO: Checking language perl
Jul 9 19:25:52.556: INFO: Checking tag latest
Jul 9 19:25:52.556: INFO: Checking tag 5.16
Jul 9 19:25:52.556: INFO: Checking tag 5.20
Jul 9 19:25:52.556: INFO: Checking tag 5.24
Jul 9 19:25:52.556: INFO: Checking language php
Jul 9 19:25:52.595: INFO: Checking tag 5.5
Jul 9 19:25:52.595: INFO: Checking tag 5.6
Jul 9 19:25:52.595: INFO: Checking tag 7.0
Jul 9 19:25:52.595: INFO: Checking tag 7.1
Jul 9 19:25:52.595: INFO: Checking tag latest
Jul 9 19:25:52.595: INFO: Checking language python
Jul 9 19:25:52.633: INFO: Checking tag 2.7
Jul 9 19:25:52.633: INFO: Checking tag 3.3
Jul 9 19:25:52.633: INFO: Checking tag 3.4
Jul 9 19:25:52.633: INFO: Checking tag 3.5
Jul 9 19:25:52.633: INFO: Checking tag 3.6
Jul 9 19:25:52.633: INFO: Checking tag latest
Jul 9 19:25:52.633: INFO: Checking language wildfly
Jul 9 19:25:52.681: INFO: Checking tag latest
Jul 9 19:25:52.681: INFO: Checking tag 10.0
Jul 9 19:25:52.681: INFO: Checking tag 10.1
Jul 9 19:25:52.681: INFO: Checking tag 11.0
Jul 9 19:25:52.681: INFO: Checking tag 12.0
Jul 9 19:25:52.681: INFO: Checking tag 8.1
Jul 9 19:25:52.681: INFO: Checking tag 9.0
Jul 9 19:25:52.681: INFO: Checking language mysql
Jul 9 19:25:52.733: INFO: Checking tag 5.5
Jul 9 19:25:52.733: INFO: Checking tag 5.6
Jul 9 19:25:52.733: INFO: Checking tag 5.7
Jul 9 19:25:52.733: INFO: Checking tag latest
Jul 9 19:25:52.733: INFO: Checking language postgresql
Jul 9 19:25:52.780: INFO: Checking tag 9.6
Jul 9 19:25:52.780: INFO: Checking tag latest
Jul 9 19:25:52.780: INFO: Checking tag 9.2
Jul 9 19:25:52.780: INFO: Checking tag 9.4
Jul 9 19:25:52.780: INFO: Checking tag 9.5
Jul 9 19:25:52.780: INFO: Checking language mongodb
Jul 9 19:25:52.828: INFO: Checking tag 2.4
Jul 9 19:25:52.828: INFO: Checking tag 2.6
Jul 9 19:25:52.828: INFO: Checking tag 3.2
Jul 9 19:25:52.828: INFO: Checking tag 3.4
Jul 9 19:25:52.828: INFO: Checking tag latest
Jul 9 19:25:52.828: INFO: Checking language jenkins
Jul 9 19:25:52.878: INFO: Checking tag 1
Jul 9 19:25:52.878: INFO: Checking tag 2
Jul 9 19:25:52.878: INFO: Checking tag latest
Jul 9 19:25:52.878: INFO: Success!
STEP: creating test image stream
Jul 9 19:25:52.878: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-9nhxf-user.kubeconfig --namespace=e2e-test-build-pruning-9nhxf -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-pruning/imagestream.yaml'
imagestream.image.openshift.io "myphp" created
[It] should prune completed builds based on the successfulBuildsHistoryLimit setting [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:63
STEP: creating test successful build config
Jul 9 19:25:53.209: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-9nhxf-user.kubeconfig --namespace=e2e-test-build-pruning-9nhxf -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-pruning/successful-build-config.yaml'
buildconfig.build.openshift.io "myphp" created
STEP: starting four test builds
Jul 9 19:25:53.493: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-9nhxf-user.kubeconfig --namespace=e2e-test-build-pruning-9nhxf myphp -o=name'
Jul 9 19:25:53.795: INFO: start-build output with args [myphp -o=name]:
Error>
StdOut> build/myphp-1
StdErr>
Jul 9 19:25:53.796: INFO: Waiting for myphp-1 to complete
Jul 9 19:26:39.893: INFO: Done waiting for myphp-1: util.BuildResult{BuildPath:"build/myphp-1", BuildName:"myphp-1", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc420ccac00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004def0)} with error:
Jul 9 19:26:39.893: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-9nhxf-user.kubeconfig --namespace=e2e-test-build-pruning-9nhxf myphp -o=name'
Jul 9 19:26:40.177: INFO: start-build output with args [myphp -o=name]:
Error>
StdOut> build/myphp-2
StdErr>
Jul 9 19:26:40.177: INFO: Waiting for myphp-2 to complete
Jul 9 19:27:41.270: INFO: Done waiting for myphp-2: util.BuildResult{BuildPath:"build/myphp-2", BuildName:"myphp-2", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-2", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421563b00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004def0)} with error:
Jul 9 19:27:41.270: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-9nhxf-user.kubeconfig --namespace=e2e-test-build-pruning-9nhxf myphp -o=name'
Jul 9 19:27:41.580: INFO: start-build output with args [myphp -o=name]:
Error>
StdOut> build/myphp-3
StdErr>
Jul 9 19:27:41.581: INFO: Waiting for myphp-3 to complete
Jul 9 19:28:22.677: INFO: Done waiting for myphp-3: util.BuildResult{BuildPath:"build/myphp-3", BuildName:"myphp-3", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-3", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc420abfb00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004def0)} with error:
Jul 9 19:28:22.677: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-9nhxf-user.kubeconfig --namespace=e2e-test-build-pruning-9nhxf myphp -o=name'
Jul 9 19:28:23.062: INFO: start-build output with args [myphp -o=name]:
Error>
StdOut> build/myphp-4
StdErr>
Jul 9 19:28:23.062: INFO: Waiting for myphp-4 to complete
Jul 9 19:29:09.159: INFO: Done waiting for myphp-4: util.BuildResult{BuildPath:"build/myphp-4", BuildName:"myphp-4", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-4", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421878300), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004def0)} with error:
STEP: waiting up to one minute for pruning to complete
timed out waiting for the condition
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:56
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:30:09.403: INFO: namespace : e2e-test-build-pruning-9nhxf api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:15.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:265.040 seconds]
[Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:21
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:35
    should prune completed builds based on the successfulBuildsHistoryLimit setting [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:63
------------------------------
S
------------------------------
[sig-storage] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:30:15.483: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:30:16.866: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-l9s4m
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-map-2f27a1c3-83e9-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:30:17.725: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2f2c3054-83e9-11e8-bd2e-28d244b00276" in namespace "e2e-tests-projected-l9s4m" to be "success or failure"
Jul 9 19:30:17.758: INFO: Pod "pod-projected-secrets-2f2c3054-83e9-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.495344ms
Jul 9 19:30:19.787: INFO: Pod "pod-projected-secrets-2f2c3054-83e9-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061944559s
STEP: Saw pod success
Jul 9 19:30:19.787: INFO: Pod "pod-projected-secrets-2f2c3054-83e9-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:30:19.816: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-secrets-2f2c3054-83e9-11e8-bd2e-28d244b00276 container projected-secret-volume-test:
STEP: delete the pod
Jul 9 19:30:19.883: INFO: Waiting for pod pod-projected-secrets-2f2c3054-83e9-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:30:19.910: INFO: Pod pod-projected-secrets-2f2c3054-83e9-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:19.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l9s4m" for this suite.
Jul 9 19:30:26.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:30:29.306: INFO: namespace: e2e-tests-projected-l9s4m, resource: bindings, ignored listing per whitelist
Jul 9 19:30:29.521: INFO: namespace e2e-tests-projected-l9s4m deletion completed in 9.568484707s
• [SLOW TEST:14.039 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:407
[BeforeEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:30:04.118: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-services1-nxl8f
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:30:06.356: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-services2-xlqn8
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:31
Jul 9 19:30:08.699: INFO: Only one node is available in this environment ([ip-10-0-130-54.us-west-2.compute.internal] out of [ip-10-0-130-54.us-west-2.compute.internal])
[AfterEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:08.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-services1-nxl8f" for this suite.
Jul 9 19:30:14.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:30:16.679: INFO: namespace: e2e-tests-net-services1-nxl8f, resource: bindings, ignored listing per whitelist
Jul 9 19:30:19.280: INFO: namespace e2e-tests-net-services1-nxl8f deletion completed in 10.52743131s
[AfterEach] when using a plugin that does not isolate namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:19.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-services2-xlqn8" for this suite.
Jul 9 19:30:25.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:30:27.649: INFO: namespace: e2e-tests-net-services2-xlqn8, resource: bindings, ignored listing per whitelist
Jul 9 19:30:29.812: INFO: namespace e2e-tests-net-services2-xlqn8 deletion completed in 10.489192197s
S [SKIPPING] [25.694 seconds]
[Area:Networking] services
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10
  when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:406
    should allow connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] [It]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:31
    Jul 9 19:30:08.699: Only one node is available in this environment ([ip-10-0-130-54.us-west-2.compute.internal] out of [ip-10-0-130-54.us-west-2.compute.internal])
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
S
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:28:45.469: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:28:47.194: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-s6vx7
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:28:47.927: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating configMap with name cm-test-opt-del-f9ac72f7-83e8-11e8-8fe2-28d244b00276
STEP: Creating configMap with name cm-test-opt-upd-f9ac7364-83e8-11e8-8fe2-28d244b00276
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f9ac72f7-83e8-11e8-8fe2-28d244b00276
STEP: Updating configmap cm-test-opt-upd-f9ac7364-83e8-11e8-8fe2-28d244b00276
STEP: Creating configMap with name cm-test-opt-create-f9ac739f-83e8-11e8-8fe2-28d244b00276
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:10.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-s6vx7" for this suite.
Jul 9 19:30:32.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:30:34.960: INFO: namespace: e2e-tests-configmap-s6vx7, resource: bindings, ignored listing per whitelist
Jul 9 19:30:36.288: INFO: namespace e2e-tests-configmap-s6vx7 deletion completed in 25.75514973s
• [SLOW TEST:110.818 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:30:36.291: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:36.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:36.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
[Area:Networking] network isolation
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10
  when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
    should prevent communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] [BeforeEach]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:32
    Jul 9 19:30:36.291: This plugin does not isolate namespaces by default.
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[Feature:DeploymentConfig] deploymentconfigs initially [Conformance] should not deploy if pods never transition to ready [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:930
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:29:51.185: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:29:53.380: INFO: configPath is now "/tmp/e2e-test-cli-deployment-5zqwh-user.kubeconfig"
Jul 9 19:29:53.380: INFO: The user is now "e2e-test-cli-deployment-5zqwh-user"
Jul 9 19:29:53.380: INFO: Creating project "e2e-test-cli-deployment-5zqwh"
Jul 9 19:29:53.542: INFO: Waiting on permissions in project "e2e-test-cli-deployment-5zqwh" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should not deploy if pods never transition to ready [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:930
STEP: waiting for the deployment to fail
[AfterEach] initially [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:926
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:30:28.758: INFO: namespace : e2e-test-cli-deployment-5zqwh api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:50.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:59.667 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  initially [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:924
    should not deploy if pods never transition to ready [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:930
------------------------------
[sig-api-machinery] Downward API should provide host IP as an env var [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:30:36.298: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:30:38.041: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-s598m
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward api env vars
Jul 9 19:30:38.751: INFO: Waiting up to 5m0s for pod "downward-api-3bb4c65b-83e9-11e8-8fe2-28d244b00276" in namespace "e2e-tests-downward-api-s598m" to be "success or failure"
Jul 9 19:30:38.783: INFO: Pod "downward-api-3bb4c65b-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.239622ms
Jul 9 19:30:40.818: INFO: Pod "downward-api-3bb4c65b-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066104527s
Jul 9 19:30:42.855: INFO: Pod "downward-api-3bb4c65b-83e9-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104047678s
STEP: Saw pod success
Jul 9 19:30:42.856: INFO: Pod "downward-api-3bb4c65b-83e9-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:30:42.891: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downward-api-3bb4c65b-83e9-11e8-8fe2-28d244b00276 container dapi-container:
STEP: delete the pod
Jul 9 19:30:42.966: INFO: Waiting for pod downward-api-3bb4c65b-83e9-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:30:43.004: INFO: Pod downward-api-3bb4c65b-83e9-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:43.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-s598m" for this suite.
Jul 9 19:30:49.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:30:51.237: INFO: namespace: e2e-tests-downward-api-s598m, resource: bindings, ignored listing per whitelist
Jul 9 19:30:52.904: INFO: namespace e2e-tests-downward-api-s598m deletion completed in 9.861169902s
• [SLOW TEST:16.606 seconds]
[sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:37
  should provide host IP as an env var [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that implements NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:431
Jul 9 19:30:52.906: INFO: This plugin does not implement NetworkPolicy.
[AfterEach] when using a plugin that implements NetworkPolicy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:30:52.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] NetworkPolicy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48 when using a plugin that implements NetworkPolicy /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:430 should support a 'default-deny' policy [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:52 Jul 9 19:30:52.906: This plugin does not implement NetworkPolicy. 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[Feature:Builds][Conformance] build without output image building from templates should create an image from a docker template without an output image reference defined [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:36
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance] build without output image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:30:29.523: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance] build without output image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:30:31.302: INFO: configPath is now "/tmp/e2e-test-build-no-outputname-z5d5c-user.kubeconfig"
Jul 9 19:30:31.302: INFO: The user is now "e2e-test-build-no-outputname-z5d5c-user"
Jul 9 19:30:31.302: INFO: Creating project "e2e-test-build-no-outputname-z5d5c"
Jul 9 19:30:31.464: INFO: Waiting on permissions in project "e2e-test-build-no-outputname-z5d5c" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:22
Jul 9 19:30:31.517: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[It] should create an image from a docker template without an output image reference defined [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:36
Jul 9 19:30:31.517: INFO: Running 'oc create --config=/tmp/e2e-test-build-no-outputname-z5d5c-user.kubeconfig --namespace=e2e-test-build-no-outputname-z5d5c -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/test-docker-no-outputname.json'
buildconfig.build.openshift.io "test-docker" created
STEP: expecting build to pass without an output image reference specified
Jul 9 19:30:31.820: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-no-outputname-z5d5c-user.kubeconfig --namespace=e2e-test-build-no-outputname-z5d5c test-docker -o=name'
Jul 9 19:30:32.202: INFO: start-build output with args [test-docker -o=name]: Error> StdOut> build/test-docker-1 StdErr>
Jul 9 19:30:32.202: INFO: Waiting for test-docker-1 to complete
Jul 9 19:30:53.279: INFO: Done waiting for test-docker-1: util.BuildResult{BuildPath:"build/test-docker-1", BuildName:"test-docker-1", StartBuildStdErr:"", StartBuildStdOut:"build/test-docker-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc4219c4300), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4210492c0)} with error:
STEP: verifying the build test-docker-1 output
Jul 9 19:30:53.279: INFO: Running 'oc logs --config=/tmp/e2e-test-build-no-outputname-z5d5c-user.kubeconfig --namespace=e2e-test-build-no-outputname-z5d5c -f build/test-docker-1 --timestamps'
Build log:
2018-07-10T02:30:33.4936115Z I0710 02:30:33.493342 1 builder.go:82] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-docker-1","namespace":"e2e-test-build-no-outputname-z5d5c","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-no-outputname-z5d5c/builds/test-docker-1","uid":"37d077a5-83e9-11e8-aa51-0af96768d57e","resourceVersion":"87930","creationTimestamp":"2018-07-10T02:30:32Z","labels":{"buildconfig":"test-docker","name":"test-docker","openshift.io/build-config.name":"test-docker","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test-docker","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test-docker","uid":"3790b4e0-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/openshift/ruby-hello-world"}},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"centos/ruby-22-centos7"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","config":{"kind":"BuildConfig","namespace":"e2e-test-build-no-outputname-z5d5c","name":"test-docker"},"output":{}}}
2018-07-10T02:30:33.494138196Z Cloning "https://github.com/openshift/ruby-hello-world" ...
2018-07-10T02:30:33.494222547Z I0710 02:30:33.494114 1 source.go:207] git ls-remote --heads https://github.com/openshift/ruby-hello-world
2018-07-10T02:30:33.494236163Z I0710 02:30:33.494149 1 repository.go:388] Executing git ls-remote --heads https://github.com/openshift/ruby-hello-world
2018-07-10T02:30:33.721243086Z I0710 02:30:33.721118 1 source.go:207] cf1fa898d2a78685ccde72f14b4922b474f73cd1 refs/heads/beta2
2018-07-10T02:30:33.721273473Z 2602ace61490de0513dfbd7c7de949356cf9bd17 refs/heads/beta3
2018-07-10T02:30:33.721280611Z 394e0f7c0446d65d163ecae9cf5b559ad60de6dd refs/heads/beta4
2018-07-10T02:30:33.721286371Z 11e9bbac1dcf5a06df07f5a6ab893a3cb9448011 refs/heads/blog_part1
2018-07-10T02:30:33.721291851Z 5619f11232c0a623f7da419438539335d49acfa3 refs/heads/config
2018-07-10T02:30:33.721297973Z 7ccd3242c49c3868195ca9400a539fa611111096 refs/heads/master
2018-07-10T02:30:33.721303628Z 9f70e0daf56b57d7f3cc012020df06ba7f914d0f refs/heads/revert-64-feature/fix-for-ruby-2.5-compatibility
2018-07-10T02:30:33.721309787Z ffa3f8596f3f82c0ee224f1b1d0c23102b1ad1f1 refs/heads/revert-66-feature/fix-for-ruby-2.5-compatibility-with-ci
2018-07-10T02:30:33.721316036Z d71bdd56df54d7400e1f72dc0929280e43627138 refs/heads/revert-69-gemfile
2018-07-10T02:30:33.72132198Z faccd39c6857edb7a3015cc6837fb347613f23c3 refs/heads/undo
2018-07-10T02:30:33.7213281Z I0710 02:30:33.721148 1 source.go:64] Cloning source from https://github.com/openshift/ruby-hello-world
2018-07-10T02:30:33.721380142Z I0710 02:30:33.721191 1 repository.go:388] Executing git clone --recursive --depth=1 https://github.com/openshift/ruby-hello-world /tmp/build/inputs
2018-07-10T02:30:34.134768751Z I0710 02:30:34.134629 1 repository.go:388] Executing git rev-parse --abbrev-ref HEAD
2018-07-10T02:30:34.135970758Z I0710 02:30:34.135895 1 repository.go:388] Executing git rev-parse --verify HEAD
2018-07-10T02:30:34.137131457Z I0710 02:30:34.137053 1 repository.go:388] Executing git --no-pager show -s --format=%an HEAD
2018-07-10T02:30:34.138565313Z I0710 02:30:34.138487 1 repository.go:388] Executing git --no-pager show -s --format=%ae HEAD
2018-07-10T02:30:34.139962567Z I0710 02:30:34.139855 1 repository.go:388] Executing git --no-pager show -s --format=%cn HEAD
2018-07-10T02:30:34.14128393Z I0710 02:30:34.141178 1 repository.go:388] Executing git --no-pager show -s --format=%ce HEAD
2018-07-10T02:30:34.142685099Z I0710 02:30:34.142599 1 repository.go:388] Executing git --no-pager show -s --format=%ad HEAD
2018-07-10T02:30:34.14410529Z I0710 02:30:34.144019 1 repository.go:388] Executing git --no-pager show -s --format=%<(80,trunc)%s HEAD
2018-07-10T02:30:34.145540799Z I0710 02:30:34.145461 1 repository.go:388] Executing git config --get remote.origin.url
2018-07-10T02:30:34.150400623Z Commit: 7ccd3242c49c3868195ca9400a539fa611111096 (Merge pull request #71 from bparees/gemfile2)
2018-07-10T02:30:34.150508164Z Author: Ben Parees
2018-07-10T02:30:34.150560871Z Date: Fri Feb 9 18:24:07 2018 -0500
2018-07-10T02:30:34.15065618Z I0710 02:30:34.150606 1 repository.go:388] Executing git rev-parse --abbrev-ref HEAD
2018-07-10T02:30:34.153742601Z I0710 02:30:34.153662 1 repository.go:388] Executing git rev-parse --verify HEAD
2018-07-10T02:30:34.154808988Z I0710 02:30:34.154724 1 repository.go:388] Executing git --no-pager show -s --format=%an HEAD
2018-07-10T02:30:34.156086109Z I0710 02:30:34.155986 1 repository.go:388] Executing git --no-pager show -s --format=%ae HEAD
2018-07-10T02:30:34.157519866Z I0710 02:30:34.157443 1 repository.go:388] Executing git --no-pager show -s --format=%cn HEAD
2018-07-10T02:30:34.15886129Z I0710 02:30:34.158783 1 repository.go:388] Executing git --no-pager show -s --format=%ce HEAD
2018-07-10T02:30:34.160335012Z I0710 02:30:34.160256 1 repository.go:388] Executing git --no-pager show -s --format=%ad HEAD
2018-07-10T02:30:34.16173438Z I0710 02:30:34.161656 1 repository.go:388] Executing git --no-pager show -s --format=%<(80,trunc)%s HEAD
2018-07-10T02:30:34.163074948Z I0710 02:30:34.162969 1 repository.go:388] Executing git config --get remote.origin.url
2018-07-10T02:30:35.704263918Z I0710 02:30:35.703965 1 builder.go:82] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-docker-1","namespace":"e2e-test-build-no-outputname-z5d5c","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-no-outputname-z5d5c/builds/test-docker-1","uid":"37d077a5-83e9-11e8-aa51-0af96768d57e","resourceVersion":"87930","creationTimestamp":"2018-07-10T02:30:32Z","labels":{"buildconfig":"test-docker","name":"test-docker","openshift.io/build-config.name":"test-docker","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test-docker","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test-docker","uid":"3790b4e0-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/openshift/ruby-hello-world"}},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"centos/ruby-22-centos7"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","config":{"kind":"BuildConfig","namespace":"e2e-test-build-no-outputname-z5d5c","name":"test-docker"},"output":{}}}
2018-07-10T02:30:35.704741081Z I0710 02:30:35.704646 1 builder.go:289] Checking for presence of a Dockerfile
2018-07-10T02:30:35.704865549Z I0710 02:30:35.704798 1 source.go:119] Found git source info: git.SourceInfo{SourceInfo:git.SourceInfo{Ref:"master", CommitID:"7ccd3242c49c3868195ca9400a539fa611111096", Date:"Fri Feb 9 18:24:07 2018 -0500", AuthorName:"Ben Parees", AuthorEmail:"bparees@users.noreply.github.com", CommitterName:"GitHub", CommitterEmail:"noreply@github.com", Message:"Merge pull request #71 from bparees/gemfile2", Location:"https://github.com/openshift/ruby-hello-world", ContextDir:""}}
2018-07-10T02:30:35.705485488Z I0710 02:30:35.705401 1 source.go:123] Replacing dockerfile
2018-07-10T02:30:35.705500284Z FROM centos/ruby-22-centos7
2018-07-10T02:30:35.705504908Z USER default
2018-07-10T02:30:35.705508527Z EXPOSE 8080
2018-07-10T02:30:35.705512022Z ENV RACK_ENV production
2018-07-10T02:30:35.705515582Z ENV RAILS_ENV production
2018-07-10T02:30:35.705518972Z COPY . /opt/app-root/src/
2018-07-10T02:30:35.705522422Z RUN scl enable rh-ruby22 "bundle install"
2018-07-10T02:30:35.705525995Z CMD ["scl", "enable", "rh-ruby22", "./run.sh"]
2018-07-10T02:30:35.705530092Z
2018-07-10T02:30:35.705533524Z USER root
2018-07-10T02:30:35.705536981Z RUN chmod og+rw /opt/app-root/src/db
2018-07-10T02:30:35.705540455Z USER default
2018-07-10T02:30:35.705543783Z
2018-07-10T02:30:35.705547237Z with:
2018-07-10T02:30:35.705550687Z FROM centos/ruby-22-centos7
2018-07-10T02:30:35.705554524Z ENV "BUILD_LOGLEVEL"="5"
2018-07-10T02:30:35.705558085Z USER default
2018-07-10T02:30:35.705561482Z EXPOSE 8080
2018-07-10T02:30:35.705564842Z ENV RACK_ENV=production
2018-07-10T02:30:35.705576085Z ENV RAILS_ENV=production
2018-07-10T02:30:35.705580014Z COPY . /opt/app-root/src/
2018-07-10T02:30:35.705583517Z RUN scl enable rh-ruby22 "bundle install"
2018-07-10T02:30:35.705587098Z CMD ["scl","enable","rh-ruby22","./run.sh"]
2018-07-10T02:30:35.705590962Z USER root
2018-07-10T02:30:35.705594363Z RUN chmod og+rw /opt/app-root/src/db
2018-07-10T02:30:35.705597797Z USER default
2018-07-10T02:30:35.705601271Z ENV "OPENSHIFT_BUILD_NAME"="test-docker-1" "OPENSHIFT_BUILD_NAMESPACE"="e2e-test-build-no-outputname-z5d5c" "OPENSHIFT_BUILD_SOURCE"="https://github.com/openshift/ruby-hello-world" "OPENSHIFT_BUILD_COMMIT"="7ccd3242c49c3868195ca9400a539fa611111096"
2018-07-10T02:30:35.705605835Z LABEL "io.openshift.build.commit.author"="Ben Parees \u003cbparees@users.noreply.github.com\u003e" "io.openshift.build.commit.date"="Fri Feb 9 18:24:07 2018 -0500" "io.openshift.build.commit.id"="7ccd3242c49c3868195ca9400a539fa611111096" "io.openshift.build.commit.message"="Merge pull request #71 from bparees/gemfile2" "io.openshift.build.commit.ref"="master" "io.openshift.build.name"="test-docker-1" "io.openshift.build.namespace"="e2e-test-build-no-outputname-z5d5c" "io.openshift.build.source-location"="https://github.com/openshift/ruby-hello-world"
2018-07-10T02:30:36.656528763Z I0710 02:30:36.656270 1 builder.go:82] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-docker-1","namespace":"e2e-test-build-no-outputname-z5d5c","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-no-outputname-z5d5c/builds/test-docker-1","uid":"37d077a5-83e9-11e8-aa51-0af96768d57e","resourceVersion":"87930","creationTimestamp":"2018-07-10T02:30:32Z","labels":{"buildconfig":"test-docker","name":"test-docker","openshift.io/build-config.name":"test-docker","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test-docker","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test-docker","uid":"3790b4e0-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/openshift/ruby-hello-world"}},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"centos/ruby-22-centos7"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","config":{"kind":"BuildConfig","namespace":"e2e-test-build-no-outputname-z5d5c","name":"test-docker"},"output":{}}}
2018-07-10T02:30:36.657091763Z I0710 02:30:36.657003 1 util_linux.go:96] found cgroup parent /kubepods/besteffort/pod37dd6e0f-83e9-11e8-84c6-0af96768d57e
2018-07-10T02:30:36.65717724Z I0710 02:30:36.657046 1 builder.go:223] Running build with cgroup limits: api.CGroupLimits{MemoryLimitBytes:92233720368547, CPUShares:0, CPUPeriod:0, CPUQuota:0, MemorySwap:92233720368547, Parent:"/kubepods/besteffort/pod37dd6e0f-83e9-11e8-84c6-0af96768d57e"}
2018-07-10T02:30:36.657193404Z I0710 02:30:36.657083 1 builder.go:240] Starting Docker build from build config test-docker-1 ...
2018-07-10T02:30:36.660542941Z I0710 02:30:36.660447 1 docker.go:347] container type=
2018-07-10T02:30:36.660629179Z I0710 02:30:36.660572 1 docker.go:385] Invoking Docker build to create "temp.builder.openshift.io/e2e-test-build-no-outputname-z5d5c/test-docker-1:7e8b24fb"
2018-07-10T02:30:36.660761209Z I0710 02:30:36.660695 1 tar.go:217] Adding "/tmp/build/inputs" to tar ...
2018-07-10T02:30:36.662440754Z I0710 02:30:36.662336 1 tar.go:312] Adding to tar: /tmp/build/inputs/.gitignore as .gitignore
2018-07-10T02:30:36.663452559Z I0710 02:30:36.663359 1 tar.go:312] Adding to tar: /tmp/build/inputs/.s2i as .s2i
2018-07-10T02:30:36.663579098Z I0710 02:30:36.663507 1 tar.go:312] Adding to tar: /tmp/build/inputs/.s2i/bin as .s2i/bin
2018-07-10T02:30:36.663658432Z I0710 02:30:36.663612 1 tar.go:312] Adding to tar: /tmp/build/inputs/.s2i/bin/README as .s2i/bin/README
2018-07-10T02:30:36.663798323Z I0710 02:30:36.663751 1 tar.go:312] Adding to tar: /tmp/build/inputs/.s2i/environment as .s2i/environment
2018-07-10T02:30:36.663945332Z I0710 02:30:36.663897 1 tar.go:312] Adding to tar: /tmp/build/inputs/.travis.yml as .travis.yml
2018-07-10T02:30:36.664118859Z I0710 02:30:36.664086 1 tar.go:312] Adding to tar: /tmp/build/inputs/Dockerfile as Dockerfile
2018-07-10T02:30:36.664259106Z I0710 02:30:36.664215 1 tar.go:312] Adding to tar: /tmp/build/inputs/Gemfile as Gemfile
2018-07-10T02:30:36.664417096Z I0710 02:30:36.664364 1 tar.go:312] Adding to tar: /tmp/build/inputs/Gemfile.lock as Gemfile.lock
2018-07-10T02:30:36.664635348Z I0710 02:30:36.664588 1 tar.go:312] Adding to tar: /tmp/build/inputs/README.md as README.md
2018-07-10T02:30:36.664777066Z I0710 02:30:36.664731 1 tar.go:312] Adding to tar: /tmp/build/inputs/Rakefile as Rakefile
2018-07-10T02:30:36.665668099Z I0710 02:30:36.664878 1 tar.go:312] Adding to tar: /tmp/build/inputs/app.rb as app.rb
2018-07-10T02:30:36.665682108Z I0710 02:30:36.665036 1 tar.go:312] Adding to tar: /tmp/build/inputs/config as config
2018-07-10T02:30:36.665688721Z I0710 02:30:36.665127 1 tar.go:312] Adding to tar: /tmp/build/inputs/config/database.rb as config/database.rb
2018-07-10T02:30:36.665694566Z I0710 02:30:36.665253 1 tar.go:312] Adding to tar: /tmp/build/inputs/config/database.yml as config/database.yml
2018-07-10T02:30:36.665700355Z I0710 02:30:36.665360 1 tar.go:312] Adding to tar: /tmp/build/inputs/config.ru as config.ru
2018-07-10T02:30:36.665705716Z I0710 02:30:36.665488 1 tar.go:312] Adding to tar: /tmp/build/inputs/db as db
2018-07-10T02:30:36.665711551Z I0710 02:30:36.665596 1 tar.go:312] Adding to tar: /tmp/build/inputs/db/migrate as db/migrate
2018-07-10T02:30:36.665726395Z I0710 02:30:36.665701 1 tar.go:312] Adding to tar: /tmp/build/inputs/db/migrate/20141102191902_create_key_pair.rb as db/migrate/20141102191902_create_key_pair.rb
2018-07-10T02:30:36.665882146Z I0710 02:30:36.665802 1 tar.go:312] Adding to tar: /tmp/build/inputs/models.rb as models.rb
2018-07-10T02:30:36.666174056Z I0710 02:30:36.665934 1 tar.go:312] Adding to tar: /tmp/build/inputs/run.sh as run.sh
2018-07-10T02:30:36.666186594Z I0710 02:30:36.666095 1 tar.go:312] Adding to tar: /tmp/build/inputs/test as test
2018-07-10T02:30:36.666338905Z I0710 02:30:36.666200 1 tar.go:312] Adding to tar: /tmp/build/inputs/test/sample_test.rb as test/sample_test.rb
2018-07-10T02:30:36.66645798Z I0710 02:30:36.666348 1 tar.go:312] Adding to tar: /tmp/build/inputs/views as views
2018-07-10T02:30:36.666505751Z I0710 02:30:36.666458 1 tar.go:312] Adding to tar: /tmp/build/inputs/views/main.erb as views/main.erb
2018-07-10T02:30:36.718155718Z Step 1/14 : FROM centos/ruby-22-centos7
2018-07-10T02:30:36.719532079Z ---> e42d0dccf073
2018-07-10T02:30:36.71959659Z Step 2/14 : ENV "BUILD_LOGLEVEL"="5"
2018-07-10T02:30:36.720909571Z ---> Using cache
2018-07-10T02:30:36.720924508Z ---> 9b6c5431cbe4
2018-07-10T02:30:36.720931228Z Step 3/14 : USER default
2018-07-10T02:30:36.751864393Z ---> Running in 9a8e2dc54197
2018-07-10T02:30:36.867490588Z Removing intermediate container 9a8e2dc54197
2018-07-10T02:30:36.867510562Z ---> 79fadb51b3bd
2018-07-10T02:30:36.867517487Z Step 4/14 : EXPOSE 8080
2018-07-10T02:30:36.909086099Z ---> Running in 90ae69d32247
2018-07-10T02:30:36.993360882Z Removing intermediate container 90ae69d32247
2018-07-10T02:30:36.993380964Z ---> c91e63363f40
2018-07-10T02:30:36.993388123Z Step 5/14 : ENV RACK_ENV=production
2018-07-10T02:30:37.025904745Z ---> Running in 28f91c214a57
2018-07-10T02:30:37.155038315Z Removing intermediate container 28f91c214a57
2018-07-10T02:30:37.155056175Z ---> 32b136798da9
2018-07-10T02:30:37.155062763Z Step 6/14 : ENV RAILS_ENV=production
2018-07-10T02:30:37.186801789Z ---> Running in c0898becd96e
2018-07-10T02:30:37.294657783Z Removing intermediate container c0898becd96e
2018-07-10T02:30:37.294687467Z ---> e0ed5ddebc64
2018-07-10T02:30:37.294694281Z Step 7/14 : COPY . /opt/app-root/src/
2018-07-10T02:30:37.427668923Z ---> 675c088e76f9
2018-07-10T02:30:37.427724052Z Step 8/14 : RUN scl enable rh-ruby22 "bundle install"
2018-07-10T02:30:37.459821838Z ---> Running in cbd97e10f02c
2018-07-10T02:30:40.301549169Z Fetching gem metadata from https://rubygems.org/..........
2018-07-10T02:30:40.549153327Z Installing rake 12.3.0
2018-07-10T02:30:40.717001857Z Installing concurrent-ruby 1.0.5
2018-07-10T02:30:40.826211378Z Installing i18n 0.9.3
2018-07-10T02:30:40.920466479Z Installing minitest 5.11.3
2018-07-10T02:30:41.088328073Z Installing thread_safe 0.3.6
2018-07-10T02:30:41.353413568Z Installing tzinfo 1.2.5
2018-07-10T02:30:41.696679212Z Installing activesupport 5.1.4
2018-07-10T02:30:41.86878882Z Installing activemodel 5.1.4
2018-07-10T02:30:41.979410311Z Installing arel 8.0.0
2018-07-10T02:30:42.205308322Z Installing activerecord 5.1.4
2018-07-10T02:30:42.294454443Z Installing mustermann 1.0.1
2018-07-10T02:30:47.888148672Z Installing mysql2 0.4.10
2018-07-10T02:30:48.0715785Z Installing rack 2.0.4
2018-07-10T02:30:48.149561097Z Installing rack-protection 2.0.0
2018-07-10T02:30:48.25561431Z Installing tilt 2.0.8
2018-07-10T02:30:48.384223031Z Installing sinatra 2.0.0
2018-07-10T02:30:48.445067264Z Installing sinatra-activerecord 2.0.13
2018-07-10T02:30:48.44524702Z Using bundler 1.7.8
2018-07-10T02:30:48.446378228Z Your bundle is complete!
2018-07-10T02:30:48.446392704Z Use `bundle show [gemname]` to see where a bundled gem is installed.
2018-07-10T02:30:49.176207601Z Removing intermediate container cbd97e10f02c
2018-07-10T02:30:49.176228076Z ---> 752d4f69461b
2018-07-10T02:30:49.176234688Z Step 9/14 : CMD ["scl","enable","rh-ruby22","./run.sh"]
2018-07-10T02:30:49.23250105Z ---> Running in 65555ceee0d0
2018-07-10T02:30:49.321269802Z Removing intermediate container 65555ceee0d0
2018-07-10T02:30:49.321294218Z ---> 5b4fda607e09
2018-07-10T02:30:49.321301326Z Step 10/14 : USER root
2018-07-10T02:30:49.352984712Z ---> Running in fe28b9624c5e
2018-07-10T02:30:49.443994027Z Removing intermediate container fe28b9624c5e
2018-07-10T02:30:49.444056271Z ---> bc656aab9e12
2018-07-10T02:30:49.444067257Z Step 11/14 : RUN chmod og+rw /opt/app-root/src/db
2018-07-10T02:30:49.476106688Z ---> Running in 5745024daf14
2018-07-10T02:30:49.748376218Z Removing intermediate container 5745024daf14
2018-07-10T02:30:49.748396749Z ---> d19ea8feaef6
2018-07-10T02:30:49.748403677Z Step 12/14 : USER default
2018-07-10T02:30:49.778651951Z ---> Running in 715b004d247c
2018-07-10T02:30:49.868127651Z Removing intermediate container 715b004d247c
2018-07-10T02:30:49.868146586Z ---> ae0d171c8941
2018-07-10T02:30:49.868153526Z Step 13/14 : ENV "OPENSHIFT_BUILD_NAME"="test-docker-1" "OPENSHIFT_BUILD_NAMESPACE"="e2e-test-build-no-outputname-z5d5c" "OPENSHIFT_BUILD_SOURCE"="https://github.com/openshift/ruby-hello-world" "OPENSHIFT_BUILD_COMMIT"="7ccd3242c49c3868195ca9400a539fa611111096"
2018-07-10T02:30:49.897657266Z ---> Running in 610b97b98eaa
2018-07-10T02:30:49.984896502Z Removing intermediate container 610b97b98eaa
2018-07-10T02:30:49.984915678Z ---> bcfe64067755
2018-07-10T02:30:49.984922313Z Step 14/14 : LABEL "io.openshift.build.commit.author"="Ben Parees \u003cbparees@users.noreply.github.com\u003e" "io.openshift.build.commit.date"="Fri Feb 9 18:24:07 2018 -0500" "io.openshift.build.commit.id"="7ccd3242c49c3868195ca9400a539fa611111096" "io.openshift.build.commit.message"="Merge pull request #71 from bparees/gemfile2" "io.openshift.build.commit.ref"="master" "io.openshift.build.name"="test-docker-1" "io.openshift.build.namespace"="e2e-test-build-no-outputname-z5d5c" "io.openshift.build.source-location"="https://github.com/openshift/ruby-hello-world"
2018-07-10T02:30:50.04066242Z ---> Running in fb6283992801
2018-07-10T02:30:50.146079062Z Removing intermediate container fb6283992801
2018-07-10T02:30:50.146099189Z ---> 00bd90a2110a
2018-07-10T02:30:50.146648064Z Successfully built 00bd90a2110a
2018-07-10T02:30:50.154298204Z Successfully tagged temp.builder.openshift.io/e2e-test-build-no-outputname-z5d5c/test-docker-1:7e8b24fb
2018-07-10T02:30:50.242738424Z Build complete, no image push requested
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:26
[AfterEach] [Feature:Builds][Conformance] build without output image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:30:53.698: INFO: namespace : e2e-test-build-no-outputname-z5d5c api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance] build without output image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:59.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:30.241 seconds]
[Feature:Builds][Conformance] build without output image
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:12
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:20
building from templates
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:33
should create an image from a docker template without an output image reference defined [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:36
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:30:50.856: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:30:53.080: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-dcr58
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 9 19:30:53.864: INFO: Waiting up to 5m0s for pod "pod-44b5014d-83e9-11e8-881a-28d244b00276" in namespace "e2e-tests-emptydir-dcr58" to be "success or failure"
Jul 9 19:30:53.906: INFO: Pod "pod-44b5014d-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 42.397355ms
Jul 9 19:30:55.958: INFO: Pod "pod-44b5014d-83e9-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.093831679s
STEP: Saw pod success
Jul 9 19:30:55.958: INFO: Pod "pod-44b5014d-83e9-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:30:56.026: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-44b5014d-83e9-11e8-881a-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:30:56.118: INFO: Waiting for pod pod-44b5014d-83e9-11e8-881a-28d244b00276 to disappear
Jul 9 19:30:56.159: INFO: Pod pod-44b5014d-83e9-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:30:56.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dcr58" for this suite.
Jul 9 19:31:02.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:31:04.586: INFO: namespace: e2e-tests-emptydir-dcr58, resource: bindings, ignored listing per whitelist
Jul 9 19:31:07.559: INFO: namespace e2e-tests-emptydir-dcr58 deletion completed in 11.346110356s
• [SLOW TEST:16.704 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:31:07.820: INFO: Could not check network plugin name: exit status 1. Assuming a non-OpenShift plugin
Jul 9 19:31:07.820: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:31:07.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:31:07.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.260 seconds]
[Area:Networking] services
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10
when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
should allow connections to services in the default namespace from a pod in another namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:48
Jul 9 19:31:07.820: This plugin does not isolate namespaces by default.
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[Feature:Builds] build with empty source started build should build even with an empty source in build config [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/nosrc.go:43
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds] build with empty source
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:31:07.825: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds] build with empty source
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:31:10.154: INFO: configPath is now "/tmp/e2e-test-cli-build-nosrc-mjgsf-user.kubeconfig"
Jul 9 19:31:10.154: INFO: The user is now "e2e-test-cli-build-nosrc-mjgsf-user"
Jul 9 19:31:10.154: INFO: Creating project "e2e-test-cli-build-nosrc-mjgsf"
Jul 9 19:31:10.391: INFO: Waiting on permissions in project "e2e-test-cli-build-nosrc-mjgsf" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/nosrc.go:24
Jul 9 19:31:10.443: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/nosrc.go:28
STEP: waiting for builder service account
Jul 9 19:31:10.576: INFO: Running 'oc create --config=/tmp/e2e-test-cli-build-nosrc-mjgsf-user.kubeconfig --namespace=e2e-test-cli-build-nosrc-mjgsf -f /tmp/fixture-testdata-dir574852015/test/extended/testdata/builds/test-nosrc-build.json'
imagestream.image.openshift.io "nosrc-stream" created
buildconfig.build.openshift.io "nosrc-build" created
[It] should build even with an empty source in build config [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/nosrc.go:43
STEP: starting the empty source build
Jul 9 19:31:10.861: INFO: Running 'oc start-build --config=/tmp/e2e-test-cli-build-nosrc-mjgsf-user.kubeconfig --namespace=e2e-test-cli-build-nosrc-mjgsf nosrc-build --from-dir=/tmp/fixture-testdata-dir574852015/test/extended/testdata/builds/test-build-app -o=name'
Jul 9 19:31:13.474: INFO: start-build output with args [nosrc-build --from-dir=/tmp/fixture-testdata-dir574852015/test/extended/testdata/builds/test-build-app -o=name]:
Error>
StdOut>
build/nosrc-build-1
StdErr>
Uploading directory "/tmp/fixture-testdata-dir574852015/test/extended/testdata/builds/test-build-app" as binary input for the build ...
Jul 9 19:31:13.475: INFO: Waiting for nosrc-build-1 to complete
Jul 9 19:31:24.578: INFO: Done waiting for nosrc-build-1: util.BuildResult{BuildPath:"build/nosrc-build-1", BuildName:"nosrc-build-1", StartBuildStdErr:"Uploading directory \"/tmp/fixture-testdata-dir574852015/test/extended/testdata/builds/test-build-app\" as binary input for the build ...", StartBuildStdOut:"build/nosrc-build-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421a5e300), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004d3b0)} with error:
STEP: verifying the status of "build/nosrc-build-1"
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/nosrc.go:35
[AfterEach] [Feature:Builds] build with empty source
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:31:24.702: INFO: namespace : e2e-test-cli-build-nosrc-mjgsf api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds] build with empty source
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:31:30.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:22.968 seconds]
[Feature:Builds] build with empty source
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/nosrc.go:14
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/nosrc.go:22
  started build
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/nosrc.go:42
    should build even with an empty source in build config [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/nosrc.go:43
------------------------------
SS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:30:29.814: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:30:31.769: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-container-probe-v4kvw
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-v4kvw
Jul 9 19:30:36.722: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-v4kvw
STEP: checking the pod's current state and verifying that restartCount is present
Jul 9 19:30:36.758: INFO: Initial restart count of pod liveness-exec is 0
Jul 9 19:31:23.816: INFO: Restart count of pod e2e-tests-container-probe-v4kvw/liveness-exec is now 1 (47.057956811s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:31:23.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-v4kvw" for this suite.
Jul 9 19:31:30.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:31:33.855: INFO: namespace: e2e-tests-container-probe-v4kvw, resource: bindings, ignored listing per whitelist
Jul 9 19:31:34.654: INFO: namespace e2e-tests-container-probe-v4kvw deletion completed in 10.740602886s
• [SLOW TEST:64.840 seconds]
[k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[Feature:Builds][pruning] prune builds based on settings in the buildconfig should prune failed builds based on the failedBuildsHistoryLimit setting [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:108
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:30:59.766: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:31:01.494: INFO: configPath is now "/tmp/e2e-test-build-pruning-ln5ms-user.kubeconfig"
Jul 9 19:31:01.494: INFO: The user is now "e2e-test-build-pruning-ln5ms-user"
Jul 9 19:31:01.494: INFO: Creating project "e2e-test-build-pruning-ln5ms"
Jul 9 19:31:01.655: INFO: Waiting on permissions in project "e2e-test-build-pruning-ln5ms" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:37
Jul 9 19:31:01.715: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:41
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:31:01.854: INFO: Running scan #0
Jul 9 19:31:01.854: INFO: Checking language ruby
Jul 9 19:31:01.893: INFO: Checking tag 2.4
Jul 9 19:31:01.893: INFO: Checking tag 2.5
Jul 9 19:31:01.893: INFO: Checking tag latest
Jul 9 19:31:01.893: INFO: Checking tag 2.0
Jul 9 19:31:01.893: INFO: Checking tag 2.2
Jul 9 19:31:01.893: INFO: Checking tag 2.3
Jul 9 19:31:01.893: INFO: Checking language nodejs
Jul 9 19:31:01.956: INFO: Checking tag latest
Jul 9 19:31:01.956: INFO: Checking tag 0.10
Jul 9 19:31:01.956: INFO: Checking tag 4
Jul 9 19:31:01.956: INFO: Checking tag 6
Jul 9 19:31:01.956: INFO: Checking tag 8
Jul 9 19:31:01.956: INFO: Checking language perl
Jul 9 19:31:02.001: INFO: Checking tag 5.20
Jul 9 19:31:02.001: INFO: Checking tag 5.24
Jul 9 19:31:02.001: INFO: Checking tag latest
Jul 9 19:31:02.001: INFO: Checking tag 5.16
Jul 9 19:31:02.001: INFO: Checking language php
Jul 9 19:31:02.051: INFO: Checking tag 7.0
Jul 9 19:31:02.051: INFO: Checking tag 7.1
Jul 9 19:31:02.051: INFO: Checking tag latest
Jul 9 19:31:02.051: INFO: Checking tag 5.5
Jul 9 19:31:02.051: INFO: Checking tag 5.6
Jul 9 19:31:02.051: INFO: Checking language python
Jul 9 19:31:02.094: INFO: Checking tag 2.7
Jul 9 19:31:02.094: INFO: Checking tag 3.3
Jul 9 19:31:02.094: INFO: Checking tag 3.4
Jul 9 19:31:02.094: INFO: Checking tag 3.5
Jul 9 19:31:02.094: INFO: Checking tag 3.6
Jul 9 19:31:02.094: INFO: Checking tag latest
Jul 9 19:31:02.094: INFO: Checking language wildfly
Jul 9 19:31:02.154: INFO: Checking tag latest
Jul 9 19:31:02.154: INFO: Checking tag 10.0
Jul 9 19:31:02.154: INFO: Checking tag 10.1
Jul 9 19:31:02.154: INFO: Checking tag 11.0
Jul 9 19:31:02.154: INFO: Checking tag 12.0
Jul 9 19:31:02.154: INFO: Checking tag 8.1
Jul 9 19:31:02.154: INFO: Checking tag 9.0
Jul 9 19:31:02.154: INFO: Checking language mysql
Jul 9 19:31:02.198: INFO: Checking tag 5.7
Jul 9 19:31:02.198: INFO: Checking tag latest
Jul 9 19:31:02.198: INFO: Checking tag 5.5
Jul 9 19:31:02.198: INFO: Checking tag 5.6
Jul 9 19:31:02.199: INFO: Checking language postgresql
Jul 9 19:31:02.258: INFO: Checking tag 9.5
Jul 9 19:31:02.258: INFO: Checking tag 9.6
Jul 9 19:31:02.258: INFO: Checking tag latest
Jul 9 19:31:02.258: INFO: Checking tag 9.2
Jul 9 19:31:02.258: INFO: Checking tag 9.4
Jul 9 19:31:02.258: INFO: Checking language mongodb
Jul 9 19:31:02.300: INFO: Checking tag 2.4
Jul 9 19:31:02.300: INFO: Checking tag 2.6
Jul 9 19:31:02.300: INFO: Checking tag 3.2
Jul 9 19:31:02.300: INFO: Checking tag 3.4
Jul 9 19:31:02.300: INFO: Checking tag latest
Jul 9 19:31:02.300: INFO: Checking language jenkins
Jul 9 19:31:02.432: INFO: Checking tag 1
Jul 9 19:31:02.432: INFO: Checking tag 2
Jul 9 19:31:02.432: INFO: Checking tag latest
Jul 9 19:31:02.432: INFO: Success!
STEP: creating test image stream
Jul 9 19:31:02.432: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-ln5ms-user.kubeconfig --namespace=e2e-test-build-pruning-ln5ms -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-pruning/imagestream.yaml'
imagestream.image.openshift.io "myphp" created
[It] should prune failed builds based on the failedBuildsHistoryLimit setting [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:108
STEP: creating test failed build config
Jul 9 19:31:02.817: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-ln5ms-user.kubeconfig --namespace=e2e-test-build-pruning-ln5ms -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-pruning/failed-build-config.yaml'
buildconfig.build.openshift.io "myphp" created
STEP: starting four test builds
Jul 9 19:31:03.124: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-ln5ms-user.kubeconfig --namespace=e2e-test-build-pruning-ln5ms myphp -o=name'
Jul 9 19:31:03.418: INFO: start-build output with args [myphp -o=name]:
Error>
StdOut>
build/myphp-1
StdErr>
Jul 9 19:31:03.418: INFO: Waiting for myphp-1 to complete
Jul 9 19:31:09.524: INFO: WaitForABuild returning with error: The build "myphp-1" status is "Failed"
Jul 9 19:31:09.524: INFO: Done waiting for myphp-1: util.BuildResult{BuildPath:"build/myphp-1", BuildName:"myphp-1", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc422112c00), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004def0)} with error: The build "myphp-1" status is "Failed"
Jul 9 19:31:09.524: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-ln5ms-user.kubeconfig --namespace=e2e-test-build-pruning-ln5ms myphp -o=name'
Jul 9 19:31:09.851: INFO: start-build output with args [myphp -o=name]:
Error>
StdOut>
build/myphp-2
StdErr>
Jul 9 19:31:09.851: INFO: Waiting for myphp-2 to complete
Jul 9 19:31:15.944: INFO: WaitForABuild returning with error: The build "myphp-2" status is "Failed"
Jul 9 19:31:15.944: INFO: Done waiting for myphp-2: util.BuildResult{BuildPath:"build/myphp-2", BuildName:"myphp-2", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-2", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc42144fb00), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004def0)} with error: The build "myphp-2" status is "Failed"
Jul 9 19:31:15.944: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-ln5ms-user.kubeconfig --namespace=e2e-test-build-pruning-ln5ms myphp -o=name'
Jul 9 19:31:16.254: INFO: start-build output with args [myphp -o=name]:
Error>
StdOut>
build/myphp-3
StdErr>
Jul 9 19:31:16.255: INFO: Waiting for myphp-3 to complete
Jul 9 19:31:22.400: INFO: WaitForABuild returning with error: The build "myphp-3" status is "Failed"
Jul 9 19:31:22.400: INFO: Done waiting for myphp-3: util.BuildResult{BuildPath:"build/myphp-3", BuildName:"myphp-3", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-3", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc422113200), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004def0)} with error: The build "myphp-3" status is "Failed"
Jul 9 19:31:22.400: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-ln5ms-user.kubeconfig --namespace=e2e-test-build-pruning-ln5ms myphp -o=name'
Jul 9 19:31:22.746: INFO: start-build output with args [myphp -o=name]:
Error>
StdOut>
build/myphp-4
StdErr>
Jul 9 19:31:22.746: INFO: Waiting for myphp-4 to complete
Jul 9 19:31:28.842: INFO: WaitForABuild returning with error: The build "myphp-4" status is "Failed"
Jul 9 19:31:28.842: INFO: Done waiting for myphp-4: util.BuildResult{BuildPath:"build/myphp-4", BuildName:"myphp-4", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-4", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc420b2ec00), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004def0)} with error: The build "myphp-4" status is "Failed"
STEP: waiting up to one minute for pruning to complete
2 builds exist, retrying...
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:56
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:31:28.990: INFO: namespace : e2e-test-build-pruning-ln5ms api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:31:35.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:35.295 seconds]
[Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:21
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:35
  should prune failed builds based on the failedBuildsHistoryLimit setting [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:108
------------------------------
S
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:30:52.910: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:30:54.530: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-container-probe-q8mkr
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:31:13.397: INFO: Container started at 2018-07-09 19:30:56 -0700 PDT, pod became ready at 2018-07-09 19:31:13 -0700 PDT
[AfterEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:31:13.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-q8mkr" for this suite.
Jul 9 19:31:35.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:31:39.068: INFO: namespace: e2e-tests-container-probe-q8mkr, resource: bindings, ignored listing per whitelist
Jul 9 19:31:39.388: INFO: namespace e2e-tests-container-probe-q8mkr deletion completed in 25.946753096s
• [SLOW TEST:46.478 seconds]
[k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  with readiness probe should not be ready before initial delay and never restart [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:31:30.811: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:31:32.841: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-h25h8
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with mappings as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name projected-configmap-test-volume-map-5c80995f-83e9-11e8-881a-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:31:33.868: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5c86d89f-83e9-11e8-881a-28d244b00276" in namespace "e2e-tests-projected-h25h8" to be "success or failure"
Jul 9 19:31:33.911: INFO: Pod "pod-projected-configmaps-5c86d89f-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 42.401982ms
Jul 9 19:31:35.951: INFO: Pod "pod-projected-configmaps-5c86d89f-83e9-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.082536408s
STEP: Saw pod success
Jul 9 19:31:35.951: INFO: Pod "pod-projected-configmaps-5c86d89f-83e9-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:31:35.994: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-5c86d89f-83e9-11e8-881a-28d244b00276 container projected-configmap-volume-test:
STEP: delete the pod
Jul 9 19:31:36.091: INFO: Waiting for pod pod-projected-configmaps-5c86d89f-83e9-11e8-881a-28d244b00276 to disappear
Jul 9 19:31:36.131: INFO: Pod pod-projected-configmaps-5c86d89f-83e9-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:31:36.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h25h8" for this suite.
Jul 9 19:31:42.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:31:44.960: INFO: namespace: e2e-tests-projected-h25h8, resource: bindings, ignored listing per whitelist
Jul 9 19:31:47.250: INFO: namespace e2e-tests-projected-h25h8 deletion completed in 11.069360411s
• [SLOW TEST:16.440 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume with mappings as non-root [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:31:34.655: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:31:36.629: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-rwcdg
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-map-5eadad9f-83e9-11e8-992b-28d244b00276
STEP: Creating a pod to test consume secrets
Jul 9 19:31:37.480: INFO: Waiting up to 5m0s for pod "pod-secrets-5eb3b2c9-83e9-11e8-992b-28d244b00276" in namespace "e2e-tests-secrets-rwcdg" to be "success or failure"
Jul 9 19:31:37.521: INFO: Pod "pod-secrets-5eb3b2c9-83e9-11e8-992b-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 41.219638ms
Jul 9 19:31:39.570: INFO: Pod "pod-secrets-5eb3b2c9-83e9-11e8-992b-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.089801709s
STEP: Saw pod success
Jul 9 19:31:39.570: INFO: Pod "pod-secrets-5eb3b2c9-83e9-11e8-992b-28d244b00276" satisfied condition "success or failure"
Jul 9 19:31:39.608: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-secrets-5eb3b2c9-83e9-11e8-992b-28d244b00276 container secret-volume-test:
STEP: delete the pod
Jul 9 19:31:39.724: INFO: Waiting for pod pod-secrets-5eb3b2c9-83e9-11e8-992b-28d244b00276 to disappear
Jul 9 19:31:39.769: INFO: Pod pod-secrets-5eb3b2c9-83e9-11e8-992b-28d244b00276 no longer exists
[AfterEach] [sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:31:39.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rwcdg" for this suite.
Jul 9 19:31:45.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:31:48.477: INFO: namespace: e2e-tests-secrets-rwcdg, resource: bindings, ignored listing per whitelist
Jul 9 19:31:50.349: INFO: namespace e2e-tests-secrets-rwcdg deletion completed in 10.538238075s
• [SLOW TEST:15.694 seconds]
[sig-storage] Secrets
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] Projected should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:31:39.389: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:31:41.059: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-jdb85
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:31:41.926: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61591225-83e9-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-jdb85" to be "success or failure"
Jul 9 19:31:41.957: INFO: Pod "downwardapi-volume-61591225-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 31.183619ms
Jul 9 19:31:43.991: INFO: Pod "downwardapi-volume-61591225-83e9-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064877135s
STEP: Saw pod success
Jul 9 19:31:43.991: INFO: Pod "downwardapi-volume-61591225-83e9-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:31:44.026: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-61591225-83e9-11e8-8fe2-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:31:44.125: INFO: Waiting for pod downwardapi-volume-61591225-83e9-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:31:44.164: INFO: Pod downwardapi-volume-61591225-83e9-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:31:44.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jdb85" for this suite.
Jul 9 19:31:50.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:31:52.978: INFO: namespace: e2e-tests-projected-jdb85, resource: bindings, ignored listing per whitelist
Jul 9 19:31:54.230: INFO: namespace e2e-tests-projected-jdb85 deletion completed in 10.026091698s
• [SLOW TEST:14.841 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Downward API volume should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:86
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:31:47.252: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:31:49.404: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-47qnf
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:86
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:31:50.308: INFO: Waiting up to 5m0s for pod "metadata-volume-665986bd-83e9-11e8-881a-28d244b00276" in namespace "e2e-tests-downward-api-47qnf" to be "success or failure"
Jul 9 19:31:50.347: INFO: Pod "metadata-volume-665986bd-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 39.310804ms
Jul 9 19:31:52.387: INFO: Pod "metadata-volume-665986bd-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079399908s
Jul 9 19:31:54.442: INFO: Pod "metadata-volume-665986bd-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134179942s
Jul 9 19:31:56.643: INFO: Pod "metadata-volume-665986bd-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335405774s
Jul 9 19:31:58.690: INFO: Pod "metadata-volume-665986bd-83e9-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.382341606s
STEP: Saw pod success
Jul 9 19:31:58.690: INFO: Pod "metadata-volume-665986bd-83e9-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:31:58.744: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod metadata-volume-665986bd-83e9-11e8-881a-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:31:59.096: INFO: Waiting for pod metadata-volume-665986bd-83e9-11e8-881a-28d244b00276 to disappear
Jul 9 19:31:59.145: INFO: Pod metadata-volume-665986bd-83e9-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:31:59.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-47qnf" for this suite.
Jul 9 19:32:05.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:32:09.552: INFO: namespace: e2e-tests-downward-api-47qnf, resource: bindings, ignored listing per whitelist
Jul 9 19:32:10.607: INFO: namespace e2e-tests-downward-api-47qnf deletion completed in 11.39858442s
• [SLOW TEST:23.356 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
  should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:86
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:31:54.231: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:31:56.310: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-stl2s
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-test-volume-6aaca3b3-83e9-11e8-8fe2-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:31:57.597: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ab190e2-83e9-11e8-8fe2-28d244b00276" in namespace "e2e-tests-configmap-stl2s" to be "success or failure"
Jul 9 19:31:57.631: INFO: Pod "pod-configmaps-6ab190e2-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.369817ms
Jul 9 19:31:59.685: INFO: Pod "pod-configmaps-6ab190e2-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087510835s
Jul 9 19:32:01.720: INFO: Pod "pod-configmaps-6ab190e2-83e9-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122757329s
STEP: Saw pod success
Jul 9 19:32:01.720: INFO: Pod "pod-configmaps-6ab190e2-83e9-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:32:01.772: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-6ab190e2-83e9-11e8-8fe2-28d244b00276 container configmap-volume-test:
STEP: delete the pod
Jul 9 19:32:02.188: INFO: Waiting for pod pod-configmaps-6ab190e2-83e9-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:32:02.225: INFO: Pod pod-configmaps-6ab190e2-83e9-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:02.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-stl2s" for this suite.
Jul 9 19:32:08.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:32:12.505: INFO: namespace: e2e-tests-configmap-stl2s, resource: bindings, ignored listing per whitelist
Jul 9 19:32:12.602: INFO: namespace e2e-tests-configmap-stl2s deletion completed in 10.304435061s
• [SLOW TEST:18.371 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:32:10.610: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:32:12.897: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-tssrb
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 9 19:32:13.935: INFO: Waiting up to 5m0s for pod "pod-746dfcca-83e9-11e8-881a-28d244b00276" in namespace "e2e-tests-emptydir-tssrb" to be "success or failure"
Jul 9 19:32:13.976: INFO: Pod "pod-746dfcca-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 41.400191ms
Jul 9 19:32:16.023: INFO: Pod "pod-746dfcca-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088021952s
Jul 9 19:32:18.089: INFO: Pod "pod-746dfcca-83e9-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153936214s
STEP: Saw pod success
Jul 9 19:32:18.089: INFO: Pod "pod-746dfcca-83e9-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:32:18.137: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-746dfcca-83e9-11e8-881a-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:32:18.272: INFO: Waiting for pod pod-746dfcca-83e9-11e8-881a-28d244b00276 to disappear
Jul 9 19:32:18.312: INFO: Pod pod-746dfcca-83e9-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:18.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tssrb" for this suite.
Jul 9 19:32:24.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:32:27.119: INFO: namespace: e2e-tests-emptydir-tssrb, resource: bindings, ignored listing per whitelist
Jul 9 19:32:29.345: INFO: namespace e2e-tests-emptydir-tssrb deletion completed in 10.978140513s
• [SLOW TEST:18.735 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Projected should provide container's memory request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:32:12.604: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:32:14.337: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-78mg7
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide container's memory request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:32:15.246: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75378baa-83e9-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-78mg7" to be "success or failure"
Jul 9 19:32:15.276: INFO: Pod "downwardapi-volume-75378baa-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.993707ms
Jul 9 19:32:17.318: INFO: Pod "downwardapi-volume-75378baa-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07266931s
Jul 9 19:32:19.365: INFO: Pod "downwardapi-volume-75378baa-83e9-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119038548s
STEP: Saw pod success
Jul 9 19:32:19.365: INFO: Pod "downwardapi-volume-75378baa-83e9-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:32:19.399: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-75378baa-83e9-11e8-8fe2-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:32:19.481: INFO: Waiting for pod downwardapi-volume-75378baa-83e9-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:32:19.518: INFO: Pod downwardapi-volume-75378baa-83e9-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:19.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-78mg7" for this suite.
Jul 9 19:32:25.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:32:29.200: INFO: namespace: e2e-tests-projected-78mg7, resource: bindings, ignored listing per whitelist
Jul 9 19:32:29.535: INFO: namespace e2e-tests-projected-78mg7 deletion completed in 9.965322463s
• [SLOW TEST:16.931 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide container's memory request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using one of the plugins 'redhat/openshift-ovs-subnet'
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:444
Jul 9 19:32:29.537: INFO: Not using one of the specified plugins
[AfterEach] when using one of the plugins 'redhat/openshift-ovs-subnet'
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
[AfterEach] when using one of the plugins 'redhat/openshift-ovs-subnet'
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:29.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Area:Networking] multicast
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:21
  when using one of the plugins 'redhat/openshift-ovs-subnet'
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:442
    should block multicast traffic [Suite:openshift/conformance/parallel] [BeforeEach]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:31

    Jul 9 19:32:29.537: Not using one of the specified plugins

    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
SS
------------------------------
[Feature:Builds][timing] capture build stages and durations should record build stages and durations for s2i [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:58
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][timing] capture build stages and durations
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:31:35.063: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][timing] capture build stages and durations
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:31:36.782: INFO: configPath is now "/tmp/e2e-test-build-timing-vrrgw-user.kubeconfig"
Jul 9 19:31:36.782: INFO: The user is now "e2e-test-build-timing-vrrgw-user"
Jul 9 19:31:36.782: INFO: Creating project "e2e-test-build-timing-vrrgw"
Jul 9 19:31:37.020: INFO: Waiting on permissions in project "e2e-test-build-timing-vrrgw" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:41
Jul 9 19:31:37.084: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:45
STEP: waiting for builder service account
[It] should record build stages and durations for s2i [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:58
STEP: creating test image stream
Jul 9 19:31:37.221: INFO: Running 'oc create --config=/tmp/e2e-test-build-timing-vrrgw-user.kubeconfig --namespace=e2e-test-build-timing-vrrgw -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-timing/test-is.json'
imagestream.image.openshift.io "test" created
STEP: creating test build config
Jul 9 19:31:37.580: INFO: Running 'oc create --config=/tmp/e2e-test-build-timing-vrrgw-user.kubeconfig --namespace=e2e-test-build-timing-vrrgw -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-timing/test-s2i-build.json'
buildconfig.build.openshift.io "test" created
STEP: starting the test source build
Jul 9 19:31:37.872: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-timing-vrrgw-user.kubeconfig --namespace=e2e-test-build-timing-vrrgw test --from-dir /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-timing/s2i-binary-dir -o=name'
Jul 9 19:31:40.365: INFO: start-build output with args [test --from-dir /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-timing/s2i-binary-dir -o=name]:
Error>
StdOut> build/test-1
StdErr> Uploading directory "/tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-timing/s2i-binary-dir" as binary input for the build ...
Jul 9 19:31:40.366: INFO: Waiting for test-1 to complete
Jul 9 19:32:31.543: INFO: Done waiting for test-1: util.BuildResult{BuildPath:"build/test-1", BuildName:"test-1", StartBuildStdErr:"Uploading directory \"/tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-timing/s2i-binary-dir\" as binary input for the build ...", StartBuildStdOut:"build/test-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc420986000), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc421048000)} with error:
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:51
[AfterEach] [Feature:Builds][timing] capture build stages and durations
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:32:31.639: INFO: namespace : e2e-test-build-timing-vrrgw api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][timing] capture build stages and durations
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:37.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:62.644 seconds]
[Feature:Builds][timing] capture build stages and durations
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:29
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:40
    should record build stages and durations for s2i [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:58
------------------------------
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:32:37.708: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:37.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:37.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Area:Networking] services
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10
  when using a plugin that isolates namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
    should allow connections from pods in the default namespace to a service in another namespace on the same node [Suite:openshift/conformance/parallel] [BeforeEach]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:56

    Jul 9 19:32:37.708: This plugin does not isolate namespaces by default.

    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
SS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:28:28.791: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:28:30.459: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-container-probe-vgxrz
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-vgxrz
Jul 9 19:28:35.230: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-vgxrz
STEP: checking the pod's current state and verifying that restartCount is present
Jul 9 19:28:35.273: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:36.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vgxrz" for this suite.
Jul 9 19:32:42.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:32:45.072: INFO: namespace: e2e-tests-container-probe-vgxrz, resource: bindings, ignored listing per whitelist
Jul 9 19:32:45.576: INFO: namespace e2e-tests-container-probe-vgxrz deletion completed in 9.432725927s
• [SLOW TEST:256.786 seconds]
[k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[Feature:Builds][pruning] prune builds based on settings in the buildconfig [Conformance] buildconfigs should not have a default history limit set when created via the legacy api [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:310
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:32:37.711: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:32:39.318: INFO: configPath is now "/tmp/e2e-test-build-pruning-fslmd-user.kubeconfig"
Jul 9 19:32:39.318: INFO: The user is now "e2e-test-build-pruning-fslmd-user"
Jul 9 19:32:39.318: INFO: Creating project "e2e-test-build-pruning-fslmd"
Jul 9 19:32:39.597: INFO: Waiting on permissions in project "e2e-test-build-pruning-fslmd" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:37
Jul 9 19:32:39.655: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:41
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:32:39.798: INFO: Running scan #0
Jul 9 19:32:39.798: INFO: Checking language ruby
Jul 9 19:32:39.844: INFO: Checking tag 2.0
Jul 9 19:32:39.844: INFO: Checking tag 2.2
Jul 9 19:32:39.844: INFO: Checking tag 2.3
Jul 9 19:32:39.844: INFO: Checking tag 2.4
Jul 9 19:32:39.844: INFO: Checking tag 2.5
Jul 9 19:32:39.844: INFO: Checking tag latest
Jul 9 19:32:39.844: INFO: Checking language nodejs
Jul 9 19:32:39.888: INFO: Checking tag 0.10
Jul 9 19:32:39.888: INFO: Checking tag 4
Jul 9 19:32:39.888: INFO: Checking tag 6
Jul 9 19:32:39.888: INFO: Checking tag 8
Jul 9 19:32:39.888: INFO: Checking tag latest
Jul 9 19:32:39.888: INFO: Checking language perl
Jul 9 19:32:39.938: INFO: Checking tag 5.24
Jul 9 19:32:39.938: INFO: Checking tag latest
Jul 9 19:32:39.938: INFO: Checking tag 5.16
Jul 9 19:32:39.938: INFO: Checking tag 5.20
Jul 9 19:32:39.938: INFO: Checking language php
Jul 9 19:32:39.987: INFO: Checking tag 7.0
Jul 9 19:32:39.987: INFO: Checking tag 7.1
Jul 9 19:32:39.987: INFO: Checking tag latest
Jul 9 19:32:39.987: INFO: Checking tag 5.5
Jul 9 19:32:39.987: INFO: Checking tag 5.6
Jul 9 19:32:39.987: INFO: Checking language python
Jul 9 19:32:40.049: INFO: Checking tag 3.3
Jul 9 19:32:40.049: INFO: Checking tag 3.4
Jul 9 19:32:40.049: INFO: Checking tag 3.5
Jul 9 19:32:40.049: INFO: Checking tag 3.6
Jul 9 19:32:40.049: INFO: Checking tag latest
Jul 9 19:32:40.049: INFO: Checking tag 2.7
Jul 9 19:32:40.049: INFO: Checking language wildfly
Jul 9 19:32:40.093: INFO: Checking tag 10.0
Jul 9 19:32:40.093: INFO: Checking tag 10.1
Jul 9 19:32:40.093: INFO: Checking tag 11.0
Jul 9 19:32:40.093: INFO: Checking tag 12.0
Jul 9 19:32:40.093: INFO: Checking tag 8.1
Jul 9 19:32:40.093: INFO: Checking tag 9.0
Jul 9 19:32:40.093: INFO: Checking tag latest
Jul 9 19:32:40.093: INFO: Checking language mysql
Jul 9 19:32:40.133: INFO: Checking tag latest
Jul 9 19:32:40.133: INFO: Checking tag 5.5
Jul 9 19:32:40.133: INFO: Checking tag 5.6
Jul 9 19:32:40.134: INFO: Checking tag 5.7
Jul 9 19:32:40.134: INFO: Checking language postgresql
Jul 9 19:32:40.174: INFO: Checking tag 9.2
Jul 9 19:32:40.174: INFO: Checking tag 9.4
Jul 9 19:32:40.174: INFO: Checking tag 9.5
Jul 9 19:32:40.174: INFO: Checking tag 9.6
Jul 9 19:32:40.174: INFO: Checking tag latest
Jul 9 19:32:40.174: INFO: Checking language mongodb
Jul 9 19:32:40.213: INFO: Checking tag 2.6
Jul 9 19:32:40.213: INFO: Checking tag 3.2
Jul 9 19:32:40.213: INFO: Checking tag 3.4
Jul 9 19:32:40.213: INFO: Checking tag latest
Jul 9 19:32:40.213: INFO: Checking tag 2.4
Jul 9 19:32:40.213: INFO: Checking language jenkins
Jul 9 19:32:40.253: INFO: Checking tag 1
Jul 9 19:32:40.253: INFO: Checking tag 2
Jul 9 19:32:40.253: INFO: Checking tag latest
Jul 9 19:32:40.253: INFO: Success!
STEP: creating test image stream
Jul 9 19:32:40.253: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-fslmd-user.kubeconfig --namespace=e2e-test-build-pruning-fslmd -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-pruning/imagestream.yaml'
imagestream.image.openshift.io "myphp" created
[It] [Conformance] buildconfigs should not have a default history limit set when created via the legacy api [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:310
STEP: creating a build config with the legacy api
Jul 9 19:32:40.685: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-fslmd-user.kubeconfig --namespace=e2e-test-build-pruning-fslmd -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/build-pruning/default-legacy-build-config.yaml --raw=/oapi/v1/namespaces/e2e-test-build-pruning-fslmd/buildconfigs'
{"kind":"BuildConfig","apiVersion":"v1","metadata":{"name":"myphp","namespace":"e2e-test-build-pruning-fslmd","selfLink":"/oapi/v1/namespaces/e2e-test-build-pruning-fslmd/buildconfigs/myphp","uid":"8487d2d0-83e9-11e8-aa51-0af96768d57e","resourceVersion":"89643","creationTimestamp":"2018-07-10T02:32:40Z"},"spec":{"triggers":[],"runPolicy":"Serial","source":{"type":"Git","git":{"uri":"https://github.com/openshift/cakephp-ex.git"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"ImageStreamTag","namespace":"openshift","name":"php:7.0"}}},"output":{},"resources":{},"postCommit":{},"nodeSelector":null},"status":{"lastVersion":0}}
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:56
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:32:41.073: INFO: namespace : e2e-test-build-pruning-fslmd api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:47.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:9.446 seconds]
[Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:21
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:35
[Conformance] buildconfigs should not have a default history limit set when created via the legacy api [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:310
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:32:45.580: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:32:47.594: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-4wtg7
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 9 19:32:48.309: INFO: Waiting up to 5m0s for pod "pod-88eda0c9-83e9-11e8-8401-28d244b00276" in namespace "e2e-tests-emptydir-4wtg7" to be "success or failure"
Jul 9 19:32:48.341: INFO: Pod "pod-88eda0c9-83e9-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.587694ms
Jul 9 19:32:50.373: INFO: Pod "pod-88eda0c9-83e9-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064794189s
STEP: Saw pod success
Jul 9 19:32:50.373: INFO: Pod "pod-88eda0c9-83e9-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:32:50.406: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-88eda0c9-83e9-11e8-8401-28d244b00276 container test-container:
STEP: delete the pod
Jul 9 19:32:50.483: INFO: Waiting for pod pod-88eda0c9-83e9-11e8-8401-28d244b00276 to disappear
Jul 9 19:32:50.511: INFO: Pod pod-88eda0c9-83e9-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:50.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4wtg7" for this suite.
Jul 9 19:32:56.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:32:58.757: INFO: namespace: e2e-tests-emptydir-4wtg7, resource: bindings, ignored listing per whitelist
Jul 9 19:33:00.118: INFO: namespace e2e-tests-emptydir-4wtg7 deletion completed in 9.564298608s
• [SLOW TEST:14.538 seconds]
[sig-storage] EmptyDir volumes
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,default) [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:100
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:32:47.160: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:32:48.910: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-6qpmn
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:100
STEP: Creating configMap with name configmap-test-volume-map-89afb49a-83e9-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:32:49.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-89b44d7f-83e9-11e8-bd2e-28d244b00276" in namespace "e2e-tests-configmap-6qpmn" to be "success or failure"
Jul 9 19:32:49.666: INFO: Pod "pod-configmaps-89b44d7f-83e9-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 55.757669ms
Jul 9 19:32:51.695: INFO: Pod "pod-configmaps-89b44d7f-83e9-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085624038s
STEP: Saw pod success
Jul 9 19:32:51.695: INFO: Pod "pod-configmaps-89b44d7f-83e9-11e8-bd2e-28d244b00276" satisfied condition "success or failure"
Jul 9 19:32:51.724: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-89b44d7f-83e9-11e8-bd2e-28d244b00276 container configmap-volume-test:
STEP: delete the pod
Jul 9 19:32:51.790: INFO: Waiting for pod pod-configmaps-89b44d7f-83e9-11e8-bd2e-28d244b00276 to disappear
Jul 9 19:32:51.819: INFO: Pod pod-configmaps-89b44d7f-83e9-11e8-bd2e-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:51.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6qpmn" for this suite.
Jul 9 19:32:57.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:33:00.290: INFO: namespace: e2e-tests-configmap-6qpmn, resource: bindings, ignored listing per whitelist
Jul 9 19:33:01.338: INFO: namespace e2e-tests-configmap-6qpmn deletion completed in 9.461898274s
• [SLOW TEST:14.178 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:100
------------------------------
[Feature:Builds][pruning] prune builds based on settings in the buildconfig should prune errored builds based on the failedBuildsHistoryLimit setting [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:198
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:32:29.540: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:32:31.323: INFO: configPath is now "/tmp/e2e-test-build-pruning-p64nl-user.kubeconfig"
Jul 9 19:32:31.323: INFO: The user is now "e2e-test-build-pruning-p64nl-user"
Jul 9 19:32:31.323: INFO: Creating project "e2e-test-build-pruning-p64nl"
Jul 9 19:32:31.480: INFO: Waiting on permissions in project "e2e-test-build-pruning-p64nl" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:37
Jul 9 19:32:31.538: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:41
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:32:31.753: INFO: Running scan #0
Jul 9 19:32:31.753: INFO: Checking language ruby
Jul 9 19:32:31.819: INFO: Checking tag 2.5
Jul 9 19:32:31.819: INFO: Checking tag latest
Jul 9 19:32:31.819: INFO: Checking tag 2.0
Jul 9 19:32:31.819: INFO: Checking tag 2.2
Jul 9 19:32:31.819: INFO: Checking tag 2.3
Jul 9 19:32:31.819: INFO: Checking tag 2.4
Jul 9 19:32:31.819: INFO: Checking language nodejs
Jul 9 19:32:31.872: INFO: Checking tag 6
Jul 9 19:32:31.872: INFO: Checking tag 8
Jul 9 19:32:31.872: INFO: Checking tag latest
Jul 9 19:32:31.872: INFO: Checking tag 0.10
Jul 9 19:32:31.872: INFO: Checking tag 4
Jul 9 19:32:31.872: INFO: Checking language perl
Jul 9 19:32:31.923: INFO: Checking tag 5.16
Jul 9 19:32:31.923: INFO: Checking tag 5.20
Jul 9 19:32:31.923: INFO: Checking tag 5.24
Jul 9 19:32:31.923: INFO: Checking tag latest
Jul 9 19:32:31.923: INFO: Checking language php
Jul 9 19:32:31.965: INFO: Checking tag 5.5
Jul 9 19:32:31.965: INFO: Checking tag 5.6
Jul 9 19:32:31.965: INFO: Checking tag 7.0
Jul 9 19:32:31.965: INFO: Checking tag 7.1
Jul 9 19:32:31.965: INFO: Checking tag latest
Jul 9 19:32:31.965: INFO: Checking language python
Jul 9 19:32:32.027: INFO: Checking tag 2.7
Jul 9 19:32:32.028: INFO: Checking tag 3.3
Jul 9 19:32:32.028: INFO: Checking tag 3.4
Jul 9 19:32:32.028: INFO: Checking tag 3.5
Jul 9 19:32:32.028: INFO: Checking tag 3.6
Jul 9 19:32:32.028: INFO: Checking tag latest
Jul 9 19:32:32.028: INFO: Checking language wildfly
Jul 9 19:32:32.117: INFO: Checking tag 8.1
Jul 9 19:32:32.117: INFO: Checking tag 9.0
Jul 9 19:32:32.117: INFO: Checking tag latest
Jul 9 19:32:32.117: INFO: Checking tag 10.0
Jul 9 19:32:32.117: INFO: Checking tag 10.1
Jul 9 19:32:32.117: INFO: Checking tag 11.0
Jul 9 19:32:32.117: INFO: Checking tag 12.0
Jul 9 19:32:32.117: INFO: Checking language mysql
Jul 9 19:32:32.151: INFO: Checking tag latest
Jul 9 19:32:32.151: INFO: Checking tag 5.5
Jul 9 19:32:32.151: INFO: Checking tag 5.6
Jul 9 19:32:32.151: INFO: Checking tag 5.7
Jul 9 19:32:32.151: INFO: Checking language postgresql
Jul 9 19:32:32.194: INFO: Checking tag 9.6
Jul 9 19:32:32.194: INFO: Checking tag latest
Jul 9 19:32:32.194: INFO: Checking tag 9.2
Jul 9 19:32:32.194: INFO: Checking tag 9.4
Jul 9 19:32:32.194: INFO: Checking tag 9.5
Jul 9 19:32:32.194: INFO: Checking language mongodb
Jul 9 19:32:32.256: INFO: Checking tag 2.4
Jul 9 19:32:32.256: INFO: Checking tag 2.6
Jul 9 19:32:32.256: INFO: Checking tag 3.2
Jul 9 19:32:32.256: INFO: Checking tag 3.4
Jul 9 19:32:32.256: INFO: Checking tag latest
Jul 9 19:32:32.256: INFO: Checking language jenkins
Jul 9 19:32:32.312: INFO: Checking tag 1
Jul 9 19:32:32.312: INFO: Checking tag 2
Jul 9 19:32:32.312: INFO: Checking tag latest
Jul 9 19:32:32.312: INFO: Success!
STEP: creating test image stream
Jul 9 19:32:32.312: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-p64nl-user.kubeconfig --namespace=e2e-test-build-pruning-p64nl -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/build-pruning/imagestream.yaml'
imagestream.image.openshift.io "myphp" created
[It] should prune errored builds based on the failedBuildsHistoryLimit setting [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:198
STEP: creating test failed build config
Jul 9 19:32:32.577: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-p64nl-user.kubeconfig --namespace=e2e-test-build-pruning-p64nl -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/builds/build-pruning/errored-build-config.yaml'
buildconfig.build.openshift.io "myphp" created
STEP: starting four test builds
Jul 9 19:32:32.913: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-p64nl-user.kubeconfig --namespace=e2e-test-build-pruning-p64nl myphp -o=name'
Jul 9 19:32:33.360: INFO: start-build output with args [myphp -o=name]: Error> StdOut> build/myphp-1 StdErr>
Jul 9 19:32:33.361: INFO: Waiting for myphp-1 to complete
Jul 9 19:32:39.434: INFO: WaitForABuild returning with error: The build "myphp-1" status is "Error"
Jul 9 19:32:39.434: INFO: Done waiting for myphp-1: util.BuildResult{BuildPath:"build/myphp-1", BuildName:"myphp-1", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc4207fb800), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e3e00)} with error: The build "myphp-1" status is "Error"
Jul 9 19:32:39.435: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-p64nl-user.kubeconfig --namespace=e2e-test-build-pruning-p64nl myphp -o=name'
Jul 9 19:32:39.742: INFO: start-build output with args [myphp -o=name]: Error> StdOut> build/myphp-2 StdErr>
Jul 9 19:32:39.743: INFO: Waiting for myphp-2 to complete
Jul 9 19:32:45.847: INFO: WaitForABuild returning with error: The build "myphp-2" status is "Error"
Jul 9 19:32:45.847: INFO: Done waiting for myphp-2: util.BuildResult{BuildPath:"build/myphp-2", BuildName:"myphp-2", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-2", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421de3200), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e3e00)} with error: The build "myphp-2" status is "Error"
Jul 9 19:32:45.847: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-p64nl-user.kubeconfig --namespace=e2e-test-build-pruning-p64nl myphp -o=name'
Jul 9 19:32:46.186: INFO: start-build output with args [myphp -o=name]: Error> StdOut> build/myphp-3 StdErr>
Jul 9 19:32:46.186: INFO: Waiting for myphp-3 to complete
Jul 9 19:32:52.271: INFO: WaitForABuild returning with error: The build "myphp-3" status is "Error"
Jul 9 19:32:52.271: INFO: Done waiting for myphp-3: util.BuildResult{BuildPath:"build/myphp-3", BuildName:"myphp-3", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-3", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc422378300), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e3e00)} with error: The build "myphp-3" status is "Error"
Jul 9 19:32:52.271: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-p64nl-user.kubeconfig --namespace=e2e-test-build-pruning-p64nl myphp -o=name'
Jul 9 19:32:52.571: INFO: start-build output with args [myphp -o=name]: Error> StdOut> build/myphp-4 StdErr>
Jul 9 19:32:52.572: INFO: Waiting for myphp-4 to complete
Jul 9 19:32:58.654: INFO: WaitForABuild returning with error: The build "myphp-4" status is "Error"
Jul 9 19:32:58.654: INFO: Done waiting for myphp-4: util.BuildResult{BuildPath:"build/myphp-4", BuildName:"myphp-4", StartBuildStdErr:"", StartBuildStdOut:"build/myphp-4", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421de2000), BuildAttempt:true, BuildSuccess:false, BuildFailure:true, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e3e00)} with error: The build "myphp-4" status is "Error"
STEP: waiting up to one minute for pruning to complete
2 builds exist, retrying...
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:56
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:32:58.787: INFO: namespace : e2e-test-build-pruning-p64nl api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:33:04.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:35.325 seconds]
[Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:21
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:35
should prune errored builds based on the failedBuildsHistoryLimit setting [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:198
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:32:29.349: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:32:31.651: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-container-probe-q8ngq
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-q8ngq
Jul 9 19:32:34.831: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-q8ngq
STEP: checking the pod's current state and verifying that restartCount is present
Jul 9 19:32:34.886: INFO: Initial restart count of pod liveness-http is 0
Jul 9 19:32:55.407: INFO: Restart count of pod e2e-tests-container-probe-q8ngq/liveness-http is now 1 (20.521438793s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:32:55.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-q8ngq" for this suite.
Jul 9 19:33:01.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:33:05.320: INFO: namespace: e2e-tests-container-probe-q8ngq, resource: bindings, ignored listing per whitelist Jul 9 19:33:06.664: INFO: namespace e2e-tests-container-probe-q8ngq deletion completed in 11.161050308s • [SLOW TEST:37.315 seconds] [k8s.io] Probing container /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669 should be restarted with a /healthz http liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [sig-storage] Projected should set mode on item file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:33:06.666: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:33:08.849: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-8dp2g STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858 [It] should set mode on item file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test downward API volume plugin Jul 9 19:33:09.715: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95ad37f1-83e9-11e8-881a-28d244b00276" in namespace "e2e-tests-projected-8dp2g" to be "success or failure" Jul 9 19:33:09.756: INFO: Pod "downwardapi-volume-95ad37f1-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 41.45551ms Jul 9 19:33:11.806: INFO: Pod "downwardapi-volume-95ad37f1-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09150203s Jul 9 19:33:13.851: INFO: Pod "downwardapi-volume-95ad37f1-83e9-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.135812359s STEP: Saw pod success Jul 9 19:33:13.851: INFO: Pod "downwardapi-volume-95ad37f1-83e9-11e8-881a-28d244b00276" satisfied condition "success or failure" Jul 9 19:33:13.893: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-95ad37f1-83e9-11e8-881a-28d244b00276 container client-container: STEP: delete the pod Jul 9 19:33:13.993: INFO: Waiting for pod downwardapi-volume-95ad37f1-83e9-11e8-881a-28d244b00276 to disappear Jul 9 19:33:14.035: INFO: Pod downwardapi-volume-95ad37f1-83e9-11e8-881a-28d244b00276 no longer exists [AfterEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:33:14.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8dp2g" for this suite. Jul 9 19:33:20.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:33:23.514: INFO: namespace: e2e-tests-projected-8dp2g, resource: bindings, ignored listing per whitelist Jul 9 19:33:25.402: INFO: namespace e2e-tests-projected-8dp2g deletion completed in 11.317885897s • [SLOW TEST:18.737 seconds] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34 should set mode on item file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ SS ------------------------------ [sig-storage] Projected should update labels on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] 
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:33:00.122: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:33:01.803: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-tsmxn
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should update labels on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating the pod
Jul 9 19:33:05.400: INFO: Successfully updated pod "labelsupdate916f79fd-83e9-11e8-8401-28d244b00276"
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:33:07.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tsmxn" for this suite.
Jul 9 19:33:29.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:33:32.811: INFO: namespace: e2e-tests-projected-tsmxn, resource: bindings, ignored listing per whitelist
Jul 9 19:33:33.085: INFO: namespace e2e-tests-projected-tsmxn deletion completed in 25.578463872s
• [SLOW TEST:32.963 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should update labels on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[Area:Networking] services when using a plugin that does not isolate namespaces by default should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:27
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:407
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:33:01.340: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-services1-cjnkf
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:33:03.382: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-services2-6znx6
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:27
Jul 9 19:33:05.514: INFO: Using ip-10-0-130-54.us-west-2.compute.internal for test ([ip-10-0-130-54.us-west-2.compute.internal] out of [ip-10-0-130-54.us-west-2.compute.internal])
Jul 9 19:33:07.643: INFO: Target pod IP:port is 10.2.2.24:8080
Jul 9 19:33:07.798: INFO: Target service IP:port is 10.3.33.4:8080
Jul 9 19:33:07.798: INFO: Creating an exec pod on node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:33:07.798: INFO: Creating new exec pod
Jul 9 19:33:11.926: INFO: Waiting up to 10s to wget 10.3.33.4:8080
Jul 9 19:33:11.926: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-tests-net-services2-6znx6 execpod-sourceip-ip-10-0-130-54.us-west-2.compute.internalvhwsx -- /bin/sh -c wget -T 30 -qO- 10.3.33.4:8080'
Jul 9 19:33:12.557: INFO: stderr: ""
Jul 9 19:33:12.557: INFO: Cleaning up the exec pod
[AfterEach] when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:33:12.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-services1-cjnkf" for this suite.
Jul 9 19:33:24.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:33:28.146: INFO: namespace: e2e-tests-net-services1-cjnkf, resource: bindings, ignored listing per whitelist
Jul 9 19:33:28.411: INFO: namespace e2e-tests-net-services1-cjnkf deletion completed in 15.638854362s
[AfterEach] when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:33:28.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-services2-6znx6" for this suite.
Jul 9 19:33:34.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:33:37.805: INFO: namespace: e2e-tests-net-services2-6znx6, resource: bindings, ignored listing per whitelist
Jul 9 19:33:37.956: INFO: namespace e2e-tests-net-services2-6znx6 deletion completed in 9.50788237s
• [SLOW TEST:36.616 seconds]
[Area:Networking] services
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10
  when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:406
    should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:27
------------------------------
SS
------------------------------
[sig-storage] Projected should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:33:25.406: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:33:27.629: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-qcxjd
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name projected-configmap-test-volume-a0d92c3d-83e9-11e8-881a-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:33:28.506: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a0e0659b-83e9-11e8-881a-28d244b00276" in namespace "e2e-tests-projected-qcxjd" to be "success or failure"
Jul 9 19:33:28.548: INFO: Pod "pod-projected-configmaps-a0e0659b-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 41.721315ms
Jul 9 19:33:30.591: INFO: Pod "pod-projected-configmaps-a0e0659b-83e9-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.084922778s
STEP: Saw pod success
Jul 9 19:33:30.591: INFO: Pod "pod-projected-configmaps-a0e0659b-83e9-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:33:30.637: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-a0e0659b-83e9-11e8-881a-28d244b00276 container projected-configmap-volume-test:
STEP: delete the pod
Jul 9 19:33:30.739: INFO: Waiting for pod pod-projected-configmaps-a0e0659b-83e9-11e8-881a-28d244b00276 to disappear
Jul 9 19:33:30.778: INFO: Pod pod-projected-configmaps-a0e0659b-83e9-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:33:30.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qcxjd" for this suite.
Jul 9 19:33:36.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:33:40.308: INFO: namespace: e2e-tests-projected-qcxjd, resource: bindings, ignored listing per whitelist
Jul 9 19:33:42.001: INFO: namespace e2e-tests-projected-qcxjd deletion completed in 11.174486262s
• [SLOW TEST:16.595 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[Feature:Builds] buildconfig secret injector should inject secrets to the appropriate buildconfigs [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/buildconfigsecretinjector.go:36
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds] buildconfig secret injector
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:33:33.086: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds] buildconfig secret injector
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:33:34.936: INFO: configPath is now "/tmp/e2e-test-buildconfigsecretinjector-hwmqx-user.kubeconfig"
Jul 9 19:33:34.936: INFO: The user is now "e2e-test-buildconfigsecretinjector-hwmqx-user"
Jul 9 19:33:34.936: INFO: Creating project "e2e-test-buildconfigsecretinjector-hwmqx"
Jul 9 19:33:35.070: INFO: Waiting on permissions in project "e2e-test-buildconfigsecretinjector-hwmqx" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/buildconfigsecretinjector.go:19
Jul 9 19:33:35.131: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/buildconfigsecretinjector.go:23
STEP: creating buildconfigs
Jul 9 19:33:35.131: INFO: Running 'oc create --config=/tmp/e2e-test-buildconfigsecretinjector-hwmqx-user.kubeconfig --namespace=e2e-test-buildconfigsecretinjector-hwmqx -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/test-buildconfigsecretinjector.yaml'
secret "secret1" created
secret "secret2" created
secret "secret3" created
buildconfig.build.openshift.io "test1" created
buildconfig.build.openshift.io "test2" created
buildconfig.build.openshift.io "test3" created
buildconfig.build.openshift.io "test4" created
[It] should inject secrets to the appropriate buildconfigs [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/buildconfigsecretinjector.go:36
Jul 9 19:33:35.748: INFO: Running 'oc get --config=/tmp/e2e-test-buildconfigsecretinjector-hwmqx-user.kubeconfig --namespace=e2e-test-buildconfigsecretinjector-hwmqx bc/test1 -o template --template {{.spec.source.sourceSecret.name}}'
Jul 9 19:33:36.001: INFO: Running 'oc get --config=/tmp/e2e-test-buildconfigsecretinjector-hwmqx-user.kubeconfig --namespace=e2e-test-buildconfigsecretinjector-hwmqx bc/test2 -o template --template {{.spec.source.sourceSecret.name}}'
Jul 9 19:33:36.274: INFO: Running 'oc get --config=/tmp/e2e-test-buildconfigsecretinjector-hwmqx-user.kubeconfig --namespace=e2e-test-buildconfigsecretinjector-hwmqx bc/test3 -o template --template {{.spec.source.sourceSecret.name}}'
Jul 9 19:33:36.516: INFO: Running 'oc get --config=/tmp/e2e-test-buildconfigsecretinjector-hwmqx-user.kubeconfig --namespace=e2e-test-buildconfigsecretinjector-hwmqx bc/test4 -o template --template {{.spec.source.sourceSecret.name}}'
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/buildconfigsecretinjector.go:29
[AfterEach] [Feature:Builds] buildconfig secret injector
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:33:36.880: INFO: namespace : e2e-test-buildconfigsecretinjector-hwmqx api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds] buildconfig secret injector
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:33:42.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:9.875 seconds]
[Feature:Builds] buildconfig secret injector
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/buildconfigsecretinjector.go:10
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/buildconfigsecretinjector.go:18
    should inject secrets to the appropriate buildconfigs [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/buildconfigsecretinjector.go:36
------------------------------
SSS
------------------------------
[sig-storage] Projected should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:921
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:33:42.964: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:33:44.551: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-xrpjs
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:921
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:33:45.297: INFO: Waiting up to 5m0s for pod "metadata-volume-aae59204-83e9-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-xrpjs" to be "success or failure"
Jul 9 19:33:45.327: INFO: Pod "metadata-volume-aae59204-83e9-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.287377ms
Jul 9 19:33:47.359: INFO: Pod "metadata-volume-aae59204-83e9-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061794214s
STEP: Saw pod success
Jul 9 19:33:47.359: INFO: Pod "metadata-volume-aae59204-83e9-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:33:47.387: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod metadata-volume-aae59204-83e9-11e8-8401-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:33:47.460: INFO: Waiting for pod metadata-volume-aae59204-83e9-11e8-8401-28d244b00276 to disappear
Jul 9 19:33:47.487: INFO: Pod metadata-volume-aae59204-83e9-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:33:47.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xrpjs" for this suite.
Jul 9 19:33:53.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:33:55.484: INFO: namespace: e2e-tests-projected-xrpjs, resource: bindings, ignored listing per whitelist
Jul 9 19:33:57.700: INFO: namespace e2e-tests-projected-xrpjs deletion completed in 10.16435378s
• [SLOW TEST:14.736 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:921
------------------------------
[Conformance][Area:Networking][Feature:Router] The HAProxy router should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:33:37.961: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:33:39.599: INFO: configPath is now "/tmp/e2e-test-weighted-router-xl5p5-user.kubeconfig"
Jul 9 19:33:39.599: INFO: The user is now "e2e-test-weighted-router-xl5p5-user"
Jul 9 19:33:39.599: INFO: Creating project "e2e-test-weighted-router-xl5p5"
Jul 9 19:33:39.763: INFO: Waiting on permissions in project "e2e-test-weighted-router-xl5p5" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:29
Jul 9 19:33:39.801: INFO: Running 'oc new-app --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-weighted-router-xl5p5 -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/weighted-router.yaml -p IMAGE=openshift/origin-haproxy-router'
--> Deploying template "e2e-test-weighted-router-xl5p5/" for "/tmp/fixture-testdata-dir225659500/test/extended/testdata/weighted-router.yaml" to project e2e-test-weighted-router-xl5p5

     * With parameters:
        * IMAGE=openshift/origin-haproxy-router

--> Creating resources ...
    pod "weighted-router" created
    rolebinding "system-router" created
    route "weightedroute" created
    route "zeroweightroute" created
    service "weightedendpoints1" created
    service "weightedendpoints2" created
    pod "endpoint-1" created
    pod "endpoint-2" created
    pod "endpoint-3" created
--> Success
    Access your application via route 'weighted.example.com'
    Access your application via route 'zeroweight.example.com'
    Run 'oc status' to view your app.
[It] should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39
Jul 9 19:33:40.899: INFO: Creating new exec pod
STEP: creating a weighted router from a config file "/tmp/fixture-testdata-dir225659500/test/extended/testdata/weighted-router.yaml"
STEP: waiting for the healthz endpoint to respond
Jul 9 19:33:44.018: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-weighted-router-xl5p5 execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
  code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.2.2.30' "http://10.2.2.30:1936/healthz" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -eq 200 ]]; then
      exit 0
    fi
    if [[ $code -ne 503 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
  sleep 1
done
'
Jul 9 19:33:44.707: INFO: stderr: ""
STEP: checking that 100 requests go through successfully
Jul 9 19:33:44.707: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-weighted-router-xl5p5 execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
  code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: weighted.example.com' "http://10.2.2.30" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -eq 200 ]]; then
      exit 0
    fi
    if [[ $code -ne 503 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
  sleep 1
done
'
Jul 9 19:33:47.336: INFO: stderr: ""
Jul 9 19:33:47.336: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-weighted-router-xl5p5 execpod -- /bin/sh -c
set -e
for i in $(seq 1 100); do
  code=$( curl -s -o /dev/null -w '%{http_code}\n' --header 'Host: weighted.example.com' "http://10.2.2.30" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -ne 200 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
done
'
Jul 9 19:33:48.785: INFO: stderr: ""
STEP: checking that there are three weighted backends in the router stats
Jul 9 19:33:48.785: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-weighted-router-xl5p5 execpod -- /bin/sh -c curl -s -u admin:password --header 'Host: weighted.example.com' "http://10.2.2.30:1936/;csv"'
Jul 9 19:33:49.680: INFO: stderr: ""
STEP: checking that zero weights are also respected by the router
Jul 9 19:33:49.680: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-weighted-router-xl5p5 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: zeroweight.example.com' "http://10.2.2.30"'
Jul 9 19:33:50.287: INFO: stderr: ""
[AfterEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:33:50.394: INFO: namespace : e2e-test-weighted-router-xl5p5 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:34:04.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:26.537 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:22
  The HAProxy router
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:38
    should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:39
------------------------------
[sig-storage] Projected should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:34:04.499: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:34:06.136: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-fdpt4
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name projected-configmap-test-volume-map-b7b8d583-83e9-11e8-bd2e-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:34:06.860: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b7bf79b7-83e9-11e8-bd2e-28d244b00276" in namespace "e2e-tests-projected-fdpt4" to be "success or failure"
Jul 9 19:34:06.889: INFO: Pod "pod-projected-configmaps-b7bf79b7-83e9-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 28.602731ms
Jul 9 19:34:08.916: INFO: Pod "pod-projected-configmaps-b7bf79b7-83e9-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056299754s
Jul 9 19:34:10.945: INFO: Pod "pod-projected-configmaps-b7bf79b7-83e9-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084378035s
Jul 9 19:34:12.973: INFO: Pod "pod-projected-configmaps-b7bf79b7-83e9-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.112948482s STEP: Saw pod success Jul 9 19:34:12.973: INFO: Pod "pod-projected-configmaps-b7bf79b7-83e9-11e8-bd2e-28d244b00276" satisfied condition "success or failure" Jul 9 19:34:13.037: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-b7bf79b7-83e9-11e8-bd2e-28d244b00276 container projected-configmap-volume-test: STEP: delete the pod Jul 9 19:34:13.102: INFO: Waiting for pod pod-projected-configmaps-b7bf79b7-83e9-11e8-bd2e-28d244b00276 to disappear Jul 9 19:34:13.136: INFO: Pod pod-projected-configmaps-b7bf79b7-83e9-11e8-bd2e-28d244b00276 no longer exists [AfterEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:34:13.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fdpt4" for this suite. Jul 9 19:34:19.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:34:21.333: INFO: namespace: e2e-tests-projected-fdpt4, resource: bindings, ignored listing per whitelist Jul 9 19:34:22.799: INFO: namespace e2e-tests-projected-fdpt4 deletion completed in 9.626871542s • [SLOW TEST:18.300 seconds] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34 should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [sig-api-machinery] ConfigMap should be consumable via environment variable [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-api-machinery] ConfigMap /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:34:22.802: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:34:24.319: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-6hvcl STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating configMap e2e-tests-configmap-6hvcl/configmap-test-c2922dda-83e9-11e8-bd2e-28d244b00276 STEP: Creating a pod to test consume configMaps Jul 9 19:34:25.048: INFO: Waiting up to 5m0s for pod "pod-configmaps-c296f815-83e9-11e8-bd2e-28d244b00276" in namespace "e2e-tests-configmap-6hvcl" to be "success or failure" Jul 9 19:34:25.075: INFO: Pod "pod-configmaps-c296f815-83e9-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 27.757896ms Jul 9 19:34:27.107: INFO: Pod "pod-configmaps-c296f815-83e9-11e8-bd2e-28d244b00276": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.05966566s Jul 9 19:34:29.136: INFO: Pod "pod-configmaps-c296f815-83e9-11e8-bd2e-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088496756s STEP: Saw pod success Jul 9 19:34:29.136: INFO: Pod "pod-configmaps-c296f815-83e9-11e8-bd2e-28d244b00276" satisfied condition "success or failure" Jul 9 19:34:29.164: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-c296f815-83e9-11e8-bd2e-28d244b00276 container env-test: STEP: delete the pod Jul 9 19:34:29.240: INFO: Waiting for pod pod-configmaps-c296f815-83e9-11e8-bd2e-28d244b00276 to disappear Jul 9 19:34:29.275: INFO: Pod pod-configmaps-c296f815-83e9-11e8-bd2e-28d244b00276 no longer exists [AfterEach] [sig-api-machinery] ConfigMap /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:34:29.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6hvcl" for this suite. 
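The ConfigMap-to-environment-variable check above can be reproduced by hand. The sketch below assumes a reachable cluster and uses illustrative names (`test-cm`, `env-test`) rather than the generated ones from the log:

```shell
# configmap_env_demo: create a ConfigMap, run a pod that echoes the
# injected variable, then read the pod log -- mirroring the test flow.
configmap_env_demo() {
    kubectl create configmap test-cm --from-literal=DATA=hello
    kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-test
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo value=$CONFIG_DATA"]
    env:
    - name: CONFIG_DATA
      valueFrom:
        configMapKeyRef:
          name: test-cm
          key: DATA
EOF
    # once the pod reaches Succeeded, its log should read: value=hello
    kubectl logs env-test
}
```

The test framework does the same thing programmatically, then polls the pod phase until "success or failure" as seen in the records above.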
Jul 9 19:34:35.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:34:36.925: INFO: namespace: e2e-tests-configmap-6hvcl, resource: bindings, ignored listing per whitelist Jul 9 19:34:38.781: INFO: namespace e2e-tests-configmap-6hvcl deletion completed in 9.46636901s • [SLOW TEST:15.979 seconds] [sig-api-machinery] ConfigMap /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:29 should be consumable via environment variable [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [sig-storage] Projected optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:33:04.869: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:33:06.511: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-xmnxs STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] 
Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858 [It] optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 Jul 9 19:33:07.193: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node STEP: Creating configMap with name cm-test-opt-del-94352d07-83e9-11e8-8fe2-28d244b00276 STEP: Creating configMap with name cm-test-opt-upd-94352d35-83e9-11e8-8fe2-28d244b00276 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-94352d07-83e9-11e8-8fe2-28d244b00276 STEP: Updating configmap cm-test-opt-upd-94352d35-83e9-11e8-8fe2-28d244b00276 STEP: Creating configMap with name cm-test-opt-create-94352d46-83e9-11e8-8fe2-28d244b00276 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:34:15.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xmnxs" for this suite. 
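Earlier in this log, the weighted-router test drove curl in a retry loop inside its exec pod. That probe pattern (retry on 503, succeed on 200, fail on anything else) can be lifted into a standalone function; the host header and router IP shown are the values from the log and will differ per cluster:

```shell
# probe_route HOST URL: poll URL with the given Host header until the
# router returns 200; keep retrying on 503 (backends still warming up),
# fail immediately on any other status.
probe_route() {
    host=$1; url=$2
    for i in $(seq 1 100); do
        rc=0
        code=$(curl -s -o /dev/null -w '%{http_code}' \
            --header "Host: ${host}" "$url") || rc=$?
        if [ "$rc" -eq 0 ]; then
            echo "$code"
            [ "$code" -eq 200 ] && return 0    # route is serving
            [ "$code" -ne 503 ] && return 1    # unexpected status
        else
            echo "error ${rc}" 1>&2
        fi
        sleep 1
    done
    return 1
}

# with the values seen in the log:
# probe_route weighted.example.com http://10.2.2.30
```

Tolerating 503 matters because the router reloads HAProxy asynchronously after a route is created, so the first responses may arrive before any backend is wired up.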
Jul 9 19:34:37.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:34:41.022: INFO: namespace: e2e-tests-projected-xmnxs, resource: bindings, ignored listing per whitelist Jul 9 19:34:41.607: INFO: namespace e2e-tests-projected-xmnxs deletion completed in 25.91004017s • [SLOW TEST:96.738 seconds] [sig-storage] Projected /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34 optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ SS ------------------------------ [Feature:DeploymentConfig] deploymentconfigs keep the deployer pod invariant valid [Conformance] should deal with config change in case the deployment is still running [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1310 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:33:57.701: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:33:59.561: INFO: configPath is now 
"/tmp/e2e-test-cli-deployment-dcsvg-user.kubeconfig" Jul 9 19:33:59.561: INFO: The user is now "e2e-test-cli-deployment-dcsvg-user" Jul 9 19:33:59.561: INFO: Creating project "e2e-test-cli-deployment-dcsvg" Jul 9 19:33:59.696: INFO: Waiting on permissions in project "e2e-test-cli-deployment-dcsvg" ... [JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should deal with config change in case the deployment is still running [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1310 STEP: creating DC STEP: waiting for RC to be created STEP: waiting for deployer pod to be running STEP: redeploying immediately by config change [AfterEach] keep the deployer pod invariant valid [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1236 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:34:16.049: INFO: namespace : e2e-test-cli-deployment-dcsvg api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:34:54.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:56.427 
seconds] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37 keep the deployer pod invariant valid [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1233 should deal with config change in case the deployment is still running [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1310 ------------------------------ [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419 Jul 9 19:34:54.130: INFO: This plugin does not isolate namespaces by default. 
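The deployer-pod-invariant test above redeployed a running DeploymentConfig via a config change. Done by hand, the equivalent rollout is two oc commands (a sketch; `dc/example` matches the log's naming, but any dc works):

```shell
# redeploy_dc NAME: force a config-change rollout of dc/NAME and block
# until it completes or fails. Assumes `oc` is already logged in.
redeploy_dc() {
    dc=$1
    oc rollout latest "dc/${dc}"   # starts a new deployment immediately
    oc rollout status "dc/${dc}"   # waits for the deployer pod to finish
}
```

The test's point is the invariant around the deployer pod: issuing a second config change while the first deployer is still running must not leave two live deployer pods behind.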
[AfterEach] when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:34:54.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:34:54.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] [Area:Networking] services /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418 should prevent connections to pods in different namespaces on different nodes via service IPs [Suite:openshift/conformance/parallel] [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:44 Jul 9 19:34:54.130: This plugin does not isolate namespaces by default. 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296 ------------------------------ S ------------------------------ [Feature:DeploymentConfig] deploymentconfigs when changing image change trigger [Conformance] should successfully trigger from an updated image [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:389 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:33:42.003: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:33:44.510: INFO: configPath is now "/tmp/e2e-test-cli-deployment-ksfmz-user.kubeconfig" Jul 9 19:33:44.510: INFO: The user is now "e2e-test-cli-deployment-ksfmz-user" Jul 9 19:33:44.510: INFO: Creating project "e2e-test-cli-deployment-ksfmz" Jul 9 19:33:44.766: INFO: Waiting on permissions in project "e2e-test-cli-deployment-ksfmz" ... 
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43 [It] should successfully trigger from an updated image [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:389 STEP: tagging the busybox:latest as test:v1 image Jul 9 19:33:45.270: INFO: Running 'oc tag --config=/tmp/e2e-test-cli-deployment-ksfmz-user.kubeconfig --namespace=e2e-test-cli-deployment-ksfmz docker.io/busybox:latest test:v1' STEP: ensuring the deployment config latest version is 1 and rollout completed Jul 9 19:33:53.301: INFO: Latest rollout of dc/example (rc/example-1) is complete. STEP: updating the image change trigger to point to test:v2 image Jul 9 19:33:53.301: INFO: Running 'oc set --config=/tmp/e2e-test-cli-deployment-ksfmz-user.kubeconfig --namespace=e2e-test-cli-deployment-ksfmz triggers dc/example --remove-all' Jul 9 19:33:53.619: INFO: Running 'oc set --config=/tmp/e2e-test-cli-deployment-ksfmz-user.kubeconfig --namespace=e2e-test-cli-deployment-ksfmz triggers dc/example --from-image test:v2 --auto -c test' STEP: tagging the busybox:1.25 as test:v2 image Jul 9 19:33:54.087: INFO: Running 'oc tag --config=/tmp/e2e-test-cli-deployment-ksfmz-user.kubeconfig --namespace=e2e-test-cli-deployment-ksfmz docker.io/busybox:1.25 test:v2' STEP: ensuring the deployment config latest version is 2 and rollout completed Jul 9 19:34:10.075: INFO: Latest rollout of dc/example (rc/example-2) is complete. 
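The trigger-retargeting sequence the test just ran (tag v1, swap the trigger to test:v2, tag v2 to fire it) condenses to four oc commands. A sketch using the same commands the log shows; the namespace argument stands in for the generated e2e-test-cli-deployment-* project:

```shell
# retarget_trigger NS: re-point dc/example's image-change trigger at
# test:v2 and fire it by tagging a new image -- the same oc sequence
# the test executed above.
retarget_trigger() {
    ns=$1
    oc -n "$ns" tag docker.io/busybox:latest test:v1   # drives rollout 1
    oc -n "$ns" set triggers dc/example --remove-all
    oc -n "$ns" set triggers dc/example --from-image test:v2 --auto -c test
    oc -n "$ns" tag docker.io/busybox:1.25 test:v2     # fires the new trigger
}
```

Removing all triggers before adding the new one avoids the old test:v1 trigger racing the retargeted one; the test then asserts latestVersion reaches 2 and the rollout completes, as logged above.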
[AfterEach] when changing image change trigger [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:385 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62 [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:34:12.166: INFO: namespace : e2e-test-cli-deployment-ksfmz api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:34:54.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:72.248 seconds] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37 when changing image change trigger [Conformance] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:383 should successfully trigger from an updated image [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:389 ------------------------------ [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables should successfully resolve valueFrom in docker build environment variables [Suite:openshift/conformance/parallel] 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:83 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:34:38.783: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:34:40.390: INFO: configPath is now "/tmp/e2e-test-build-valuefrom-xmz29-user.kubeconfig" Jul 9 19:34:40.390: INFO: The user is now "e2e-test-build-valuefrom-xmz29-user" Jul 9 19:34:40.390: INFO: Creating project "e2e-test-build-valuefrom-xmz29" Jul 9 19:34:40.545: INFO: Waiting on permissions in project "e2e-test-build-valuefrom-xmz29" ... 
[BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:27 Jul 9 19:34:40.702: INFO: docker info output: Containers: 4 Running: 0 Paused: 0 Stopped: 4 Images: 4 Server Version: 1.13.1 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 20 Dirperm1 Supported: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1) runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f) init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574) Security Options: apparmor seccomp Profile: default Kernel Version: 4.4.0-128-generic Operating System: Ubuntu 16.04.4 LTS OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.495 GiB Name: yifan-coreos ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Username: yifan Registry: https://index.docker.io/v1/ Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false [JustBeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:38 STEP: waiting for builder service account STEP: waiting for openshift namespace imagestreams Jul 9 19:34:40.839: INFO: Running scan #0 Jul 9 19:34:40.839: INFO: Checking language ruby Jul 9 19:34:40.886: INFO: Checking tag 2.3 Jul 9 19:34:40.886: INFO: Checking tag 2.4 Jul 9 19:34:40.886: INFO: Checking tag 2.5 Jul 9 19:34:40.886: INFO: Checking tag latest Jul 9 19:34:40.886: INFO: Checking tag 2.0 Jul 9 19:34:40.886: INFO: Checking tag 2.2 Jul 9 19:34:40.886: INFO: Checking language nodejs Jul 9 19:34:40.931: INFO: Checking tag 6 Jul 9 
19:34:40.931: INFO: Checking tag 8 Jul 9 19:34:40.931: INFO: Checking tag latest Jul 9 19:34:40.931: INFO: Checking tag 0.10 Jul 9 19:34:40.931: INFO: Checking tag 4 Jul 9 19:34:40.931: INFO: Checking language perl Jul 9 19:34:40.971: INFO: Checking tag 5.16 Jul 9 19:34:40.971: INFO: Checking tag 5.20 Jul 9 19:34:40.971: INFO: Checking tag 5.24 Jul 9 19:34:40.971: INFO: Checking tag latest Jul 9 19:34:40.971: INFO: Checking language php Jul 9 19:34:41.034: INFO: Checking tag 5.6 Jul 9 19:34:41.034: INFO: Checking tag 7.0 Jul 9 19:34:41.034: INFO: Checking tag 7.1 Jul 9 19:34:41.034: INFO: Checking tag latest Jul 9 19:34:41.034: INFO: Checking tag 5.5 Jul 9 19:34:41.034: INFO: Checking language python Jul 9 19:34:41.079: INFO: Checking tag 2.7 Jul 9 19:34:41.079: INFO: Checking tag 3.3 Jul 9 19:34:41.079: INFO: Checking tag 3.4 Jul 9 19:34:41.079: INFO: Checking tag 3.5 Jul 9 19:34:41.079: INFO: Checking tag 3.6 Jul 9 19:34:41.079: INFO: Checking tag latest Jul 9 19:34:41.079: INFO: Checking language wildfly Jul 9 19:34:41.133: INFO: Checking tag 10.1 Jul 9 19:34:41.133: INFO: Checking tag 11.0 Jul 9 19:34:41.133: INFO: Checking tag 12.0 Jul 9 19:34:41.133: INFO: Checking tag 8.1 Jul 9 19:34:41.133: INFO: Checking tag 9.0 Jul 9 19:34:41.133: INFO: Checking tag latest Jul 9 19:34:41.133: INFO: Checking tag 10.0 Jul 9 19:34:41.133: INFO: Checking language mysql Jul 9 19:34:41.173: INFO: Checking tag 5.6 Jul 9 19:34:41.173: INFO: Checking tag 5.7 Jul 9 19:34:41.173: INFO: Checking tag latest Jul 9 19:34:41.173: INFO: Checking tag 5.5 Jul 9 19:34:41.173: INFO: Checking language postgresql Jul 9 19:34:41.239: INFO: Checking tag 9.2 Jul 9 19:34:41.239: INFO: Checking tag 9.4 Jul 9 19:34:41.239: INFO: Checking tag 9.5 Jul 9 19:34:41.239: INFO: Checking tag 9.6 Jul 9 19:34:41.239: INFO: Checking tag latest Jul 9 19:34:41.239: INFO: Checking language mongodb Jul 9 19:34:41.279: INFO: Checking tag 2.6 Jul 9 19:34:41.279: INFO: Checking tag 3.2 Jul 9 19:34:41.279: INFO: 
Checking tag 3.4 Jul 9 19:34:41.279: INFO: Checking tag latest Jul 9 19:34:41.279: INFO: Checking tag 2.4 Jul 9 19:34:41.279: INFO: Checking language jenkins Jul 9 19:34:41.335: INFO: Checking tag latest Jul 9 19:34:41.335: INFO: Checking tag 1 Jul 9 19:34:41.335: INFO: Checking tag 2 Jul 9 19:34:41.335: INFO: Success! STEP: creating test image stream Jul 9 19:34:41.335: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-xmz29-user.kubeconfig --namespace=e2e-test-build-valuefrom-xmz29 -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/test-is.json' imagestream.image.openshift.io "test" created STEP: creating test secret Jul 9 19:34:41.698: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-xmz29-user.kubeconfig --namespace=e2e-test-build-valuefrom-xmz29 -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/test-secret.yaml' secret "mysecret" created STEP: creating test configmap Jul 9 19:34:42.048: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-xmz29-user.kubeconfig --namespace=e2e-test-build-valuefrom-xmz29 -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/test-configmap.yaml' configmap "myconfigmap" created [It] should successfully resolve valueFrom in docker build environment variables [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:83 STEP: creating test successful build config Jul 9 19:34:42.364: INFO: Running 'oc create --config=/tmp/e2e-test-build-valuefrom-xmz29-user.kubeconfig --namespace=e2e-test-build-valuefrom-xmz29 -f /tmp/fixture-testdata-dir225659500/test/extended/testdata/builds/valuefrom/successful-docker-build-value-from-config.yaml' buildconfig.build.openshift.io "mydockertest" created STEP: starting test build Jul 9 19:34:42.642: INFO: Running 'oc start-build 
--config=/tmp/e2e-test-build-valuefrom-xmz29-user.kubeconfig --namespace=e2e-test-build-valuefrom-xmz29 mydockertest -o=name' Jul 9 19:34:42.936: INFO: start-build output with args [mydockertest -o=name]: Error> StdOut> build/mydockertest-1 StdErr> Jul 9 19:34:42.937: INFO: Waiting for mydockertest-1 to complete Jul 9 19:34:49.022: INFO: Done waiting for mydockertest-1: util.BuildResult{BuildPath:"build/mydockertest-1", BuildName:"mydockertest-1", StartBuildStdErr:"", StartBuildStdOut:"build/mydockertest-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc4216e2300), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42096a1e0)} with error: Jul 9 19:34:49.022: INFO: Running 'oc logs --config=/tmp/e2e-test-build-valuefrom-xmz29-user.kubeconfig --namespace=e2e-test-build-valuefrom-xmz29 -f build/mydockertest-1 --timestamps' [AfterEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:31 Jul 9 19:34:49.341: INFO: Dumping pod state for namespace e2e-test-build-valuefrom-xmz29 Jul 9 19:34:49.341: INFO: Running 'oc get --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-build-valuefrom-xmz29 pods -o yaml' Jul 9 19:34:49.635: INFO: apiVersion: v1 items: - apiVersion: v1 kind: Pod metadata: annotations: openshift.io/build.name: mydockertest-1 openshift.io/scc: privileged creationTimestamp: 2018-07-10T02:34:43Z labels: openshift.io/build.name: mydockertest-1 name: mydockertest-1-build namespace: e2e-test-build-valuefrom-xmz29 ownerReferences: - apiVersion: build.openshift.io/v1 controller: true kind: Build name: mydockertest-1 uid: cd40e02b-83e9-11e8-aa51-0af96768d57e resourceVersion: "91630" selfLink: /api/v1/namespaces/e2e-test-build-valuefrom-xmz29/pods/mydockertest-1-build uid: 
cd5d5fbb-83e9-11e8-84c6-0af96768d57e spec: containers: - args: - --loglevel=5 command: - openshift-docker-build env: - name: BUILD value: | {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"mydockertest-1","namespace":"e2e-test-build-valuefrom-xmz29","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-valuefrom-xmz29/builds/mydockertest-1","uid":"cd40e02b-83e9-11e8-aa51-0af96768d57e","resourceVersion":"91556","creationTimestamp":"2018-07-10T02:34:42Z","labels":{"buildconfig":"mydockertest","name":"test","openshift.io/build-config.name":"mydockertest","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"mydockertest","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"mydockertest","uid":"cd106d06-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6"},"strategy":{"type":"Docker","dockerStrategy":{"env":[{"name":"BUILD_LOGLEVEL","value":"5"},{"name":"FIELDREF_ENV","value":"mydockertest-1"},{"name":"CONFIGMAPKEYREF_ENV","value":"myvalue"},{"name":"SECRETKEYREF_ENV","value":"developer"},{"name":"FIELDREF_CLONE_ENV","value":"mydockertest-1"},{"name":"FIELDREF_CLONE_CLONE_ENV","value":"mydockertest-1"},{"name":"UNAVAILABLE_ENV","value":"$(SOME_OTHER_ENV)"},{"name":"ESCAPED_ENV","value":"$(MY_ESCAPED_VALUE)"}]}},"output":{"to":{"kind":"DockerImage","name":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest"},"pushSecret":{"name":"builder-dockercfg-mzjph"},"imageLabels":[{"name":"user-specified-label","value":"arbitrary-value"}]},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually 
triggered"}]},"status":{"phase":"New","outputDockerImageReference":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-valuefrom-xmz29","name":"mydockertest"},"output":{}}} - name: BUILD_LOGLEVEL value: "5" - name: PUSH_DOCKERCFG_PATH value: /var/run/secrets/openshift.io/push image: openshift/origin-docker-builder:latest imagePullPolicy: IfNotPresent name: docker-build resources: {} securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /tmp/build name: buildworkdir - mountPath: /var/run/docker.sock name: docker-socket - mountPath: /var/run/crio/crio.sock name: crio-socket - mountPath: /var/run/secrets/openshift.io/push name: builder-dockercfg-mzjph-push readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: builder-token-52sjc readOnly: true dnsPolicy: ClusterFirst imagePullSecrets: - name: builder-dockercfg-mzjph initContainers: - args: - --loglevel=5 command: - openshift-manage-dockerfile env: - name: BUILD value: | {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"mydockertest-1","namespace":"e2e-test-build-valuefrom-xmz29","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-valuefrom-xmz29/builds/mydockertest-1","uid":"cd40e02b-83e9-11e8-aa51-0af96768d57e","resourceVersion":"91556","creationTimestamp":"2018-07-10T02:34:42Z","labels":{"buildconfig":"mydockertest","name":"test","openshift.io/build-config.name":"mydockertest","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"mydockertest","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"mydockertest","uid":"cd106d06-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM 
busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6"},"strategy":{"type":"Docker","dockerStrategy":{"env":[{"name":"BUILD_LOGLEVEL","value":"5"},{"name":"FIELDREF_ENV","value":"mydockertest-1"},{"name":"CONFIGMAPKEYREF_ENV","value":"myvalue"},{"name":"SECRETKEYREF_ENV","value":"developer"},{"name":"FIELDREF_CLONE_ENV","value":"mydockertest-1"},{"name":"FIELDREF_CLONE_CLONE_ENV","value":"mydockertest-1"},{"name":"UNAVAILABLE_ENV","value":"$(SOME_OTHER_ENV)"},{"name":"ESCAPED_ENV","value":"$(MY_ESCAPED_VALUE)"}]}},"output":{"to":{"kind":"DockerImage","name":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest"},"pushSecret":{"name":"builder-dockercfg-mzjph"},"imageLabels":[{"name":"user-specified-label","value":"arbitrary-value"}]},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","outputDockerImageReference":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-valuefrom-xmz29","name":"mydockertest"},"output":{}}} - name: BUILD_LOGLEVEL value: "5" image: openshift/origin-docker-builder:latest imagePullPolicy: IfNotPresent name: manage-dockerfile resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /tmp/build name: buildworkdir - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: builder-token-52sjc readOnly: true nodeName: ip-10-0-130-54.us-west-2.compute.internal restartPolicy: Never schedulerName: default-scheduler securityContext: {} serviceAccount: builder serviceAccountName: builder terminationGracePeriodSeconds: 30 volumes: - emptyDir: {} name: buildworkdir - hostPath: path: /var/run/docker.sock type: "" name: docker-socket - hostPath: path: /var/run/crio/crio.sock type: "" name: crio-socket - name: builder-dockercfg-mzjph-push secret: defaultMode: 384 
secretName: builder-dockercfg-mzjph - name: builder-token-52sjc secret: defaultMode: 420 secretName: builder-token-52sjc status: conditions: - lastProbeTime: null lastTransitionTime: 2018-07-10T02:34:45Z reason: PodCompleted status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: 2018-07-10T02:34:47Z reason: PodCompleted status: "False" type: Ready - lastProbeTime: null lastTransitionTime: null reason: PodCompleted status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: 2018-07-10T02:34:43Z status: "True" type: PodScheduled containerStatuses: - containerID: docker://4a03154020f97181e4de82a772844866de7e5df4c0ff0fb1acf9299b43588405 image: openshift/origin-docker-builder:latest imageID: docker-pullable://openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 lastState: {} name: docker-build ready: false restartCount: 0 state: terminated: containerID: docker://4a03154020f97181e4de82a772844866de7e5df4c0ff0fb1acf9299b43588405 exitCode: 0 finishedAt: 2018-07-10T02:34:46Z reason: Completed startedAt: 2018-07-10T02:34:45Z hostIP: 10.0.130.54 initContainerStatuses: - containerID: docker://67747d4b2c4d30839d83c3843ea50a93807c8c1434d99406c2205b918f5e62fc image: openshift/origin-docker-builder:latest imageID: docker-pullable://openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 lastState: {} name: manage-dockerfile ready: true restartCount: 0 state: terminated: containerID: docker://67747d4b2c4d30839d83c3843ea50a93807c8c1434d99406c2205b918f5e62fc exitCode: 0 finishedAt: 2018-07-10T02:34:44Z reason: Completed startedAt: 2018-07-10T02:34:43Z phase: Succeeded podIP: 10.2.2.48 qosClass: BestEffort startTime: 2018-07-10T02:34:43Z kind: List metadata: resourceVersion: "" selfLink: "" Jul 9 19:34:49.672: INFO: Running 'oc describe --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig 
--namespace=e2e-test-build-valuefrom-xmz29 pod/mydockertest-1-build' Jul 9 19:34:49.972: INFO: Describing pod "mydockertest-1-build" Name: mydockertest-1-build Namespace: e2e-test-build-valuefrom-xmz29 Node: ip-10-0-130-54.us-west-2.compute.internal/10.0.130.54 Start Time: Mon, 09 Jul 2018 19:34:43 -0700 Labels: openshift.io/build.name=mydockertest-1 Annotations: openshift.io/build.name=mydockertest-1 openshift.io/scc=privileged Status: Succeeded IP: 10.2.2.48 Controlled By: Build/mydockertest-1 Init Containers: manage-dockerfile: Container ID: docker://67747d4b2c4d30839d83c3843ea50a93807c8c1434d99406c2205b918f5e62fc Image: openshift/origin-docker-builder:latest Image ID: docker-pullable://openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 Port: Host Port: Command: openshift-manage-dockerfile Args: --loglevel=5 State: Terminated Reason: Completed Exit Code: 0 Started: Mon, 09 Jul 2018 19:34:43 -0700 Finished: Mon, 09 Jul 2018 19:34:44 -0700 Ready: True Restart Count: 0 Environment: BUILD: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"mydockertest-1","namespace":"e2e-test-build-valuefrom-xmz29","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-valuefrom-xmz29/builds/mydockertest-1","uid":"cd40e02b-83e9-11e8-aa51-0af96768d57e","resourceVersion":"91556","creationTimestamp":"2018-07-10T02:34:42Z","labels":{"buildconfig":"mydockertest","name":"test","openshift.io/build-config.name":"mydockertest","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"mydockertest","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"mydockertest","uid":"cd106d06-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM 
busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6"},"strategy":{"type":"Docker","dockerStrategy":{"env":[{"name":"BUILD_LOGLEVEL","value":"5"},{"name":"FIELDREF_ENV","value":"mydockertest-1"},{"name":"CONFIGMAPKEYREF_ENV","value":"myvalue"},{"name":"SECRETKEYREF_ENV","value":"developer"},{"name":"FIELDREF_CLONE_ENV","value":"mydockertest-1"},{"name":"FIELDREF_CLONE_CLONE_ENV","value":"mydockertest-1"},{"name":"UNAVAILABLE_ENV","value":"$(SOME_OTHER_ENV)"},{"name":"ESCAPED_ENV","value":"$(MY_ESCAPED_VALUE)"}]}},"output":{"to":{"kind":"DockerImage","name":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest"},"pushSecret":{"name":"builder-dockercfg-mzjph"},"imageLabels":[{"name":"user-specified-label","value":"arbitrary-value"}]},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","outputDockerImageReference":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-valuefrom-xmz29","name":"mydockertest"},"output":{}}} BUILD_LOGLEVEL: 5 Mounts: /tmp/build from buildworkdir (rw) /var/run/secrets/kubernetes.io/serviceaccount from builder-token-52sjc (ro) Containers: docker-build: Container ID: docker://4a03154020f97181e4de82a772844866de7e5df4c0ff0fb1acf9299b43588405 Image: openshift/origin-docker-builder:latest Image ID: docker-pullable://openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 Port: Host Port: Command: openshift-docker-build Args: --loglevel=5 State: Terminated Reason: Completed Exit Code: 0 Started: Mon, 09 Jul 2018 19:34:45 -0700 Finished: Mon, 09 Jul 2018 19:34:46 -0700 Ready: False Restart Count: 0 Environment: BUILD: 
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"mydockertest-1","namespace":"e2e-test-build-valuefrom-xmz29","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-valuefrom-xmz29/builds/mydockertest-1","uid":"cd40e02b-83e9-11e8-aa51-0af96768d57e","resourceVersion":"91556","creationTimestamp":"2018-07-10T02:34:42Z","labels":{"buildconfig":"mydockertest","name":"test","openshift.io/build-config.name":"mydockertest","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"mydockertest","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"mydockertest","uid":"cd106d06-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6"},"strategy":{"type":"Docker","dockerStrategy":{"env":[{"name":"BUILD_LOGLEVEL","value":"5"},{"name":"FIELDREF_ENV","value":"mydockertest-1"},{"name":"CONFIGMAPKEYREF_ENV","value":"myvalue"},{"name":"SECRETKEYREF_ENV","value":"developer"},{"name":"FIELDREF_CLONE_ENV","value":"mydockertest-1"},{"name":"FIELDREF_CLONE_CLONE_ENV","value":"mydockertest-1"},{"name":"UNAVAILABLE_ENV","value":"$(SOME_OTHER_ENV)"},{"name":"ESCAPED_ENV","value":"$(MY_ESCAPED_VALUE)"}]}},"output":{"to":{"kind":"DockerImage","name":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest"},"pushSecret":{"name":"builder-dockercfg-mzjph"},"imageLabels":[{"name":"user-specified-label","value":"arbitrary-value"}]},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","outputDockerImageReference":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-valuefrom-xmz29","name":"mydockertest"},"output":{}}} BUILD_LOGLEVEL: 5 
PUSH_DOCKERCFG_PATH: /var/run/secrets/openshift.io/push
Mounts:
  /tmp/build from buildworkdir (rw)
  /var/run/crio/crio.sock from crio-socket (rw)
  /var/run/docker.sock from docker-socket (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from builder-token-52sjc (ro)
  /var/run/secrets/openshift.io/push from builder-dockercfg-mzjph-push (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  buildworkdir:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  docker-socket:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:
  crio-socket:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/crio/crio.sock
    HostPathType:
  builder-dockercfg-mzjph-push:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-dockercfg-mzjph
    Optional:    false
  builder-token-52sjc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-token-52sjc
    Optional:    false
QoS Class: BestEffort
Node-Selectors:
Tolerations:
Events:
  Type    Reason     Age  From                                                Message
  ----    ------     ---- ----                                                -------
  Normal  Scheduled  6s   default-scheduler                                   Successfully assigned e2e-test-build-valuefrom-xmz29/mydockertest-1-build to ip-10-0-130-54.us-west-2.compute.internal
  Normal  Pulled     6s   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Container image "openshift/origin-docker-builder:latest" already present on machine
  Normal  Created    6s   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Created container
  Normal  Started    6s   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Started container
  Normal  Pulled     4s   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Container image "openshift/origin-docker-builder:latest" already present on machine
  Normal  Created    4s   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Created container
  Normal  Started    4s   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Started container
Jul 9 19:34:49.972: INFO: Running 'oc logs
--config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-build-valuefrom-xmz29 pod/mydockertest-1-build -c manage-dockerfile -n e2e-test-build-valuefrom-xmz29' Jul 9 19:34:50.333: INFO: Log for pod "mydockertest-1-build"/"manage-dockerfile" ----> I0710 02:34:44.067088 1 builder.go:82] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"mydockertest-1","namespace":"e2e-test-build-valuefrom-xmz29","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-valuefrom-xmz29/builds/mydockertest-1","uid":"cd40e02b-83e9-11e8-aa51-0af96768d57e","resourceVersion":"91556","creationTimestamp":"2018-07-10T02:34:42Z","labels":{"buildconfig":"mydockertest","name":"test","openshift.io/build-config.name":"mydockertest","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"mydockertest","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"mydockertest","uid":"cd106d06-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM 
busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6"},"strategy":{"type":"Docker","dockerStrategy":{"env":[{"name":"BUILD_LOGLEVEL","value":"5"},{"name":"FIELDREF_ENV","value":"mydockertest-1"},{"name":"CONFIGMAPKEYREF_ENV","value":"myvalue"},{"name":"SECRETKEYREF_ENV","value":"developer"},{"name":"FIELDREF_CLONE_ENV","value":"mydockertest-1"},{"name":"FIELDREF_CLONE_CLONE_ENV","value":"mydockertest-1"},{"name":"UNAVAILABLE_ENV","value":"$(SOME_OTHER_ENV)"},{"name":"ESCAPED_ENV","value":"$(MY_ESCAPED_VALUE)"}]}},"output":{"to":{"kind":"DockerImage","name":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest"},"pushSecret":{"name":"builder-dockercfg-mzjph"},"imageLabels":[{"name":"user-specified-label","value":"arbitrary-value"}]},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","outputDockerImageReference":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-valuefrom-xmz29","name":"mydockertest"},"output":{}}} I0710 02:34:44.067863 1 builder.go:289] Checking for presence of a Dockerfile I0710 02:34:44.068308 1 source.go:123] Replacing dockerfile FROM busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6 with: FROM busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6 ENV "BUILD_LOGLEVEL"="5" "FIELDREF_ENV"="mydockertest-1" "CONFIGMAPKEYREF_ENV"="myvalue" "SECRETKEYREF_ENV"="developer" "FIELDREF_CLONE_ENV"="mydockertest-1" "FIELDREF_CLONE_CLONE_ENV"="mydockertest-1" "UNAVAILABLE_ENV"="$(SOME_OTHER_ENV)" "ESCAPED_ENV"="$(MY_ESCAPED_VALUE)" ENV "OPENSHIFT_BUILD_NAME"="mydockertest-1" "OPENSHIFT_BUILD_NAMESPACE"="e2e-test-build-valuefrom-xmz29" LABEL "io.openshift.build.name"="mydockertest-1" "io.openshift.build.namespace"="e2e-test-build-valuefrom-xmz29" "user-specified-label"="arbitrary-value" <----end of 
log for "mydockertest-1-build"/"manage-dockerfile" Jul 9 19:34:50.333: INFO: Running 'oc logs --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-build-valuefrom-xmz29 pod/mydockertest-1-build -c docker-build -n e2e-test-build-valuefrom-xmz29' Jul 9 19:34:50.657: INFO: Log for pod "mydockertest-1-build"/"docker-build" ----> I0710 02:34:45.619095 1 builder.go:82] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"mydockertest-1","namespace":"e2e-test-build-valuefrom-xmz29","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-valuefrom-xmz29/builds/mydockertest-1","uid":"cd40e02b-83e9-11e8-aa51-0af96768d57e","resourceVersion":"91556","creationTimestamp":"2018-07-10T02:34:42Z","labels":{"buildconfig":"mydockertest","name":"test","openshift.io/build-config.name":"mydockertest","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"mydockertest","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"mydockertest","uid":"cd106d06-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM 
busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6"},"strategy":{"type":"Docker","dockerStrategy":{"env":[{"name":"BUILD_LOGLEVEL","value":"5"},{"name":"FIELDREF_ENV","value":"mydockertest-1"},{"name":"CONFIGMAPKEYREF_ENV","value":"myvalue"},{"name":"SECRETKEYREF_ENV","value":"developer"},{"name":"FIELDREF_CLONE_ENV","value":"mydockertest-1"},{"name":"FIELDREF_CLONE_CLONE_ENV","value":"mydockertest-1"},{"name":"UNAVAILABLE_ENV","value":"$(SOME_OTHER_ENV)"},{"name":"ESCAPED_ENV","value":"$(MY_ESCAPED_VALUE)"}]}},"output":{"to":{"kind":"DockerImage","name":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest"},"pushSecret":{"name":"builder-dockercfg-mzjph"},"imageLabels":[{"name":"user-specified-label","value":"arbitrary-value"}]},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","outputDockerImageReference":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-valuefrom-xmz29","name":"mydockertest"},"output":{}}} I0710 02:34:45.619770 1 util_linux.go:96] found cgroup parent /kubepods/besteffort/podcd5d5fbb-83e9-11e8-84c6-0af96768d57e I0710 02:34:45.619789 1 builder.go:223] Running build with cgroup limits: api.CGroupLimits{MemoryLimitBytes:92233720368547, CPUShares:0, CPUPeriod:0, CPUQuota:0, MemorySwap:92233720368547, Parent:"/kubepods/besteffort/podcd5d5fbb-83e9-11e8-84c6-0af96768d57e"} I0710 02:34:45.619824 1 builder.go:240] Starting Docker build from build config mydockertest-1 ... I0710 02:34:45.621777 1 docker.go:347] container type= I0710 02:34:45.621891 1 docker.go:385] Invoking Docker build to create "temp.builder.openshift.io/e2e-test-build-valuefrom-xmz29/mydockertest-1:4b5ecb73" I0710 02:34:45.622116 1 tar.go:217] Adding "/tmp/build/inputs" to tar ... 
I0710 02:34:45.622340 1 tar.go:312] Adding to tar: /tmp/build/inputs/Dockerfile as Dockerfile
Step 1/4 : FROM busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6
 ---> 2b8fd9751c4c
Step 2/4 : ENV "BUILD_LOGLEVEL"="5" "FIELDREF_ENV"="mydockertest-1" "CONFIGMAPKEYREF_ENV"="myvalue" "SECRETKEYREF_ENV"="developer" "FIELDREF_CLONE_ENV"="mydockertest-1" "FIELDREF_CLONE_CLONE_ENV"="mydockertest-1" "UNAVAILABLE_ENV"="$(SOME_OTHER_ENV)" "ESCAPED_ENV"="$(MY_ESCAPED_VALUE)"
 ---> Using cache
 ---> 9036ced22b84
Step 3/4 : ENV "OPENSHIFT_BUILD_NAME"="mydockertest-1" "OPENSHIFT_BUILD_NAMESPACE"="e2e-test-build-valuefrom-xmz29"
 ---> Running in 0decbefaeb4b
Removing intermediate container 0decbefaeb4b
 ---> e5d04e1ec1ac
Step 4/4 : LABEL "io.openshift.build.name"="mydockertest-1" "io.openshift.build.namespace"="e2e-test-build-valuefrom-xmz29" "user-specified-label"="arbitrary-value"
 ---> Running in 45a776a76cc0
Removing intermediate container 45a776a76cc0
 ---> 779b6074589e
Successfully built 779b6074589e
Successfully tagged temp.builder.openshift.io/e2e-test-build-valuefrom-xmz29/mydockertest-1:4b5ecb73
I0710 02:34:45.939085 1 cfg.go:39] Locating docker auth for image docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest and type PUSH_DOCKERCFG_PATH
I0710 02:34:45.939113 1 cfg.go:49] Getting docker auth in paths : [/var/run/secrets/openshift.io/push]
I0710 02:34:45.939139 1 config.go:131] looking for config.json at /var/run/secrets/openshift.io/push/config.json
I0710 02:34:45.939188 1 config.go:101] looking for .dockercfg at /var/run/secrets/openshift.io/push/.dockercfg
I0710 02:34:45.939479 1 config.go:112] found .dockercfg at /var/run/secrets/openshift.io/push/.dockercfg
I0710 02:34:45.939526 1 cfg.go:62] Using serviceaccount user for Docker authentication for image docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest
I0710 02:34:45.939550 1 builder.go:240] Authenticating Docker push with user "serviceaccount"
Pushing image docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest ...
The push refers to repository [docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test]
Preparing
Pushing [=> ] 33.79kB/1.093MB Pushing
Pushing [==================================================>] 1.201MB Pushing
Pushing [==================================================>] 1.293MB Pushing
Pushed
latest: digest: sha256:3ac406cd64bb7e42448796d24476728453cce0ac0c93a3acf6ff61a56e94018c size: 527
Push successful
<----end of log for "mydockertest-1-build"/"docker-build"
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:34:50.814: INFO: namespace : e2e-test-build-valuefrom-xmz29 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Dumping a list of prepulled images on each node...
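The ENV step in the build log above shows how the builder resolved `valueFrom` references before invoking the Docker build: resolvable references were substituted (`FIELDREF_ENV` became `mydockertest-1`, `CONFIGMAPKEYREF_ENV` became `myvalue`), a reference to an undefined variable was left literal (`UNAVAILABLE_ENV` stayed `$(SOME_OTHER_ENV)`), and an escaped reference (presumably written `$$(MY_ESCAPED_VALUE)` in the BuildConfig) came through as a literal `$(MY_ESCAPED_VALUE)`. A minimal sketch of that Kubernetes-style `$(VAR)` expansion rule, as an illustration only and not the actual origin/Kubernetes implementation:

```python
def expand(value, env):
    """Simplified Kubernetes-style $(VAR) expansion:
    - $(NAME) becomes env[NAME] when NAME is defined;
    - an unresolvable $(NAME) is left literal;
    - $$(NAME) escapes to a literal $(NAME).
    Sketch only: malformed input such as an unclosed "$(" is not handled."""
    out, i = [], 0
    while i < len(value):
        if value.startswith("$$(", i):
            j = value.index(")", i)
            out.append(value[i + 1:j + 1])  # drop one '$', keep "$(NAME)"
            i = j + 1
        elif value.startswith("$(", i):
            j = value.index(")", i)
            name = value[i + 2:j]
            out.append(env.get(name, value[i:j + 1]))  # left literal if undefined
            i = j + 1
        else:
            out.append(value[i])
            i += 1
    return "".join(out)


env = {"FIELDREF_ENV": "mydockertest-1", "CONFIGMAPKEYREF_ENV": "myvalue"}
print(expand("$(FIELDREF_ENV)", env))       # mydockertest-1
print(expand("$(SOME_OTHER_ENV)", env))     # $(SOME_OTHER_ENV)  (left literal)
print(expand("$$(MY_ESCAPED_VALUE)", env))  # $(MY_ESCAPED_VALUE) (escape removed)
```

This matches the three behaviors visible in the generated Dockerfile's ENV line.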
Jul 9 19:34:56.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• Failure [18.139 seconds]
[Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:13
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:26
    should successfully resolve valueFrom in docker build environment variables [Suite:openshift/conformance/parallel] [It]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:83

    Expected :
2018-07-10T02:34:44.06734938Z I0710 02:34:44.067088 1 builder.go:82] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"mydockertest-1","namespace":"e2e-test-build-valuefrom-xmz29","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-valuefrom-xmz29/builds/mydockertest-1","uid":"cd40e02b-83e9-11e8-aa51-0af96768d57e","resourceVersion":"91556","creationTimestamp":"2018-07-10T02:34:42Z","labels":{"buildconfig":"mydockertest","name":"test","openshift.io/build-config.name":"mydockertest","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"mydockertest","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"mydockertest","uid":"cd106d06-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM
busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6"},"strategy":{"type":"Docker","dockerStrategy":{"env":[{"name":"BUILD_LOGLEVEL","value":"5"},{"name":"FIELDREF_ENV","value":"mydockertest-1"},{"name":"CONFIGMAPKEYREF_ENV","value":"myvalue"},{"name":"SECRETKEYREF_ENV","value":"developer"},{"name":"FIELDREF_CLONE_ENV","value":"mydockertest-1"},{"name":"FIELDREF_CLONE_CLONE_ENV","value":"mydockertest-1"},{"name":"UNAVAILABLE_ENV","value":"$(SOME_OTHER_ENV)"},{"name":"ESCAPED_ENV","value":"$(MY_ESCAPED_VALUE)"}]}},"output":{"to":{"kind":"DockerImage","name":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest"},"pushSecret":{"name":"builder-dockercfg-mzjph"},"imageLabels":[{"name":"user-specified-label","value":"arbitrary-value"}]},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","outputDockerImageReference":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-valuefrom-xmz29","name":"mydockertest"},"output":{}}} 2018-07-10T02:34:44.067946826Z I0710 02:34:44.067863 1 builder.go:289] Checking for presence of a Dockerfile 2018-07-10T02:34:44.068404971Z I0710 02:34:44.068308 1 source.go:123] Replacing dockerfile 2018-07-10T02:34:44.06841838Z FROM busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6 2018-07-10T02:34:44.068424807Z with: 2018-07-10T02:34:44.068429893Z FROM busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6 2018-07-10T02:34:44.068435307Z ENV "BUILD_LOGLEVEL"="5" "FIELDREF_ENV"="mydockertest-1" "CONFIGMAPKEYREF_ENV"="myvalue" "SECRETKEYREF_ENV"="developer" "FIELDREF_CLONE_ENV"="mydockertest-1" "FIELDREF_CLONE_CLONE_ENV"="mydockertest-1" "UNAVAILABLE_ENV"="$(SOME_OTHER_ENV)" "ESCAPED_ENV"="$(MY_ESCAPED_VALUE)" 2018-07-10T02:34:44.068442355Z ENV "OPENSHIFT_BUILD_NAME"="mydockertest-1" 
"OPENSHIFT_BUILD_NAMESPACE"="e2e-test-build-valuefrom-xmz29" 2018-07-10T02:34:44.068448015Z LABEL "io.openshift.build.name"="mydockertest-1" "io.openshift.build.namespace"="e2e-test-build-valuefrom-xmz29" "user-specified-label"="arbitrary-value" 2018-07-10T02:34:45.61932637Z I0710 02:34:45.619095 1 builder.go:82] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"mydockertest-1","namespace":"e2e-test-build-valuefrom-xmz29","selfLink":"/apis/build.openshift.io/v1/namespaces/e2e-test-build-valuefrom-xmz29/builds/mydockertest-1","uid":"cd40e02b-83e9-11e8-aa51-0af96768d57e","resourceVersion":"91556","creationTimestamp":"2018-07-10T02:34:42Z","labels":{"buildconfig":"mydockertest","name":"test","openshift.io/build-config.name":"mydockertest","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"mydockertest","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"mydockertest","uid":"cd106d06-83e9-11e8-aa51-0af96768d57e","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM 
busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6"},"strategy":{"type":"Docker","dockerStrategy":{"env":[{"name":"BUILD_LOGLEVEL","value":"5"},{"name":"FIELDREF_ENV","value":"mydockertest-1"},{"name":"CONFIGMAPKEYREF_ENV","value":"myvalue"},{"name":"SECRETKEYREF_ENV","value":"developer"},{"name":"FIELDREF_CLONE_ENV","value":"mydockertest-1"},{"name":"FIELDREF_CLONE_CLONE_ENV","value":"mydockertest-1"},{"name":"UNAVAILABLE_ENV","value":"$(SOME_OTHER_ENV)"},{"name":"ESCAPED_ENV","value":"$(MY_ESCAPED_VALUE)"}]}},"output":{"to":{"kind":"DockerImage","name":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest"},"pushSecret":{"name":"builder-dockercfg-mzjph"},"imageLabels":[{"name":"user-specified-label","value":"arbitrary-value"}]},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","outputDockerImageReference":"docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-valuefrom-xmz29","name":"mydockertest"},"output":{}}} 2018-07-10T02:34:45.619852908Z I0710 02:34:45.619770 1 util_linux.go:96] found cgroup parent /kubepods/besteffort/podcd5d5fbb-83e9-11e8-84c6-0af96768d57e 2018-07-10T02:34:45.619904166Z I0710 02:34:45.619789 1 builder.go:223] Running build with cgroup limits: api.CGroupLimits{MemoryLimitBytes:92233720368547, CPUShares:0, CPUPeriod:0, CPUQuota:0, MemorySwap:92233720368547, Parent:"/kubepods/besteffort/podcd5d5fbb-83e9-11e8-84c6-0af96768d57e"} 2018-07-10T02:34:45.619915872Z I0710 02:34:45.619824 1 builder.go:240] Starting Docker build from build config mydockertest-1 ... 
2018-07-10T02:34:45.621863025Z I0710 02:34:45.621777 1 docker.go:347] container type=
2018-07-10T02:34:45.621946028Z I0710 02:34:45.621891 1 docker.go:385] Invoking Docker build to create "temp.builder.openshift.io/e2e-test-build-valuefrom-xmz29/mydockertest-1:4b5ecb73"
2018-07-10T02:34:45.622190648Z I0710 02:34:45.622116 1 tar.go:217] Adding "/tmp/build/inputs" to tar ...
2018-07-10T02:34:45.622415209Z I0710 02:34:45.622340 1 tar.go:312] Adding to tar: /tmp/build/inputs/Dockerfile as Dockerfile
2018-07-10T02:34:45.671894852Z Step 1/4 : FROM busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6
2018-07-10T02:34:45.672257698Z ---> 2b8fd9751c4c
2018-07-10T02:34:45.672272467Z Step 2/4 : ENV "BUILD_LOGLEVEL"="5" "FIELDREF_ENV"="mydockertest-1" "CONFIGMAPKEYREF_ENV"="myvalue" "SECRETKEYREF_ENV"="developer" "FIELDREF_CLONE_ENV"="mydockertest-1" "FIELDREF_CLONE_CLONE_ENV"="mydockertest-1" "UNAVAILABLE_ENV"="$(SOME_OTHER_ENV)" "ESCAPED_ENV"="$(MY_ESCAPED_VALUE)"
2018-07-10T02:34:45.672605483Z ---> Using cache
2018-07-10T02:34:45.672619895Z ---> 9036ced22b84
2018-07-10T02:34:45.672626582Z Step 3/4 : ENV "OPENSHIFT_BUILD_NAME"="mydockertest-1" "OPENSHIFT_BUILD_NAMESPACE"="e2e-test-build-valuefrom-xmz29"
2018-07-10T02:34:45.704705004Z ---> Running in 0decbefaeb4b
2018-07-10T02:34:45.799309198Z Removing intermediate container 0decbefaeb4b
2018-07-10T02:34:45.799330184Z ---> e5d04e1ec1ac
2018-07-10T02:34:45.799337459Z Step 4/4 : LABEL "io.openshift.build.name"="mydockertest-1" "io.openshift.build.namespace"="e2e-test-build-valuefrom-xmz29" "user-specified-label"="arbitrary-value"
2018-07-10T02:34:45.828483225Z ---> Running in 45a776a76cc0
2018-07-10T02:34:45.922837448Z Removing intermediate container 45a776a76cc0
2018-07-10T02:34:45.922858212Z ---> 779b6074589e
2018-07-10T02:34:45.922865154Z Successfully built 779b6074589e
2018-07-10T02:34:45.92917074Z Successfully tagged temp.builder.openshift.io/e2e-test-build-valuefrom-xmz29/mydockertest-1:4b5ecb73
2018-07-10T02:34:45.939805188Z I0710 02:34:45.939085 1 cfg.go:39] Locating docker auth for image docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest and type PUSH_DOCKERCFG_PATH
2018-07-10T02:34:45.939823294Z I0710 02:34:45.939113 1 cfg.go:49] Getting docker auth in paths : [/var/run/secrets/openshift.io/push]
2018-07-10T02:34:45.939830995Z I0710 02:34:45.939139 1 config.go:131] looking for config.json at /var/run/secrets/openshift.io/push/config.json
2018-07-10T02:34:45.939837146Z I0710 02:34:45.939188 1 config.go:101] looking for .dockercfg at /var/run/secrets/openshift.io/push/.dockercfg
2018-07-10T02:34:45.939843135Z I0710 02:34:45.939479 1 config.go:112] found .dockercfg at /var/run/secrets/openshift.io/push/.dockercfg
2018-07-10T02:34:45.939849251Z I0710 02:34:45.939526 1 cfg.go:62] Using serviceaccount user for Docker authentication for image docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest
2018-07-10T02:34:45.939855579Z I0710 02:34:45.939550 1 builder.go:240] Authenticating Docker push with user "serviceaccount"
2018-07-10T02:34:45.939861748Z
2018-07-10T02:34:45.939867124Z Pushing image docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test:latest ...
2018-07-10T02:34:46.040198918Z The push refers to repository [docker-registry.default.svc:5000/e2e-test-build-valuefrom-xmz29/test]
2018-07-10T02:34:46.057588214Z Preparing
2018-07-10T02:34:46.252185588Z Pushing [=> ] 33.79kB/1.093MB Pushing
2018-07-10T02:34:46.348626442Z Pushing [==================================================>] 1.201MB Pushing
2018-07-10T02:34:46.363004278Z Pushing [==================================================>] 1.293MB Pushing
2018-07-10T02:34:46.402952318Z Pushed
2018-07-10T02:34:46.530940021Z latest: digest: sha256:3ac406cd64bb7e42448796d24476728453cce0ac0c93a3acf6ff61a56e94018c size: 527
2018-07-10T02:34:46.556492273Z Push successful
to contain substring : "FIELDREF_ENV" "mydockertest-1"
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:95
------------------------------
[sig-storage] Projected should be consumable from pods in volume with mappings and Item mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:34:41.610: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:34:43.323: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-4xbdz
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with mappings and Item mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name projected-configmap-test-volume-map-cdf1f8e7-83e9-11e8-8fe2-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:34:44.136: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cdf718ff-83e9-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-4xbdz" to be "success or failure"
Jul 9 19:34:44.172: INFO: Pod "pod-projected-configmaps-cdf718ff-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 36.30329ms
Jul 9 19:34:46.213: INFO: Pod "pod-projected-configmaps-cdf718ff-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076851235s
Jul 9 19:34:48.259: INFO: Pod "pod-projected-configmaps-cdf718ff-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123117065s
Jul 9 19:34:50.295: INFO: Pod "pod-projected-configmaps-cdf718ff-83e9-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.15881733s
STEP: Saw pod success
Jul 9 19:34:50.295: INFO: Pod "pod-projected-configmaps-cdf718ff-83e9-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:34:50.326: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-cdf718ff-83e9-11e8-8fe2-28d244b00276 container projected-configmap-volume-test:
STEP: delete the pod
Jul 9 19:34:50.398: INFO: Waiting for pod pod-projected-configmaps-cdf718ff-83e9-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:34:50.428: INFO: Pod pod-projected-configmaps-cdf718ff-83e9-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:34:50.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4xbdz" for this suite.
Jul 9 19:34:56.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:34:59.166: INFO: namespace: e2e-tests-projected-4xbdz, resource: bindings, ignored listing per whitelist
Jul 9 19:35:00.487: INFO: namespace e2e-tests-projected-4xbdz deletion completed in 10.019999941s
• [SLOW TEST:18.877 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable from pods in volume with mappings and Item mode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:34:54.134: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:34:55.791: INFO: configPath is now "/tmp/e2e-test-router-metrics-g46l6-user.kubeconfig"
Jul 9 19:34:55.791: INFO: The user is now "e2e-test-router-metrics-g46l6-user"
Jul 9 19:34:55.791: INFO: Creating project "e2e-test-router-metrics-g46l6"
Jul 9 19:34:55.963: INFO: Waiting on permissions in project "e2e-test-router-metrics-g46l6" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:36
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:34:56.181: INFO: namespace : e2e-test-router-metrics-g46l6 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:02.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:76
S [SKIPPING] in Spec Setup (BeforeEach) [8.130 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:26
The HAProxy router [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:82
should expose a health check on the metrics port [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:83
no router installed on the cluster
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:39
------------------------------
[Conformance][templates] templateservicebroker bind test should pass bind tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:107
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateservicebroker bind test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:34:56.924: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateservicebroker bind test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:34:58.540: INFO: configPath is now "/tmp/e2e-test-templates-kklzm-user.kubeconfig"
Jul 9 19:34:58.540: INFO: The user is now "e2e-test-templates-kklzm-user"
Jul 9 19:34:58.540: INFO: Creating project "e2e-test-templates-kklzm"
Jul 9 19:34:58.714: INFO: Waiting on permissions in project "e2e-test-templates-kklzm" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:40
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:91
Jul 9 19:34:58.787: INFO: Dumping pod state for namespace openshift-template-service-broker
Jul 9 19:34:58.787: INFO: Running 'oc get --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=openshift-template-service-broker pods -o yaml'
Jul 9 19:34:59.078: INFO: apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
[AfterEach] [Conformance][templates] templateservicebroker bind test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:34:59.178: INFO: namespace : e2e-test-templates-kklzm api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][templates] templateservicebroker bind test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:35:05.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• Failure in Spec Setup (BeforeEach) [8.373 seconds]
[Conformance][templates] templateservicebroker bind test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:25
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:39
should pass bind tests [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:107

Expected error:
    <*errors.StatusError | 0xc420866b40>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "services \"apiserver\" not found",
            Reason: "NotFound",
            Details: {Name: "apiserver", Group: "", Kind: "services", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    services "apiserver" not found
not to have occurred
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:46
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:35:00.493: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:35:02.351: INFO: configPath is now "/tmp/e2e-test-router-stress-h9q77-user.kubeconfig"
Jul 9 19:35:02.351: INFO: The user is now "e2e-test-router-stress-h9q77-user"
Jul 9 19:35:02.351: INFO: Creating project "e2e-test-router-stress-h9q77"
Jul 9 19:35:02.592: INFO: Waiting on permissions in project "e2e-test-router-stress-h9q77" ...
[BeforeEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:52
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:40
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:35:02.729: INFO: namespace : e2e-test-router-stress-h9q77 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:08.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [8.318 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:30
The HAProxy router [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:86
converges when multiple routers are writing status [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:87
no router installed on the cluster
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/stress.go:57
------------------------------
[Conformance][templates] templateinstance object kinds test should create and delete objects from varying API groups [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_objectkinds.go:28
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateinstance object kinds test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:35:02.264: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateinstance object kinds test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:35:04.110: INFO: configPath is now "/tmp/e2e-test-templates-8gngx-user.kubeconfig"
Jul 9 19:35:04.110: INFO: The user is now "e2e-test-templates-8gngx-user"
Jul 9 19:35:04.110: INFO: Creating project "e2e-test-templates-8gngx"
Jul 9 19:35:04.290: INFO: Waiting on permissions in project "e2e-test-templates-8gngx" ...
[It] should create and delete objects from varying API groups [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_objectkinds.go:28
STEP: creating a template instance
Jul 9 19:35:04.330: INFO: Running 'oc create --config=/tmp/e2e-test-templates-8gngx-user.kubeconfig --namespace=e2e-test-templates-8gngx -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/templates/templateinstance_objectkinds.yaml'
secret "configsecret" created
templateinstance.template.openshift.io "templateinstance" created
STEP: deleting the template instance
[AfterEach] [Conformance][templates] templateinstance object kinds test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:35:07.272: INFO: namespace : e2e-test-templates-8gngx api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][templates] templateinstance object kinds test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:13.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:11.079 seconds]
[Conformance][templates] templateinstance object kinds test
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_objectkinds.go:20
should create and delete objects from varying API groups [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_objectkinds.go:28
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:35:05.298: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:35:06.739: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-container-probe-464vf
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a docker exec liveness probe with timeout [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:35:07.328: INFO: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API
[AfterEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:07.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-464vf" for this suite.
Jul 9 19:35:13.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:35:15.679: INFO: namespace: e2e-tests-container-probe-464vf, resource: bindings, ignored listing per whitelist
Jul 9 19:35:16.777: INFO: namespace e2e-tests-container-probe-464vf deletion completed in 9.407202671s
S [SKIPPING] [11.479 seconds]
[k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should be restarted with a docker exec liveness probe with timeout [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:35:07.328: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[sig-storage] Projected should provide container's cpu request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:35:08.812: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:35:10.534: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-8c52f
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide container's cpu request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:35:11.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de1ea6a4-83e9-11e8-8fe2-28d244b00276" in namespace "e2e-tests-projected-8c52f" to be "success or failure"
Jul 9 19:35:11.333: INFO: Pod "downwardapi-volume-de1ea6a4-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 34.034913ms
Jul 9 19:35:13.381: INFO: Pod "downwardapi-volume-de1ea6a4-83e9-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.081763231s
STEP: Saw pod success
Jul 9 19:35:13.381: INFO: Pod "downwardapi-volume-de1ea6a4-83e9-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:35:13.413: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-de1ea6a4-83e9-11e8-8fe2-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:35:13.493: INFO: Waiting for pod downwardapi-volume-de1ea6a4-83e9-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:35:13.524: INFO: Pod downwardapi-volume-de1ea6a4-83e9-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:13.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8c52f" for this suite.
Jul 9 19:35:19.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:35:22.294: INFO: namespace: e2e-tests-projected-8c52f, resource: bindings, ignored listing per whitelist
Jul 9 19:35:23.614: INFO: namespace e2e-tests-projected-8c52f deletion completed in 10.056025208s
• [SLOW TEST:14.803 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should provide container's cpu request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Projected should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:436
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:35:13.347: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:35:14.884: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-wqrsd
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:436
STEP: Creating configMap with name projected-configmap-test-volume-e0ae6537-83e9-11e8-8401-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:35:15.565: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e0b345ab-83e9-11e8-8401-28d244b00276" in namespace "e2e-tests-projected-wqrsd" to be "success or failure"
Jul 9 19:35:15.598: INFO: Pod "pod-projected-configmaps-e0b345ab-83e9-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 33.286327ms
Jul 9 19:35:17.680: INFO: Pod "pod-projected-configmaps-e0b345ab-83e9-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.11453399s
STEP: Saw pod success
Jul 9 19:35:17.680: INFO: Pod "pod-projected-configmaps-e0b345ab-83e9-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:35:17.709: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-projected-configmaps-e0b345ab-83e9-11e8-8401-28d244b00276 container projected-configmap-volume-test:
STEP: delete the pod
Jul 9 19:35:17.788: INFO: Waiting for pod pod-projected-configmaps-e0b345ab-83e9-11e8-8401-28d244b00276 to disappear
Jul 9 19:35:17.816: INFO: Pod pod-projected-configmaps-e0b345ab-83e9-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:17.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wqrsd" for this suite.
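The repeated "success or failure" waits in this log poll the pod phase on an interval until it reaches a terminal state or the 5m0s deadline expires. A rough sketch of that polling pattern in Python, assuming a caller-supplied `get_phase` callable (the real helper lives in the e2e framework's Go code, not shown here):

```python
import time


def wait_for_pod_success(get_phase, timeout=300.0, interval=2.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reports Succeeded or Failed,
    printing elapsed time the way the e2e framework logs it.
    Returns True on Succeeded, False on Failed; raises on timeout."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        if elapsed >= timeout:
            raise TimeoutError(f"pod did not reach a terminal phase in {timeout}s")
        sleep(interval)
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting; the framework's version additionally fetches the pod object on each iteration rather than a bare phase string.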
Jul 9 19:35:24.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:35:25.995: INFO: namespace: e2e-tests-projected-wqrsd, resource: bindings, ignored listing per whitelist
Jul 9 19:35:27.612: INFO: namespace e2e-tests-projected-wqrsd deletion completed in 9.759686171s
• [SLOW TEST:14.266 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:436
------------------------------
[k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:276
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:35:23.616: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:35:25.502: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-init-container-rd95f
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:40
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:276
STEP: creating the pod
Jul 9 19:35:26.217: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:32.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-rd95f" for this suite.
Jul 9 19:35:38.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:35:42.398: INFO: namespace: e2e-tests-init-container-rd95f, resource: bindings, ignored listing per whitelist
Jul 9 19:35:42.598: INFO: namespace e2e-tests-init-container-rd95f deletion completed in 9.908662462s
• [SLOW TEST:18.982 seconds]
[k8s.io] InitContainer
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:276
------------------------------
SSSS
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419 Jul 9 19:35:42.603: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:35:42.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:35:42.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] [Area:Networking] network isolation /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that isolates namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418 should allow communication from default to non-default namespace on a different node [Suite:openshift/conformance/parallel] [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:45 Jul 9 19:35:42.603: This plugin does not isolate namespaces by default. 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296 ------------------------------ S ------------------------------ [Feature:DeploymentConfig] deploymentconfigs won't deploy RC with unresolved images [Conformance] when patched with empty image [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1461 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:34:54.251: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:DeploymentConfig] deploymentconfigs /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:34:56.588: INFO: configPath is now "/tmp/e2e-test-cli-deployment-47jcm-user.kubeconfig" Jul 9 19:34:56.588: INFO: The user is now "e2e-test-cli-deployment-47jcm-user" Jul 9 19:34:56.588: INFO: Creating project "e2e-test-cli-deployment-47jcm" Jul 9 19:34:56.689: INFO: Waiting on permissions in project "e2e-test-cli-deployment-47jcm" ... 
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] when patched with empty image [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1461
STEP: creating DC
STEP: tagging the busybox:latest as test:v1 image to create ImageStream
Jul 9 19:34:56.800: INFO: Running 'oc tag --config=/tmp/e2e-test-cli-deployment-47jcm-user.kubeconfig --namespace=e2e-test-cli-deployment-47jcm docker.io/busybox:latest test:v1'
Jul 9 19:34:57.127: INFO: Tag test:v1 set to docker.io/busybox:latest.
STEP: waiting for deployment #1 to complete
STEP: setting DC image repeatedly to empty string to fight with image trigger
STEP: waiting to see if it won't deploy RC with invalid revision or the same one multiple times
[AfterEach] won't deploy RC with unresolved images [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1457
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:35:08.216: INFO: namespace : e2e-test-cli-deployment-47jcm api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:46.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:52.058 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
won't deploy RC with unresolved images [Conformance]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1454
when patched with empty image [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1461
------------------------------
[Feature:Builds][pruning] prune builds based on settings in the buildconfig should prune builds after a buildConfig change [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:243
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:35:27.614: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:35:29.249: INFO: configPath is now "/tmp/e2e-test-build-pruning-857tj-user.kubeconfig"
Jul 9 19:35:29.249: INFO: The user is now "e2e-test-build-pruning-857tj-user"
Jul 9 19:35:29.249: INFO: Creating project "e2e-test-build-pruning-857tj"
Jul 9 19:35:29.405: INFO: Waiting on permissions in project "e2e-test-build-pruning-857tj" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:37
Jul 9 19:35:29.524: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options: apparmor seccomp
 Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:41
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Jul 9 19:35:29.666: INFO: Running scan #0
Jul 9 19:35:29.666: INFO: Checking language ruby
Jul 9 19:35:29.712: INFO: Checking tag 2.0
Jul 9 19:35:29.712: INFO: Checking tag 2.2
Jul 9 19:35:29.712: INFO: Checking tag 2.3
Jul 9 19:35:29.712: INFO: Checking tag 2.4
Jul 9 19:35:29.712: INFO: Checking tag 2.5
Jul 9 19:35:29.712: INFO: Checking tag latest
Jul 9 19:35:29.712: INFO: Checking language nodejs
Jul 9 19:35:29.778: INFO: Checking tag latest
Jul 9 19:35:29.778: INFO: Checking tag 0.10
Jul 9 19:35:29.778: INFO: Checking tag 4
Jul 9 19:35:29.778: INFO: Checking tag 6
Jul 9 19:35:29.778: INFO: Checking tag 8
Jul 9 19:35:29.778: INFO: Checking language perl
Jul 9 19:35:29.830: INFO: Checking tag latest
Jul 9 19:35:29.830: INFO: Checking tag 5.16
Jul 9 19:35:29.830: INFO: Checking tag 5.20
Jul 9 19:35:29.830: INFO: Checking tag 5.24
Jul 9 19:35:29.830: INFO: Checking language php
Jul 9 19:35:29.874: INFO: Checking tag 5.5
Jul 9 19:35:29.874: INFO: Checking tag 5.6
Jul 9 19:35:29.874: INFO: Checking tag 7.0
Jul 9 19:35:29.874: INFO: Checking tag 7.1
Jul 9 19:35:29.874: INFO: Checking tag latest
Jul 9 19:35:29.874: INFO: Checking language python
Jul 9 19:35:29.932: INFO: Checking tag 2.7
Jul 9 19:35:29.932: INFO: Checking tag 3.3
Jul 9 19:35:29.932: INFO: Checking tag 3.4
Jul 9 19:35:29.932: INFO: Checking tag 3.5
Jul 9 19:35:29.932: INFO: Checking tag 3.6
Jul 9 19:35:29.932: INFO: Checking tag latest
Jul 9 19:35:29.932: INFO: Checking language wildfly
Jul 9 19:35:29.975: INFO: Checking tag 10.0
Jul 9 19:35:29.975: INFO: Checking tag 10.1
Jul 9 19:35:29.975: INFO: Checking tag 11.0
Jul 9 19:35:29.975: INFO: Checking tag 12.0
Jul 9 19:35:29.975: INFO: Checking tag 8.1
Jul 9 19:35:29.975: INFO: Checking tag 9.0
Jul 9 19:35:29.975: INFO: Checking tag latest
Jul 9 19:35:29.975: INFO: Checking language mysql
Jul 9 19:35:30.018: INFO: Checking tag 5.5
Jul 9 19:35:30.018: INFO: Checking tag 5.6
Jul 9 19:35:30.018: INFO: Checking tag 5.7
Jul 9 19:35:30.018: INFO: Checking tag latest
Jul 9 19:35:30.018: INFO: Checking language postgresql
Jul 9 19:35:30.071: INFO: Checking tag 9.2
Jul 9 19:35:30.071: INFO: Checking tag 9.4
Jul 9 19:35:30.071: INFO: Checking tag 9.5
Jul 9 19:35:30.071: INFO: Checking tag 9.6
Jul 9 19:35:30.071: INFO: Checking tag latest
Jul 9 19:35:30.071: INFO: Checking language mongodb
Jul 9 19:35:30.131: INFO: Checking tag latest
Jul 9 19:35:30.131: INFO: Checking tag 2.4
Jul 9 19:35:30.131: INFO: Checking tag 2.6
Jul 9 19:35:30.131: INFO: Checking tag 3.2
Jul 9 19:35:30.131: INFO: Checking tag 3.4
Jul 9 19:35:30.131: INFO: Checking language jenkins
Jul 9 19:35:30.228: INFO: Checking tag 1
Jul 9 19:35:30.228: INFO: Checking tag 2
Jul 9 19:35:30.228: INFO: Checking tag latest
Jul 9 19:35:30.228: INFO: Success!
STEP: creating test image stream
Jul 9 19:35:30.228: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-857tj-user.kubeconfig --namespace=e2e-test-build-pruning-857tj -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-pruning/imagestream.yaml'
imagestream.image.openshift.io "myphp" created
[It] should prune builds after a buildConfig change [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:243
STEP: creating test failed build config
Jul 9 19:35:30.561: INFO: Running 'oc create --config=/tmp/e2e-test-build-pruning-857tj-user.kubeconfig --namespace=e2e-test-build-pruning-857tj -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-pruning/failed-build-config.yaml'
buildconfig.build.openshift.io "myphp" created
STEP: patching the build config to leave 5 builds
Jul 9 19:35:30.910: INFO: Running 'oc patch --config=/tmp/e2e-test-build-pruning-857tj-user.kubeconfig --namespace=e2e-test-build-pruning-857tj bc/myphp -p {"spec":{"failedBuildsHistoryLimit": 5}}'
buildconfig.build.openshift.io "myphp" patched
STEP: starting and canceling three test builds
Jul 9 19:35:31.202: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-857tj-user.kubeconfig --namespace=e2e-test-build-pruning-857tj myphp'
Jul 9 19:35:31.587: INFO: start-build output with args [myphp]:
Error>
StdOut> build "myphp-1" started
StdErr>
Jul 9 19:35:31.587: INFO: Running 'oc cancel-build --config=/tmp/e2e-test-build-pruning-857tj-user.kubeconfig --namespace=e2e-test-build-pruning-857tj myphp-1'
build "myphp-1" cancelled
Jul 9 19:35:32.930: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-857tj-user.kubeconfig --namespace=e2e-test-build-pruning-857tj myphp'
Jul 9 19:35:33.280: INFO: start-build output with args [myphp]:
Error>
StdOut> build "myphp-2" started
StdErr>
Jul 9 19:35:33.280: INFO: Running 'oc cancel-build --config=/tmp/e2e-test-build-pruning-857tj-user.kubeconfig --namespace=e2e-test-build-pruning-857tj myphp-2'
build "myphp-2" cancelled
Jul 9 19:35:34.654: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-pruning-857tj-user.kubeconfig --namespace=e2e-test-build-pruning-857tj myphp'
Jul 9 19:35:34.973: INFO: start-build output with args [myphp]:
Error>
StdOut> build "myphp-3" started
StdErr>
Jul 9 19:35:34.973: INFO: Running 'oc cancel-build --config=/tmp/e2e-test-build-pruning-857tj-user.kubeconfig --namespace=e2e-test-build-pruning-857tj myphp-3'
build "myphp-3" cancelled
STEP: patching the build config to leave 1 build
Jul 9 19:35:36.391: INFO: Running 'oc patch --config=/tmp/e2e-test-build-pruning-857tj-user.kubeconfig --namespace=e2e-test-build-pruning-857tj bc/myphp -p {"spec":{"failedBuildsHistoryLimit": 1}}'
buildconfig.build.openshift.io "myphp" patched
STEP: waiting up to one minute for pruning to complete
1 builds exist, retrying...
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:56
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:35:37.926: INFO: namespace : e2e-test-build-pruning-857tj api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:59.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:32.391 seconds]
[Feature:Builds][pruning] prune builds based on settings in the buildconfig
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:21
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:35
should prune builds after a buildConfig change [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:243
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:35:46.310: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:35:48.448: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-vps79
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-test-volume-map-f4c67dc1-83e9-11e8-881a-28d244b00276
STEP: Creating a pod to test consume configMaps
Jul 9 19:35:49.300: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4ccd8cb-83e9-11e8-881a-28d244b00276" in namespace "e2e-tests-configmap-vps79" to be "success or failure"
Jul 9 19:35:49.341: INFO: Pod "pod-configmaps-f4ccd8cb-83e9-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 40.16465ms
Jul 9 19:35:51.383: INFO: Pod "pod-configmaps-f4ccd8cb-83e9-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.082277708s
STEP: Saw pod success
Jul 9 19:35:51.383: INFO: Pod "pod-configmaps-f4ccd8cb-83e9-11e8-881a-28d244b00276" satisfied condition "success or failure"
Jul 9 19:35:51.430: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-f4ccd8cb-83e9-11e8-881a-28d244b00276 container configmap-volume-test:
STEP: delete the pod
Jul 9 19:35:51.532: INFO: Waiting for pod pod-configmaps-f4ccd8cb-83e9-11e8-881a-28d244b00276 to disappear
Jul 9 19:35:51.583: INFO: Pod pod-configmaps-f4ccd8cb-83e9-11e8-881a-28d244b00276 no longer exists
[AfterEach] [sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:51.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vps79" for this suite.
Jul 9 19:35:57.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:36:02.276: INFO: namespace: e2e-tests-configmap-vps79, resource: bindings, ignored listing per whitelist
Jul 9 19:36:02.526: INFO: namespace e2e-tests-configmap-vps79 deletion completed in 10.898293413s
• [SLOW TEST:16.216 seconds]
[sig-storage] ConfigMap
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:31:50.355: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:31:52.244: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-container-probe-422gn
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-422gn
Jul 9 19:31:59.353: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-422gn
STEP: checking the pod's current state and verifying that restartCount is present
Jul 9 19:31:59.420: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:36:01.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-422gn" for this suite.
Jul 9 19:36:07.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:36:11.301: INFO: namespace: e2e-tests-container-probe-422gn, resource: bindings, ignored listing per whitelist
Jul 9 19:36:11.689: INFO: namespace e2e-tests-container-probe-422gn deletion completed in 10.635277568s
• [SLOW TEST:261.334 seconds]
[k8s.io] Probing container
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should *not* be restarted with a /healthz http liveness probe [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[k8s.io] Pods should contain environment variables for services [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:35:42.606: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:35:44.262: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-ngb4k
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127
[It] should contain environment variables for services [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:35:47.152: INFO: Waiting up to 5m0s for pod "client-envvars-f385d8a0-83e9-11e8-8fe2-28d244b00276" in namespace "e2e-tests-pods-ngb4k" to be "success or failure"
Jul 9 19:35:47.185: INFO: Pod "client-envvars-f385d8a0-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.883576ms
Jul 9 19:35:49.220: INFO: Pod "client-envvars-f385d8a0-83e9-11e8-8fe2-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068124556s
Jul 9 19:35:51.270: INFO: Pod "client-envvars-f385d8a0-83e9-11e8-8fe2-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117779155s
STEP: Saw pod success
Jul 9 19:35:51.270: INFO: Pod "client-envvars-f385d8a0-83e9-11e8-8fe2-28d244b00276" satisfied condition "success or failure"
Jul 9 19:35:51.301: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod client-envvars-f385d8a0-83e9-11e8-8fe2-28d244b00276 container env3cont:
STEP: delete the pod
Jul 9 19:35:51.381: INFO: Waiting for pod client-envvars-f385d8a0-83e9-11e8-8fe2-28d244b00276 to disappear
Jul 9 19:35:51.416: INFO: Pod client-envvars-f385d8a0-83e9-11e8-8fe2-28d244b00276 no longer exists
[AfterEach] [k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:35:51.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ngb4k" for this suite.
Jul 9 19:36:13.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:36:15.223: INFO: namespace: e2e-tests-pods-ngb4k, resource: bindings, ignored listing per whitelist
Jul 9 19:36:17.594: INFO: namespace e2e-tests-pods-ngb4k deletion completed in 26.135196978s
• [SLOW TEST:34.989 seconds]
[k8s.io] Pods
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should contain environment variables for services [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:419
Jul 9 19:36:17.596: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:36:17.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:36:17.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[Area:Networking] network isolation
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10
when using a plugin that isolates namespaces by default
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:418
should prevent communication between pods in different namespaces on the same node [Suite:openshift/conformance/parallel] [BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:28
Jul 9 19:36:17.596: This plugin does not isolate namespaces by default.
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
[k8s.io] Sysctls should support unsafe sysctls which are actually whitelisted [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:101
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:36:02.528: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:36:04.808: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-sysctl-ppq4n
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:56
[It] should support unsafe sysctls which are actually whitelisted [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:101
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-sysctl-ppq4n".
STEP: Found 5 events.
Jul 9 19:36:09.885: INFO: At 2018-07-09 19:36:05 -0700 PDT - event for sysctl-fe8dca87-83e9-11e8-881a-28d244b00276: {default-scheduler } Scheduled: Successfully assigned e2e-tests-sysctl-ppq4n/sysctl-fe8dca87-83e9-11e8-881a-28d244b00276 to ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:36:09.885: INFO: At 2018-07-09 19:36:06 -0700 PDT - event for sysctl-fe8dca87-83e9-11e8-881a-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulling: pulling image "busybox"
Jul 9 19:36:09.885: INFO: At 2018-07-09 19:36:07 -0700 PDT - event for sysctl-fe8dca87-83e9-11e8-881a-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Pulled: Successfully pulled image "busybox"
Jul 9 19:36:09.885: INFO: At 2018-07-09 19:36:07 -0700 PDT - event for sysctl-fe8dca87-83e9-11e8-881a-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Created: Created container
Jul 9 19:36:09.885: INFO: At 2018-07-09 19:36:07 -0700 PDT - event for sysctl-fe8dca87-83e9-11e8-881a-28d244b00276: {kubelet ip-10-0-130-54.us-west-2.compute.internal} Started: Started container
Jul 9 19:36:10.053: INFO: POD NODE PHASE GRACE CONDITIONS
Jul 9 19:36:10.053: INFO: registry-6559c8c4db-45526 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }]
Jul 9 19:36:10.053: INFO: test-1-build ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09
19:36:07 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:36:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:36:04 -0700 PDT }] Jul 9 19:36:10.053: INFO: pod-projected-secrets-e2d02d65-83e9-11e8-bd2e-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:35:19 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:35:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:35:19 -0700 PDT }] Jul 9 19:36:10.053: INFO: sysctl-fe8dca87-83e9-11e8-881a-28d244b00276 ip-10-0-130-54.us-west-2.compute.internal Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:36:05 -0700 PDT PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:36:05 -0700 PDT PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 19:36:05 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-apiserver-cn2ps ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:45 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-controller-manager-558dc6fb98-q6vr5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:34 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-core-operator-75d546fbbb-c7ctx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:20 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:11 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-dns-787c975867-txmxv ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:22 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-flannel-bgv4g ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:59 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-flannel-m5wph ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:58 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:39 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-flannel-xcck7 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:17 -0700 PDT 
} {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-proxy-5td7p ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:54 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-proxy-l2cnn ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:22 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-proxy-zsgcb ip-10-0-141-201.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:53 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-scheduler-68f8875b5c-s5tdr ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:36:10.053: INFO: metrics-server-5767bfc576-gfbwb ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: openshift-apiserver-rkms5 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:19 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: openshift-controller-manager-99d6586b-qq685 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:17:55 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:36:10.053: INFO: pod-checkpointer-4882g ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:03 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }] Jul 9 19:36:10.053: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 
13:15:08 -0700 PDT }] Jul 9 19:36:10.053: INFO: prometheus-0 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:40 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:50:04 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-network-operator-jwwmp ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:15:13 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:14:24 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-node-controller-2ctqd ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:08 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:05 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:14 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:16:08 -0700 PDT }] Jul 9 19:36:10.053: INFO: webconsole-6698d4fbbc-rgsw2 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:44 -0700 PDT } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: default-http-backend-6985d557bb-8h44n ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:38 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: router-6796c95fdf-2k4wk ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:37 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:46 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }] Jul 9 19:36:10.053: INFO: directory-sync-d84d84d9f-j7pr6 ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:34:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:33:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: kube-addon-operator-675f99d7f8-c6pdt 
ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:29 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-alm-operator-79b6996f74-prs9h ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:35 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-channel-operator-5d878cd785-l66n4 ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:12 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-clu-6b8d87785f-fswbx ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:11 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:06 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-node-agent-r77mj ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:37:33 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 
00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:20 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-node-agent-rrwlg ip-10-0-130-54.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 18:12:57 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:32:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-stats-emitter-d87f669fd-988nl ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:29 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:36 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:19:23 -0700 PDT }] Jul 9 19:36:10.053: INFO: tectonic-utility-operator-786b69fc8b-4xffz ip-10-0-35-213.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:41 -0700 PDT } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-09 13:18:13 -0700 PDT }] Jul 9 19:36:10.053: INFO: Jul 9 19:36:10.099: INFO: Logging node info for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:36:10.143: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-130-54.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-130-54.us-west-2.compute.internal,UID:2f71bed0-83b7-11e8-84c6-0af96768d57e,ResourceVersion:93064,Generation:0,CreationTimestamp:2018-07-09 13:32:23 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: 
amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-130-54,node-role.kubernetes.io/worker: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:08:91:8f:b9:a5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.130.54,node-configuration.v1.coreos.com/currentConfig: worker-2650561509,node-configuration.v1.coreos.com/desiredConfig: worker-2650561509,node-configuration.v1.coreos.com/targetConfig: worker-2650561509,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.2.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-0cb9cec2620663d39,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365150208 0} {} 8169092Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260292608 0} {} 8066692Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:36:10 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:36:10 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:36:10 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:36:10 -0700 PDT 2018-07-09 13:32:23 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} 
{Ready True 2018-07-09 19:36:10 -0700 PDT 2018-07-09 13:33:23 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.130.54} {InternalDNS ip-10-0-130-54.us-west-2.compute.internal} {Hostname ip-10-0-130-54}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC283016-6CE7-ACE7-0F9A-02CE10505945,BootID:cfad64a2-03d7-403a-bd51-76866880a650,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[openshift/origin-haproxy-router@sha256:f0a71ada9e9ee48529540c2d4938b9caa55f9a0ac8a3be598e269ca5cebf70c0 openshift/origin-haproxy-router:v3.10.0-alpha.0] 1284960820} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[docker-registry.default.svc:5000/e2e-test-templates-tjfzt/cakephp-mysql-example@sha256:0cacfdfd78bae6f7c7b9cb4e2477d9925f2afd71b1011ee19aa31860d8ea3b1b docker-registry.default.svc:5000/e2e-test-templates-tjfzt/cakephp-mysql-example:latest] 626474696} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-frlkw/test@sha256:ee11e7c7dbb2d609aaa42c8806ef1bf5663df95dd925e6ab424b4439dbaf75fd docker-registry.default.svc:5000/e2e-test-build-valuefrom-frlkw/test:latest] 613134548} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test@sha256:6daa01a6f7f0784905bf9dcbce49826d73d7c3c1d62a802f875ee7c10db02960 docker-registry.default.svc:5000/e2e-test-build-valuefrom-lxhjr/test:latest] 613134454} {[docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test@sha256:92c5e723d97318711a71afb9ee5c12c3c48b98d0f2aaa5e954095fabbcb505ee 
docker-registry.default.svc:5000/e2e-test-build-valuefrom-vmtzs/test:latest] 613133841} {[docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example@sha256:02d80c750d1e71afc7792f55f935c3dd6cde1788bee2b53ab554d29c903ca064 docker-registry.default.svc:5000/e2e-test-templates-l8j5q/cakephp-mysql-example:latest] 603384691} {[docker-registry.default.svc:5000/openshift/php@sha256:25008f054dbb8eb5d831870b9a1e3e22e47c6b6def14f3f743b997b8f0cd4d52] 590344505} {[docker-registry.default.svc:5000/openshift/php@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7@sha256:164fa6fc06cdd98c98be09ecff5398feef3d479652899f983cf41e644d604ff0 centos/php-70-centos7:latest] 589408618} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass@sha256:880359284c1e0933fe5f2db29b8c4d948b70da3dfb26a0462f68b23397740b0a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejspass:latest] 568094192} {[docker-registry.default.svc:5000/openshift/php@sha256:59c3d53372cd7097494187f5a58bab58a1d956a340b70a23c84a0d000a565cbe] 567254500} {[docker-registry.default.svc:5000/e2e-test-build-timing-vrrgw/test@sha256:ba5ac890dc23c5a371318b84c1d65438289d0da6142ba173c5190e494723f103 docker-registry.default.svc:5000/e2e-test-build-timing-vrrgw/test:latest] 566117187} {[docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test@sha256:539e80a4de02794f6126cffce75562bcb721041c6d443c5ced15ba286d70e229 docker-registry.default.svc:5000/e2e-test-build-timing-bl897/test:latest] 566117187} {[docker-registry.default.svc:5000/openshift/ruby@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7@sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae centos/ruby-22-centos7:latest] 566117040} {[docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test@sha256:e0eeef684e9de55219871fa9e360d73a1163cfc407c626eade862cbee5a9bbc5 
docker-registry.default.svc:5000/e2e-test-build-timing-wkh4w/test:latest] 566117040} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-8sn6t/nodejsroot@sha256:3ccbe261a94f372092d78d727ca36125c8fb0964d64c475c351056098b49e749 docker-registry.default.svc:5000/e2e-test-s2i-build-root-8sn6t/nodejsroot:latest] 560696751} {[centos/nodejs-6-centos7@sha256:b2867b5008d9e975b3d4710ec0f31cdc96b079b83334b17e03a60602a7a590fc] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot@sha256:4084131a9910c10780186608faf5a9643de0f18d09c27fe828499a8d180abfba docker-registry.default.svc:5000/e2e-test-s2i-build-root-phq84/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot@sha256:0397f7e12d87d62c539356a4936348d0a8deb40e1b5e970cdd1744d3e6ffa05a docker-registry.default.svc:5000/e2e-test-s2i-build-root-4l2gq/nodejsroot:latest] 560696751} {[docker-registry.default.svc:5000/openshift/ruby@sha256:2e83b9e07e85960060096b6aff7ee202a5f52e0e18447641b080b1f3879e0901] 536571487} {[docker-registry.default.svc:5000/openshift/ruby@sha256:8f00b7a5789887b72db0415355830c87e18804b774a922a424736f5237a44933] 518934530} {[docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678@sha256:a9ecb5931f283c598dcaf3aca9025599eb71115bd0f2cd0f1989a9f37394efad docker-registry.default.svc:5000/e2e-test-new-app-xn8nh/a234567890123456789012345678901234567890123456789012345678:latest] 511744495} {[docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample@sha256:95a78c60dc1709c2212cd8cc48cd3fffe6cdcdd847674497d9aa5d7891551699 docker-registry.default.svc:5000/e2e-test-prometheus-4fqst/origin-nodejs-sample:latest] 511744370} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:3bb2aed7578ab5b6ba2bf22993df3c73ef91bdb02e273cc0ce8e529de7ee5660] 506453985} 
{[docker-registry.default.svc:5000/openshift/ruby@sha256:0eaaed9fae1b0d9bc8ed73b93d581c6ab019a92277484c9acf52fa60b3269a7c] 504578679} {[docker-registry.default.svc:5000/openshift/nodejs@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653 centos/nodejs-8-centos7@sha256:3f262ed6ed7ec7497576e1117ddfafacaa23fecf34693e05a0abcb652b93e653] 504452018} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:896482969cd659b419bc444c153a74d11820655c7ed19b5094b8eb041f0065d6] 487132847} {[docker-registry.default.svc:5000/openshift/mongodb@sha256:91955c14f978a0f48918eecc8b3772faf1615e943daccf9bb051a51cba30422f] 465041680} {[docker-registry.default.svc:5000/openshift/mysql@sha256:a1eea9711a098b649566976647025ad32c728b00aada4d19cf274602e652db10] 448191973} {[openshift/origin-docker-builder@sha256:4fe8032f87d2f8485a711ec60a9ffb330e42a6cd8d232ad3cf63c42471cfab29 openshift/origin-docker-builder:latest] 447580928} {[docker-registry.default.svc:5000/openshift/mysql@sha256:d03537ef57d51b13e6ad4a73a382ca180a0e02d975c8237790410f45865aae3c] 429435940} {[openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 openshift/origin-haproxy-router:latest] 394965919} {[openshift/origin-deployer@sha256:1295e5be56fc03d4c482194378a882f2e96a8d23eadaf6dd32d603d3e877df99 openshift/origin-deployer:latest] 371674595} {[openshift/origin-web-console@sha256:d2cbbb533d26996226add8cb327cb2060e7a03c6aa96ad94cd236d4064c094ce openshift/origin-web-console:latest] 336636057} {[openshift/prometheus@sha256:35e2e0efc874c055be60a025874256816c98b9cebc10f259d7fb806bbe68badf openshift/prometheus:v2.2.1] 317896379} {[openshift/origin-docker-registry@sha256:c40ebb707721327c3b9c79f0e8e7f02483f034355d4149479333cc134b72967c openshift/origin-docker-registry:latest] 302637209} {[openshift/origin-pod@sha256:8fbd41f21824f5981716568790c5f78a4710bb0709ce9c473eb21ad2fbc5e877 openshift/origin-pod:latest] 251747200} 
{[openshift/origin-base@sha256:43dd97db435025eee02606658cfcccbc0a8ac4135e0d8870e91930d6cab8d1fd openshift/origin-base:latest] 228695137} {[openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a openshift/oauth-proxy:v1.0.0] 228241928} {[openshift/prometheus-alertmanager@sha256:35443abf6c5cf99b080307fe0f98098334f299780537a3e61ac5604cbfe48f7e openshift/prometheus-alertmanager:v0.14.0] 221857684} {[openshift/prometheus-alert-buffer@sha256:076f8dd576806f5c2dde7e536d020c31aa7d2ec7dcea52da6cbb944895def7ba openshift/prometheus-alert-buffer:v0.0.2] 200521084} {[docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage@sha256:df3e69e3fe1bc86897717b020b6caa000f1f97c14dc0b3853ca0d7149412da54 docker-registry.default.svc:5000/e2e-test-build-multistage-9cdjf/multi-stage:v1] 199835207} {[docker-registry.default.svc:5000/e2e-test-build-multistage-6nv4h/multi-stage@sha256:fae21b55071abd175d4207707eccd5b5aedf3e20e34714cba2ccfacfd394587a docker-registry.default.svc:5000/e2e-test-build-multistage-6nv4h/multi-stage:v1] 199835207} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1@sha256:3967cd8851952bbba0b3a4d9c038f36dc5001463c8521d6955ab0f3f4598d779 docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-s4b2n/image1:latest] 199678471} {[centos@sha256:b67d21dfe609ddacf404589e04631d90a342921e81c40aeaf3391f6717fa5322 centos@sha256:eed5b251b615d1e70b10bcec578d64e8aa839d2785c2ffd5424e472818c42755 centos:7 centos:centos7] 199678471} {[docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-r2mt4/image1@sha256:16dcb9524a1c672adffa862499aabbb0d97d5c996120b2934c1ab382355ec4ea docker-registry.default.svc:5000/e2e-test-docker-build-pullsecret-r2mt4/image1:latest] 199678471} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} 
{[gcr.io/google_containers/metrics-server-amd64@sha256:54d2cf293e01f72d9be0e7c4f2c98e31f599088a9426a6415fe62426d446f5b2 gcr.io/google_containers/metrics-server-amd64:v0.2.0] 96501893} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831}],VolumesInUse:[],VolumesAttached:[],},} Jul 9 19:36:10.143: INFO: Logging kubelet events for node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:36:10.183: INFO: Logging pods the kubelet thinks is on node ip-10-0-130-54.us-west-2.compute.internal Jul 9 19:36:10.288: INFO: default-http-backend-6985d557bb-8h44n started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:36:10.288: INFO: Container default-http-backend ready: true, restart count 0 Jul 9 19:36:10.288: INFO: router-6796c95fdf-2k4wk started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:36:10.288: INFO: Container router ready: true, restart count 0 Jul 9 19:36:10.288: INFO: pod-projected-secrets-e2d02d65-83e9-11e8-bd2e-28d244b00276 started at 2018-07-09 19:35:19 -0700 PDT (0+3 container statuses recorded) Jul 9 19:36:10.288: INFO: Container creates-volume-test ready: true, restart count 0 Jul 9 19:36:10.288: INFO: Container dels-volume-test ready: true, restart count 0 Jul 9 19:36:10.288: INFO: Container upds-volume-test ready: true, restart count 0 Jul 9 19:36:10.288: INFO: kube-proxy-5td7p started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:36:10.288: INFO: Container kube-proxy ready: true, restart count 0 Jul 9 19:36:10.288: INFO: registry-6559c8c4db-45526 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded) Jul 9 19:36:10.288: INFO: Container registry ready: true, restart count 0 Jul 9 19:36:10.288: INFO: test-1-build started at 2018-07-09 19:36:04 -0700 PDT (2+1 container statuses recorded) Jul 9 19:36:10.288: INFO: Init container git-clone ready: true, restart count 0 
Jul 9 19:36:10.288: INFO: Init container manage-dockerfile ready: true, restart count 0
Jul 9 19:36:10.288: INFO: Container docker-build ready: true, restart count 0
Jul 9 19:36:10.288: INFO: prometheus-0 started at 2018-07-09 13:50:04 -0700 PDT (0+6 container statuses recorded)
Jul 9 19:36:10.288: INFO: Container alert-buffer ready: true, restart count 0
Jul 9 19:36:10.288: INFO: Container alertmanager ready: true, restart count 0
Jul 9 19:36:10.288: INFO: Container alertmanager-proxy ready: true, restart count 0
Jul 9 19:36:10.288: INFO: Container alerts-proxy ready: true, restart count 0
Jul 9 19:36:10.288: INFO: Container prom-proxy ready: true, restart count 0
Jul 9 19:36:10.288: INFO: Container prometheus ready: true, restart count 0
Jul 9 19:36:10.288: INFO: kube-flannel-xcck7 started at 2018-07-09 13:32:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:36:10.288: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:36:10.288: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:36:10.288: INFO: metrics-server-5767bfc576-gfbwb started at 2018-07-09 13:33:23 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:36:10.288: INFO: Container metrics-server ready: true, restart count 0
Jul 9 19:36:10.288: INFO: Container metrics-server-nanny ready: true, restart count 0
Jul 9 19:36:10.288: INFO: tectonic-node-agent-rrwlg started at 2018-07-09 13:32:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:10.288: INFO: Container node-agent ready: true, restart count 3
Jul 9 19:36:10.288: INFO: directory-sync-d84d84d9f-j7pr6 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:10.288: INFO: Container directory-sync ready: true, restart count 0
Jul 9 19:36:10.288: INFO: webconsole-6698d4fbbc-rgsw2 started at 2018-07-09 13:33:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:10.288: INFO: Container webconsole ready: true, restart count 0
Jul 9 19:36:10.288: INFO: sysctl-fe8dca87-83e9-11e8-881a-28d244b00276 started at 2018-07-09 19:36:05 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:10.288: INFO: Container test-container ready: false, restart count 0
W0709 19:36:10.333364 11716 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:36:10.503: INFO: Latency metrics for node ip-10-0-130-54.us-west-2.compute.internal
Jul 9 19:36:10.503: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.109061s}
Jul 9 19:36:10.503: INFO: Logging node info for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:36:10.584: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-141-201.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-141-201.us-west-2.compute.internal,UID:ab76db34-83b4-11e8-8888-0af96768d57e,ResourceVersion:92950,Generation:0,CreationTimestamp:2018-07-09 13:14:22 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2b,kubernetes.io/hostname: ip-10-0-141-201,node-role.kubernetes.io/etcd: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b6:11:a8:d0:6d:85"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.141.201,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.1.0/24,ExternalID:,ProviderID:aws:///us-west-2b/i-03457d640f9c71dd1,Unschedulable:false,Taints:[{node-role.kubernetes.io/etcd NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:36:02 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:36:02 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:36:02 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:36:02 -0700 PDT 2018-07-09 13:14:22 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:36:02 -0700 PDT 2018-07-09 13:16:04 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.141.201} {InternalDNS ip-10-0-141-201.us-west-2.compute.internal} {Hostname ip-10-0-141-201}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2F6BCA-4D59-F6AA-8C7B-027F94D52D78,BootID:92773d40-1311-4ad5-b294-38db65faf16c,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/kube-client-agent@sha256:8564ab65bcb1064006d2fc9c6e32a5ca3f4326cdd2da9a2efc4fb7cc0e0b6041 quay.io/coreos/kube-client-agent:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 33236131} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:36:10.584: INFO: Logging kubelet events for node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:36:10.634: INFO: Logging pods the kubelet thinks is on node ip-10-0-141-201.us-west-2.compute.internal
Jul 9 19:36:40.697: INFO: Unable to retrieve kubelet pods for node ip-10-0-141-201.us-west-2.compute.internal: the server is currently unable to handle the request (get nodes ip-10-0-141-201.us-west-2.compute.internal:10250)
Jul 9 19:36:40.697: INFO: Logging node info for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:36:40.760: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-10-0-35-213.us-west-2.compute.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-10-0-35-213.us-west-2.compute.internal,UID:a83cf873-83b4-11e8-8888-0af96768d57e,ResourceVersion:93261,Generation:0,CreationTimestamp:2018-07-09 13:14:17 -0700 PDT,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: m4.large,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-west-2,failure-domain.beta.kubernetes.io/zone: us-west-2c,kubernetes.io/hostname: ip-10-0-35-213,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:08:be:54:0d:9f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 10.0.35.213,node-configuration.v1.coreos.com/currentConfig: master-2063737633,node-configuration.v1.coreos.com/desiredConfig: master-2063737633,node-configuration.v1.coreos.com/targetConfig: master-2063737633,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.2.0.0/24,ExternalID:,ProviderID:aws:///us-west-2c/i-0e1d36783c9705b28,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8365146112 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8260288512 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-07-09 19:36:32 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-07-09 19:36:32 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-07-09 19:36:32 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-07-09 19:36:32 -0700 PDT 2018-07-09 13:14:17 -0700 PDT KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-07-09 19:36:32 -0700 PDT 2018-07-09 13:16:08 -0700 PDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.0.35.213} {ExternalIP 34.220.249.237} {InternalDNS ip-10-0-35-213.us-west-2.compute.internal} {ExternalDNS ec2-34-220-249-237.us-west-2.compute.amazonaws.com} {Hostname ip-10-0-35-213}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2ED297-E036-AA0D-C4ED-9057B3EA9001,BootID:7f784e0b-09a6-495a-b787-3d8619214f8a,KernelVersion:4.14.48-coreos-r2,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://18.3.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-node@sha256:e6d4cf7a40809bd51834316e10248ea33bb523573c25a5740e1ff34388f29f70 openshift/origin-node:latest] 1302551512} {[quay.io/coreos/hyperkube@sha256:b5eb7d69ca52899d0cbc395a68129b6ca35d1f632a189d7869b25968e2065a19 quay.io/coreos/hyperkube:v1.9.3_coreos.0] 652363757} {[openshift/origin-hypershift@sha256:3b26011ae771a6036a7533d970052be5c04bc1f6e6812314ffefd902f40910fd openshift/origin-hypershift:latest] 518022163} {[openshift/origin-hyperkube@sha256:11a08060b48d226d64d4bb5234f2386bf22472a0835c5b91f0fb0db25b0a7e19 openshift/origin-hyperkube:latest] 498702039} {[quay.io/coreos/awscli@sha256:1d6ea2f37c248a4f4f2a70126f0b8555fd0804d4e65af3b30c3a949247ea13a6 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600] 97521631} {[quay.io/coreos/bootkube@sha256:63afddd30deedff273d65607f4fcf0b331f4418838a00c69b6ab7a5754a24f5a quay.io/coreos/bootkube:v0.10.0] 84921995} {[quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8] 50456751} {[quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170 quay.io/coreos/flannel-cni:v0.3.0] 49786179} {[quay.io/coreos/tectonic-stats@sha256:e800fe60dd1a0f89f8ae85caae9209201254e17d889d664d633ed08e274e2a39 quay.io/coreos/tectonic-stats:6e882361357fe4b773adbf279cddf48cb50164c1] 48779830} {[quay.io/coreos/pod-checkpointer@sha256:1e1e48228f872d56c8a57a5e12adb5239ae9e6206536baf2904e4bf03314c8e8 quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558] 47992230} {[quay.io/coreos/tectonic-network-operator-dev@sha256:e29d797f5740cf6f5c0ccc0de2b3e606d187acbdc0bb79a4397c058d8840c8fe quay.io/coreos/tectonic-network-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44068170} {[quay.io/coreos/tectonic-node-controller-operator-dev@sha256:7a31568c6c2e398cffa7e8387cf51543e3bf1f01b4a050a5d00a9b593c3dace0 quay.io/coreos/tectonic-node-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44053165} {[quay.io/coreos/kube-addon-operator-dev@sha256:e327727a93813c31f6d65f76f2998722754b8ccb5110949153e55f2adbc2374e quay.io/coreos/kube-addon-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 44052211} {[quay.io/coreos/tectonic-utility-operator-dev@sha256:4fb4de52c7aa64ce124e1bf73fb27989356c414101ecc19ca4ec9ab80e00a88d quay.io/coreos/tectonic-utility-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43818409} {[quay.io/coreos/tectonic-ingress-controller-operator-dev@sha256:5e96253c8fe8357473d4806b116fcf03fe18dcad466a88083f9b9310045821f1 quay.io/coreos/tectonic-ingress-controller-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 43808038} {[quay.io/coreos/tectonic-alm-operator@sha256:ce32e6d4745040be8807d09eb925b2b076b60fb0a93e33302b74a5cc8f294ca5 quay.io/coreos/tectonic-alm-operator:v0.3.1] 43202998} {[gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8] 42210862} {[gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8] 40951779} {[quay.io/coreos/kube-core-operator-dev@sha256:6cc0dd2405f19014b41a0eed57c39160aeb92c2380ac8f8a067ce7dee476cba2 quay.io/coreos/kube-core-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40849618} {[quay.io/coreos/tectonic-channel-operator-dev@sha256:6eeb84c385333755a2189c199587bc26db6c5d897e1962d7e1047dec2531e85e quay.io/coreos/tectonic-channel-operator-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 40523592} {[quay.io/coreos/etcd@sha256:688e6c102955fe927c34db97e6352d0e0962554735b2db5f2f66f3f94cfe8fd1 quay.io/coreos/etcd:v3.2.14] 37228348} {[quay.io/coreos/tectonic-node-agent@sha256:1eb929223a1ecc12b246cbe4ba366092b139067aadb809fac8189b8ca274fab4 quay.io/coreos/tectonic-node-agent:92fb13f8b2a46702b8c10f278408f37d825ee2cb] 36791814} {[quay.io/coreos/kube-core-renderer-dev@sha256:a595dfe57b7992971563fcea8ac1858c306529a465f9b690911f4220d93d3c5c quay.io/coreos/kube-core-renderer-dev:c3cee2bc5673011e88ac7b0ab1659c2c7243a499] 36535818} {[quay.io/coreos/kube-etcd-signer-server@sha256:c4c0becf6779523af5b644b53375d61bed9c4688d496cb2f88d4f08024ac5390 quay.io/coreos/kube-etcd-signer-server:678cc8e6841e2121ebfdb6e2db568fce290b67d6] 34655544} {[quay.io/coreos/tectonic-node-controller-dev@sha256:c9c17f7c4c738e519e36224ae8c71d3a881b92ffb86fdb75f358efebafa27d84 quay.io/coreos/tectonic-node-controller-dev:a437848532713f2fa4137e9a0f4f6a689cf554a8] 25570332} {[quay.io/coreos/tectonic-clu@sha256:4e6a907a433e741632c8f9a7d9d9009bc08ac494dce05e0a19f8fa0a440a3926 quay.io/coreos/tectonic-clu:v0.0.1] 5081911} {[quay.io/coreos/tectonic-stats-extender@sha256:6e7fe41ca2d63791c08d2cc4b4311d9e01b37fa3dc116d3e77e7306cbe29a0f1 quay.io/coreos/tectonic-stats-extender:487b3da4e175da96dabfb44fba65cdb8b823db2e] 2818916} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],},}
Jul 9 19:36:40.761: INFO: Logging kubelet events for node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:36:40.807: INFO: Logging pods the kubelet thinks is on node ip-10-0-35-213.us-west-2.compute.internal
Jul 9 19:36:40.955: INFO: tectonic-channel-operator-5d878cd785-l66n4 started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container tectonic-channel-operator ready: true, restart count 0
Jul 9 19:36:40.955: INFO: kube-proxy-l2cnn started at 2018-07-09 13:14:22 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container kube-proxy ready: true, restart count 0
Jul 9 19:36:40.955: INFO: openshift-apiserver-rkms5 started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container openshift-apiserver ready: true, restart count 0
Jul 9 19:36:40.955: INFO: tectonic-network-operator-jwwmp started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container tectonic-network-operator ready: true, restart count 0
Jul 9 19:36:40.955: INFO: kube-dns-787c975867-txmxv started at 2018-07-09 13:16:08 -0700 PDT (0+3 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container dnsmasq ready: true, restart count 0
Jul 9 19:36:40.955: INFO: Container kubedns ready: true, restart count 0
Jul 9 19:36:40.955: INFO: Container sidecar ready: true, restart count 0
Jul 9 19:36:40.955: INFO: kube-scheduler-68f8875b5c-s5tdr started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container kube-scheduler ready: true, restart count 0
Jul 9 19:36:40.955: INFO: tectonic-clu-6b8d87785f-fswbx started at 2018-07-09 13:19:06 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container tectonic-clu ready: true, restart count 0
Jul 9 19:36:40.955: INFO: tectonic-stats-emitter-d87f669fd-988nl started at 2018-07-09 13:19:23 -0700 PDT (1+2 container statuses recorded)
Jul 9 19:36:40.955: INFO: Init container tectonic-stats-extender-init ready: true, restart count 0
Jul 9 19:36:40.955: INFO: Container tectonic-stats-emitter ready: true, restart count 0
Jul 9 19:36:40.955: INFO: Container tectonic-stats-extender ready: true, restart count 0
Jul 9 19:36:40.955: INFO: kube-apiserver-cn2ps started at 2018-07-09 13:14:23 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container kube-apiserver ready: true, restart count 4
Jul 9 19:36:40.955: INFO: tectonic-node-controller-2ctqd started at 2018-07-09 13:18:05 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container tectonic-node-controller ready: true, restart count 0
Jul 9 19:36:40.955: INFO: tectonic-alm-operator-79b6996f74-prs9h started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container tectonic-alm-operator ready: true, restart count 0
Jul 9 19:36:40.955: INFO: tectonic-ingress-controller-operator-78c4d6b9fc-bknh9 started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container tectonic-ingress-controller-operator ready: true, restart count 0
Jul 9 19:36:40.955: INFO: tectonic-node-agent-r77mj started at 2018-07-09 13:19:20 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container node-agent ready: true, restart count 4
Jul 9 19:36:40.955: INFO: pod-checkpointer-4882g started at 2018-07-09 13:14:24 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container pod-checkpointer ready: true, restart count 0
Jul 9 19:36:40.955: INFO: kube-flannel-m5wph started at 2018-07-09 13:15:39 -0700 PDT (0+2 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container install-cni ready: true, restart count 0
Jul 9 19:36:40.955: INFO: Container kube-flannel ready: true, restart count 0
Jul 9 19:36:40.955: INFO: openshift-controller-manager-99d6586b-qq685 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container openshift-controller-manager ready: true, restart count 3
Jul 9 19:36:40.955: INFO: tectonic-node-controller-operator-648b9f5d6d-nxvlb started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container tectonic-node-controller-operator ready: true, restart count 0
Jul 9 19:36:40.955: INFO: kube-core-operator-75d546fbbb-c7ctx started at 2018-07-09 13:18:11 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container kube-core-operator ready: true, restart count 0
Jul 9 19:36:40.955: INFO: tectonic-utility-operator-786b69fc8b-4xffz started at 2018-07-09 13:18:13 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container tectonic-utility-operator ready: true, restart count 0
Jul 9 19:36:40.955: INFO: kube-addon-operator-675f99d7f8-c6pdt started at 2018-07-09 13:18:12 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container kube-addon-operator ready: true, restart count 0
Jul 9 19:36:40.955: INFO: pod-checkpointer-4882g-ip-10-0-35-213.us-west-2.compute.internal started at (0+0 container statuses recorded)
Jul 9 19:36:40.955: INFO: kube-controller-manager-558dc6fb98-q6vr5 started at 2018-07-09 13:16:08 -0700 PDT (0+1 container statuses recorded)
Jul 9 19:36:40.955: INFO: Container kube-controller-manager ready: true, restart count 1
W0709 19:36:41.006988 11716 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 9 19:36:41.201: INFO: Latency metrics for node ip-10-0-35-213.us-west-2.compute.internal
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:36:41.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sysctl-ppq4n" for this suite.
Jul 9 19:36:47.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:36:50.679: INFO: namespace: e2e-tests-sysctl-ppq4n, resource: bindings, ignored listing per whitelist
Jul 9 19:36:52.205: INFO: namespace e2e-tests-sysctl-ppq4n deletion completed in 10.874488914s
• Failure [49.677 seconds]
[k8s.io] Sysctls
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
should support unsafe sysctls which are actually whitelisted [Suite:openshift/conformance/parallel] [Suite:k8s] [It]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:101

Expected
    : kernel.shm_rmid_forced = 0
to contain substring
    : kernel.shm_rmid_forced = 1

/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:139
------------------------------
S
------------------------------
[Feature:Builds][timing] capture build stages and durations should record build stages and durations for docker [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:82
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][timing] capture build stages and durations
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:36:00.008: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][timing] capture build stages and durations
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:36:01.680: INFO: configPath is now "/tmp/e2e-test-build-timing-tq4dt-user.kubeconfig"
Jul 9 19:36:01.680: INFO: The user is now "e2e-test-build-timing-tq4dt-user"
Jul 9 19:36:01.680: INFO: Creating project "e2e-test-build-timing-tq4dt"
Jul 9 19:36:01.820: INFO: Waiting on permissions in project "e2e-test-build-timing-tq4dt" ...
[BeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:41
Jul 9 19:36:01.884: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options: apparmor seccomp
 Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:45
STEP: waiting for builder service account
[It] should record build stages and durations for docker [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:82
STEP: creating test image stream
Jul 9 19:36:02.029: INFO: Running 'oc create --config=/tmp/e2e-test-build-timing-tq4dt-user.kubeconfig --namespace=e2e-test-build-timing-tq4dt -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-timing/test-is.json'
imagestream.image.openshift.io "test" created
STEP: creating test build config
Jul 9 19:36:02.385: INFO: Running 'oc create --config=/tmp/e2e-test-build-timing-tq4dt-user.kubeconfig --namespace=e2e-test-build-timing-tq4dt -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-timing/test-docker-build.json'
buildconfig.build.openshift.io "test" created
STEP: starting the test docker build
Jul 9 19:36:02.689: INFO: Running 'oc start-build --config=/tmp/e2e-test-build-timing-tq4dt-user.kubeconfig --namespace=e2e-test-build-timing-tq4dt test --from-file /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-timing/Dockerfile -o=name'
Jul 9 19:36:05.847: INFO: start-build output with args [test --from-file /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-timing/Dockerfile -o=name]:
Error>
StdOut>
build/test-1
StdErr>
Uploading file "/tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-timing/Dockerfile" as binary input for the build ...
Jul 9 19:36:05.849: INFO: Waiting for test-1 to complete
Jul 9 19:36:52.000: INFO: Done waiting for test-1: util.BuildResult{BuildPath:"build/test-1", BuildName:"test-1", StartBuildStdErr:"Uploading file \"/tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-timing/Dockerfile\" as binary input for the build ...", StartBuildStdOut:"build/test-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc4210ba300), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4201d3e00)} with error:
[AfterEach]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:51
[AfterEach] [Feature:Builds][timing] capture build stages and durations
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:36:52.063: INFO: namespace : e2e-test-build-timing-tq4dt api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][timing] capture build stages and durations
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:36:58.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:58.127 seconds]
[Feature:Builds][timing] capture build stages and durations
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:29
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:40
should record build stages and durations for docker [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:82
------------------------------
SS
------------------------------
[sig-storage] Projected optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:35:16.779: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:35:18.320: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-gqsgp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jul 9 19:35:19.012: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating secret with name s-test-opt-del-e2c7259c-83e9-11e8-bd2e-28d244b00276
STEP: Creating secret with name s-test-opt-upd-e2c72615-83e9-11e8-bd2e-28d244b00276
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e2c7259c-83e9-11e8-bd2e-28d244b00276
STEP: Updating secret s-test-opt-upd-e2c72615-83e9-11e8-bd2e-28d244b00276
STEP: Creating secret with name s-test-opt-create-e2c72648-83e9-11e8-bd2e-28d244b00276
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:36:35.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gqsgp" for this suite.
Jul 9 19:36:59.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:37:02.155: INFO: namespace: e2e-tests-projected-gqsgp, resource: bindings, ignored listing per whitelist
Jul 9 19:37:03.624: INFO: namespace e2e-tests-projected-gqsgp deletion completed in 28.109739586s
• [SLOW TEST:106.845 seconds]
[sig-storage] Projected
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
optional updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Downward API volume should provide container's memory request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:36:58.140: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:36:59.660: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-mp4r7
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide container's memory request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jul 9 19:37:00.431: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f333512-83ea-11e8-8401-28d244b00276" in namespace "e2e-tests-downward-api-mp4r7" to be "success or failure"
Jul 9 19:37:00.463: INFO: Pod "downwardapi-volume-1f333512-83ea-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 32.372176ms
Jul 9 19:37:02.543: INFO: Pod "downwardapi-volume-1f333512-83ea-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.111864855s
STEP: Saw pod success
Jul 9 19:37:02.543: INFO: Pod "downwardapi-volume-1f333512-83ea-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:37:02.574: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-1f333512-83ea-11e8-8401-28d244b00276 container client-container:
STEP: delete the pod
Jul 9 19:37:02.668: INFO: Waiting for pod downwardapi-volume-1f333512-83ea-11e8-8401-28d244b00276 to disappear
Jul 9 19:37:02.714: INFO: Pod downwardapi-volume-1f333512-83ea-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:37:02.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mp4r7" for this suite.
Jul 9 19:37:08.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:37:11.961: INFO: namespace: e2e-tests-downward-api-mp4r7, resource: bindings, ignored listing per whitelist
Jul 9 19:37:12.507: INFO: namespace e2e-tests-downward-api-mp4r7 deletion completed in 9.747708553s
• [SLOW TEST:14.368 seconds]
[sig-storage] Downward API volume
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
should provide container's memory request [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[Conformance][templates] templateinstance security tests should pass security tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_security.go:119
[BeforeEach] [Top Level]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][templates] templateinstance security tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:36:52.207: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Conformance][templates] templateinstance security tests
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:36:54.490: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-user.kubeconfig"
Jul 9 19:36:54.490: INFO: The user is now "e2e-test-templates-wt4pc-user"
Jul 9 19:36:54.490: INFO: Creating project "e2e-test-templates-wt4pc"
Jul 9 19:36:54.629: INFO: Waiting on permissions in project "e2e-test-templates-wt4pc" ...
[BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_security.go:72 Jul 9 19:36:55.407: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-adminuser.kubeconfig" Jul 9 19:36:55.672: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-edituser.kubeconfig" Jul 9 19:36:56.015: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-editbygroupuser.kubeconfig" [It] should pass security tests [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_security.go:119 STEP: checking edituser can create an object in a permitted namespace Jul 9 19:36:56.285: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-edituser.kubeconfig" STEP: checking editbygroupuser can create an object in a permitted namespace Jul 9 19:36:57.253: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-editbygroupuser.kubeconfig" STEP: checking edituser can't create an object in a non-permitted namespace Jul 9 19:36:58.203: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-edituser.kubeconfig" STEP: checking editbygroupuser can't create an object in a non-permitted namespace Jul 9 19:36:59.132: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-editbygroupuser.kubeconfig" STEP: checking edituser can't create an object that requires admin Jul 9 19:36:59.993: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-edituser.kubeconfig" STEP: checking editbygroupuser can't create an object that requires admin Jul 9 19:37:00.950: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-editbygroupuser.kubeconfig" STEP: checking adminuser can create an object that requires admin Jul 9 19:37:01.896: INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-adminuser.kubeconfig" STEP: checking adminuser can't create an object that requires more than admin Jul 9 19:37:02.852: 
INFO: configPath is now "/tmp/e2e-test-templates-wt4pc-adminuser.kubeconfig" [AfterEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_security.go:104 TemplateInstances: []template.TemplateInstance{template.TemplateInstance{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"templateinstance", GenerateName:"", Namespace:"e2e-test-templates-wt4pc", SelfLink:"/apis/template.openshift.io/v1/namespaces/e2e-test-templates-wt4pc/templateinstances/templateinstance", UID:"20b4d9b5-83ea-11e8-aa51-0af96768d57e", ResourceVersion:"93648", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666787022, loc:(*time.Location)(0x6b11480)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string{"template.openshift.io/finalizer"}, ClusterName:""}, Spec:template.TemplateInstanceSpec{Template:template.Template{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"template", GenerateName:"", Namespace:"e2e-test-templates-wt4pc", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Message:"", Parameters:[]template.Parameter{template.Parameter{Name:"NAMESPACE", DisplayName:"", Description:"", Value:"", Generate:"", From:"", Required:true}}, Objects:[]runtime.Object{(*runtime.Unknown)(0xc421fbd6e0)}, ObjectLabels:map[string]string(nil)}, Secret:(*core.LocalObjectReference)(0xc421db4b00), 
Requester:(*template.TemplateInstanceRequester)(0xc4222b2880)}, Status:template.TemplateInstanceStatus{Conditions:[]template.TemplateInstanceCondition{template.TemplateInstanceCondition{Type:"Ready", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63666787023, loc:(*time.Location)(0x6b11480)}}, Reason:"Created", Message:""}}, Objects:[]template.TemplateInstanceObject(nil)}}}[AfterEach] [Conformance][templates] templateinstance security tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:37:03.635: INFO: namespace : e2e-test-templates-wt4pc api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Conformance][templates] templateinstance security tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 STEP: Dumping a list of prepulled images on each node... 
Jul 9 19:37:17.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [25.592 seconds] [Conformance][templates] templateinstance security tests /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_security.go:30 /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_security.go:71 should pass security tests [Suite:openshift/conformance/parallel] [It] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_security.go:119 Expected : false to equal : true /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_security.go:273 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] ConfigMap /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:37:12.508: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:37:14.111: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in 
e2e-tests-configmap-j4txq STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating configMap with name configmap-test-volume-27df35f7-83ea-11e8-8401-28d244b00276 STEP: Creating a pod to test consume configMaps Jul 9 19:37:15.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-27e40c9a-83ea-11e8-8401-28d244b00276" in namespace "e2e-tests-configmap-j4txq" to be "success or failure" Jul 9 19:37:15.040: INFO: Pod "pod-configmaps-27e40c9a-83ea-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 29.356768ms Jul 9 19:37:17.083: INFO: Pod "pod-configmaps-27e40c9a-83ea-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.072412031s STEP: Saw pod success Jul 9 19:37:17.083: INFO: Pod "pod-configmaps-27e40c9a-83ea-11e8-8401-28d244b00276" satisfied condition "success or failure" Jul 9 19:37:17.126: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod pod-configmaps-27e40c9a-83ea-11e8-8401-28d244b00276 container configmap-volume-test: STEP: delete the pod Jul 9 19:37:17.204: INFO: Waiting for pod pod-configmaps-27e40c9a-83ea-11e8-8401-28d244b00276 to disappear Jul 9 19:37:17.253: INFO: Pod pod-configmaps-27e40c9a-83ea-11e8-8401-28d244b00276 no longer exists [AfterEach] [sig-storage] ConfigMap /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:37:17.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-j4txq" for this suite. 
Jul 9 19:37:23.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:37:26.713: INFO: namespace: e2e-tests-configmap-j4txq, resource: bindings, ignored listing per whitelist Jul 9 19:37:27.243: INFO: namespace e2e-tests-configmap-j4txq deletion completed in 9.941792722s • [SLOW TEST:14.734 seconds] [sig-storage] ConfigMap /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:37:17.802: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:37:20.257: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-5mpfh STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38 [It] should set DefaultMode on files [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating a pod to test downward API volume plugin Jul 9 19:37:21.165: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2b8baa63-83ea-11e8-881a-28d244b00276" in namespace "e2e-tests-downward-api-5mpfh" to be "success or failure" Jul 9 19:37:21.246: INFO: Pod "downwardapi-volume-2b8baa63-83ea-11e8-881a-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 80.412911ms Jul 9 19:37:23.376: INFO: Pod "downwardapi-volume-2b8baa63-83ea-11e8-881a-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.210893087s STEP: Saw pod success Jul 9 19:37:23.376: INFO: Pod "downwardapi-volume-2b8baa63-83ea-11e8-881a-28d244b00276" satisfied condition "success or failure" Jul 9 19:37:23.415: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downwardapi-volume-2b8baa63-83ea-11e8-881a-28d244b00276 container client-container: STEP: delete the pod Jul 9 19:37:23.602: INFO: Waiting for pod downwardapi-volume-2b8baa63-83ea-11e8-881a-28d244b00276 to disappear Jul 9 19:37:23.652: INFO: Pod downwardapi-volume-2b8baa63-83ea-11e8-881a-28d244b00276 no longer exists [AfterEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:37:23.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5mpfh" for this suite. 
Jul 9 19:37:31.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:37:36.869: INFO: namespace: e2e-tests-downward-api-5mpfh, resource: bindings, ignored listing per whitelist Jul 9 19:37:37.035: INFO: namespace e2e-tests-downward-api-5mpfh deletion completed in 13.328936796s • [SLOW TEST:19.233 seconds] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33 should set DefaultMode on files [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ [Feature:Builds][Conformance] s2i build with a root user image should create a root build and pass with a privileged SCC [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:79 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Feature:Builds][Conformance] s2i build with a root user image /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:36:11.690: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Feature:Builds][Conformance] s2i build with a root user image /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:36:13.808: INFO: configPath is now "/tmp/e2e-test-s2i-build-root-q6gm8-user.kubeconfig" Jul 
9 19:36:13.808: INFO: The user is now "e2e-test-s2i-build-root-q6gm8-user" Jul 9 19:36:13.808: INFO: Creating project "e2e-test-s2i-build-root-q6gm8" Jul 9 19:36:13.944: INFO: Waiting on permissions in project "e2e-test-s2i-build-root-q6gm8" ... [BeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:24 Jul 9 19:36:14.004: INFO: docker info output: Containers: 4 Running: 0 Paused: 0 Stopped: 4 Images: 4 Server Version: 1.13.1 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 20 Dirperm1 Supported: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1) runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f) init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574) Security Options: apparmor seccomp Profile: default Kernel Version: 4.4.0-128-generic Operating System: Ubuntu 16.04.4 LTS OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 7.495 GiB Name: yifan-coreos ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Username: yifan Registry: https://index.docker.io/v1/ Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false [JustBeforeEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:28 STEP: waiting for builder service account STEP: creating a root build container Jul 9 19:36:14.143: INFO: Running 'oc new-build --config=/tmp/e2e-test-s2i-build-root-q6gm8-user.kubeconfig --namespace=e2e-test-s2i-build-root-q6gm8 -D FROM centos/nodejs-6-centos7 USER 0 --name nodejsroot' --> Found Docker image 
7e95117 (2 weeks old) from Docker Hub for "centos/nodejs-6-centos7" Node.js 6 --------- Node.js 6 available as container is a base platform for building and running various Node.js 6 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. Tags: builder, nodejs, nodejs6 * An image stream will be created as "nodejs-6-centos7:latest" that will track the source image * A Docker build using a predefined Dockerfile will be created * The resulting image will be pushed to image stream "nodejsroot:latest" * Every time "nodejs-6-centos7:latest" changes a new build will be triggered --> Creating resources with label build=nodejsroot ... imagestream "nodejs-6-centos7" created imagestream "nodejsroot" created buildconfig "nodejsroot" created --> Success Build configuration "nodejsroot" created and build triggered. Run 'oc logs -f bc/nodejsroot' to stream the build progress. [It] should create a root build and pass with a privileged SCC [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:79 STEP: adding builder account to privileged SCC Jul 9 19:36:56.984: INFO: Running 'oc new-app --config=/tmp/e2e-test-s2i-build-root-q6gm8-user.kubeconfig --namespace=e2e-test-s2i-build-root-q6gm8 nodejsroot~https://github.com/openshift/nodejs-ex --name nodejspass' --> Found image 4303837 (19 seconds old) in image stream "e2e-test-s2i-build-root-q6gm8/nodejsroot" under tag "latest" for "nodejsroot" Node.js 6 --------- Node.js 6 available as container is a base platform for building and running various Node.js 6 applications and frameworks. 
Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. Tags: builder, nodejs, nodejs6 * A source build using source code from https://github.com/openshift/nodejs-ex will be created * The resulting image will be pushed to image stream "nodejspass:latest" * Use 'start-build' to trigger a new build * This image will be deployed in deployment config "nodejspass" * Port 8080/tcp will be load balanced by service "nodejspass" * Other containers can access this service through the hostname "nodejspass" * WARNING: Image "e2e-test-s2i-build-root-q6gm8/nodejsroot:latest" runs as the 'root' user which may not be permitted by your cluster administrator --> Creating resources ... imagestream "nodejspass" created buildconfig "nodejspass" created deploymentconfig "nodejspass" created service "nodejspass" created --> Success Build scheduled, use 'oc logs -f bc/nodejspass' to track its progress. Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose svc/nodejspass' Run 'oc status' to view your app. 
[AfterEach] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:41 [AfterEach] [Feature:Builds][Conformance] s2i build with a root user image /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72 STEP: Deleting namespaces Jul 9 19:37:24.064: INFO: namespace : e2e-test-s2i-build-root-q6gm8 api call to delete is complete STEP: Waiting for namespaces to vanish [AfterEach] [Feature:Builds][Conformance] s2i build with a root user image /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:37:38.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:86.465 seconds] [Feature:Builds][Conformance] s2i build with a root user image /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:16 /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:23 should create a root build and pass with a privileged SCC [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:79 ------------------------------ [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] basic functionality /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:37:37.036: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object 
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-services1-r56br STEP: Waiting for a default service account to be provisioned in namespace [It] should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 Jul 9 19:37:39.583: INFO: Only one node is available in this environment ([ip-10-0-130-54.us-west-2.compute.internal] out of [ip-10-0-130-54.us-west-2.compute.internal]) [AfterEach] basic functionality /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:37:39.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-net-services1-r56br" for this suite. Jul 9 19:37:45.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:37:49.082: INFO: namespace: e2e-tests-net-services1-r56br, resource: bindings, ignored listing per whitelist Jul 9 19:37:50.558: INFO: namespace e2e-tests-net-services1-r56br deletion completed in 10.93015029s S [SKIPPING] [13.522 seconds] [Area:Networking] services /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:10 basic functionality /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:11 should allow connections to another pod on a different node via a service IP [Suite:openshift/conformance/parallel] [It] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:18 Jul 9 19:37:39.583: Only one node is available in this environment 
([ip-10-0-130-54.us-west-2.compute.internal] out of [ip-10-0-130-54.us-west-2.compute.internal]) /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296 ------------------------------ [k8s.io] PrivilegedPod should enable privileged commands [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/privileged.go:47 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] PrivilegedPod /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:37:03.625: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:37:05.354: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-e2e-privileged-pod-4wxmq STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/privileged.go:47 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container Jul 9 19:37:08.138: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-tests-e2e-privileged-pod-4wxmq PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 9 
19:37:08.138: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig Jul 9 19:37:08.666: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-tests-e2e-privileged-pod-4wxmq PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 9 19:37:08.666: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Executing in the non-privileged container Jul 9 19:37:09.097: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-tests-e2e-privileged-pod-4wxmq PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 9 19:37:09.097: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [AfterEach] [k8s.io] PrivilegedPod /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:37:09.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-privileged-pod-4wxmq" for this suite. 
Jul 9 19:37:49.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:37:52.098: INFO: namespace: e2e-tests-e2e-privileged-pod-4wxmq, resource: bindings, ignored listing per whitelist
Jul 9 19:37:53.066: INFO: namespace e2e-tests-e2e-privileged-pod-4wxmq deletion completed in 43.555800856s

• [SLOW TEST:49.440 seconds]
[k8s.io] PrivilegedPod
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should enable privileged commands [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/privileged.go:47
------------------------------
SSS
------------------------------
[Feature:Builds][Conformance] s2i build with a quota Building from a template should create an s2i build with a quota and run it [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:45
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds][Conformance] s2i build with a quota
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:37:27.245: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds][Conformance] s2i build with a quota
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:37:28.933: INFO: configPath is now "/tmp/e2e-test-s2i-build-quota-k4tn8-user.kubeconfig"
Jul 9 19:37:28.933: INFO: The user is now "e2e-test-s2i-build-quota-k4tn8-user"
Jul 9 19:37:28.933: INFO: Creating project "e2e-test-s2i-build-quota-k4tn8"
Jul 9 19:37:29.452: INFO: Waiting on permissions in project "e2e-test-s2i-build-quota-k4tn8" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:27
Jul 9 19:37:29.548: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:31
STEP: waiting for builder service account
[It] should create an s2i build with a quota and run it [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:45
STEP: calling oc create -f "/tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/test-s2i-build-quota.json"
Jul 9 19:37:30.378: INFO: Running 'oc create --config=/tmp/e2e-test-s2i-build-quota-k4tn8-user.kubeconfig --namespace=e2e-test-s2i-build-quota-k4tn8 -f /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/test-s2i-build-quota.json'
buildconfig.build.openshift.io "s2i-build-quota" created
STEP: starting a test build
Jul 9 19:37:30.677: INFO: Running 'oc start-build --config=/tmp/e2e-test-s2i-build-quota-k4tn8-user.kubeconfig --namespace=e2e-test-s2i-build-quota-k4tn8 s2i-build-quota --from-dir /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-quota -o=name'
Jul 9 19:37:34.058: INFO: start-build output with args [s2i-build-quota --from-dir /tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-quota -o=name]:
Error>
StdOut> build/s2i-build-quota-1
StdErr> Uploading directory "/tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-quota" as binary input for the build ...
Jul 9 19:37:34.059: INFO: Waiting for s2i-build-quota-1 to complete
Jul 9 19:37:50.164: INFO: Done waiting for s2i-build-quota-1: util.BuildResult{BuildPath:"build/s2i-build-quota-1", BuildName:"s2i-build-quota-1", StartBuildStdErr:"Uploading directory \"/tmp/fixture-testdata-dir180677416/test/extended/testdata/builds/build-quota\" as binary input for the build ...", StartBuildStdOut:"build/s2i-build-quota-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421fcb800), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc4200e3d10)} with error:
STEP: expecting the build logs to contain the correct cgroups values
Jul 9 19:37:50.164: INFO: Running 'oc logs --config=/tmp/e2e-test-s2i-build-quota-k4tn8-user.kubeconfig --namespace=e2e-test-s2i-build-quota-k4tn8 -f build/s2i-build-quota-1 --timestamps'
Jul 9 19:37:50.683: INFO: Found event v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"s2i-build-quota-1.153fe16c61196341", GenerateName:"", Namespace:"e2e-test-s2i-build-quota-k4tn8", SelfLink:"/api/v1/namespaces/e2e-test-s2i-build-quota-k4tn8/events/s2i-build-quota-1.153fe16c61196341", UID:"33324943-83ea-11e8-84c6-0af96768d57e", ResourceVersion:"94141", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666787053, loc:(*time.Location)(0x6b11480)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Build", Namespace:"e2e-test-s2i-build-quota-k4tn8", Name:"s2i-build-quota-1", UID:"320520fb-83ea-11e8-aa51-0af96768d57e", APIVersion:"build.openshift.io/v1", ResourceVersion:"94140", FieldPath:""}, Reason:"BuildStarted", Message:"Build e2e-test-s2i-build-quota-k4tn8/s2i-build-quota-1 is now running", Source:v1.EventSource{Component:"build-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666787053, loc:(*time.Location)(0x6b11480)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666787053, loc:(*time.Location)(0x6b11480)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}
Jul 9 19:37:50.737: INFO: Found event v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"s2i-build-quota-1.153fe16c61196341", GenerateName:"", Namespace:"e2e-test-s2i-build-quota-k4tn8", SelfLink:"/api/v1/namespaces/e2e-test-s2i-build-quota-k4tn8/events/s2i-build-quota-1.153fe16c61196341", UID:"33324943-83ea-11e8-84c6-0af96768d57e", ResourceVersion:"94141", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666787053, loc:(*time.Location)(0x6b11480)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Build", Namespace:"e2e-test-s2i-build-quota-k4tn8", Name:"s2i-build-quota-1", UID:"320520fb-83ea-11e8-aa51-0af96768d57e", APIVersion:"build.openshift.io/v1", ResourceVersion:"94140", FieldPath:""}, Reason:"BuildStarted", Message:"Build e2e-test-s2i-build-quota-k4tn8/s2i-build-quota-1 is now running", Source:v1.EventSource{Component:"build-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666787053, loc:(*time.Location)(0x6b11480)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666787053, loc:(*time.Location)(0x6b11480)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}
Jul 9 19:37:50.737: INFO: Found event v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"s2i-build-quota-1.153fe16fa1aef348", GenerateName:"", Namespace:"e2e-test-s2i-build-quota-k4tn8", SelfLink:"/api/v1/namespaces/e2e-test-s2i-build-quota-k4tn8/events/s2i-build-quota-1.153fe16fa1aef348", UID:"3b85b6b2-83ea-11e8-84c6-0af96768d57e", ResourceVersion:"94280", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666787067, loc:(*time.Location)(0x6b11480)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Build", Namespace:"e2e-test-s2i-build-quota-k4tn8", Name:"s2i-build-quota-1", UID:"320520fb-83ea-11e8-aa51-0af96768d57e", APIVersion:"build.openshift.io/v1", ResourceVersion:"94279", FieldPath:""}, Reason:"BuildCompleted", Message:"Build e2e-test-s2i-build-quota-k4tn8/s2i-build-quota-1 completed successfully", Source:v1.EventSource{Component:"build-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666787067, loc:(*time.Location)(0x6b11480)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63666787067, loc:(*time.Location)(0x6b11480)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:37
[AfterEach] [Feature:Builds][Conformance] s2i build with a quota
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:37:50.798: INFO: namespace : e2e-test-s2i-build-quota-k4tn8 api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds][Conformance] s2i build with a quota
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:37:56.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:29.730 seconds]
[Feature:Builds][Conformance] s2i build with a quota
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:14
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:26
    Building from a template
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:44
      should create an s2i build with a quota and run it [Suite:openshift/conformance/parallel]
      /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:45
------------------------------
S
------------------------------
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that implements NetworkPolicy
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:431
Jul 9 19:37:56.977: INFO: This plugin does not implement NetworkPolicy.
[AfterEach] when using a plugin that implements NetworkPolicy
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:37:56.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
NetworkPolicy
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:48
  when using a plugin that implements NetworkPolicy
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:430
    should enforce policy based on Ports [Feature:OSNetworkPolicy] [Suite:openshift/conformance/parallel] [BeforeEach]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:132

    Jul 9 19:37:56.977: This plugin does not implement NetworkPolicy.

    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296
------------------------------
S
------------------------------
[Feature:DeploymentConfig] deploymentconfigs with env in params referencing the configmap [Conformance] should expand the config map key to a value [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:486
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:37:53.069: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:37:54.658: INFO: configPath is now "/tmp/e2e-test-cli-deployment-x4l2g-user.kubeconfig"
Jul 9 19:37:54.658: INFO: The user is now "e2e-test-cli-deployment-x4l2g-user"
Jul 9 19:37:54.658: INFO: Creating project "e2e-test-cli-deployment-x4l2g"
Jul 9 19:37:54.802: INFO: Waiting on permissions in project "e2e-test-cli-deployment-x4l2g" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should expand the config map key to a value [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:486
Jul 9 19:37:54.843: INFO: Running 'oc create --config=/tmp/e2e-test-cli-deployment-x4l2g-user.kubeconfig --namespace=e2e-test-cli-deployment-x4l2g configmap test --from-literal=foo=bar'
Jul 9 19:37:55.322: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-x4l2g-user.kubeconfig --namespace=e2e-test-cli-deployment-x4l2g latest dc/deployment-simple'
Jul 9 19:37:55.702: INFO: Running 'oc rollout --config=/tmp/e2e-test-cli-deployment-x4l2g-user.kubeconfig --namespace=e2e-test-cli-deployment-x4l2g status dc/deployment-simple'
Jul 9 19:38:02.559: INFO: Error running &{/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/e2e-test-cli-deployment-x4l2g-user.kubeconfig --namespace=e2e-test-cli-deployment-x4l2g status dc/deployment-simple] [] Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
error: replication controller "deployment-simple-1" has failed progressing
 Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
error: replication controller "deployment-simple-1" has failed progressing
 [] 0xc420b370b0 exit status 1 true [0xc421961460 0xc421961498 0xc421961498] [0xc421961460 0xc421961498] [0xc421961468 0xc421961490] [0x916090 0x916190] 0xc4214ddb00 }:
Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
error: replication controller "deployment-simple-1" has failed progressing
Jul 9 19:38:02.560: INFO: Running 'oc logs --config=/tmp/e2e-test-cli-deployment-x4l2g-user.kubeconfig --namespace=e2e-test-cli-deployment-x4l2g dc/deployment-simple'
[AfterEach] with env in params referencing the configmap [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:482
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:38:05.011: INFO: namespace : e2e-test-cli-deployment-x4l2g api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:38:11.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:18.015 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  with env in params referencing the configmap [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:480
    should expand the config map key to a value [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:486
------------------------------
[Feature:DeploymentConfig] deploymentconfigs with custom deployments [Conformance] should run the custom deployment steps [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:572
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:37:50.559: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:37:52.741: INFO: configPath is now "/tmp/e2e-test-cli-deployment-n2gpb-user.kubeconfig"
Jul 9 19:37:52.741: INFO: The user is now "e2e-test-cli-deployment-n2gpb-user"
Jul 9 19:37:52.741: INFO: Creating project "e2e-test-cli-deployment-n2gpb"
Jul 9 19:37:52.868: INFO: Waiting on permissions in project "e2e-test-cli-deployment-n2gpb" ...
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should run the custom deployment steps [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:572
Jul 9 19:37:55.480: INFO: Running 'oc deploy --config=/tmp/e2e-test-cli-deployment-n2gpb-user.kubeconfig --namespace=e2e-test-cli-deployment-n2gpb --follow dc/custom-deployment'
STEP: verifying the deployment is marked complete
Jul 9 19:38:05.126: INFO: Latest rollout of dc/custom-deployment (rc/custom-deployment-1) is complete.
STEP: checking the logs for substrings
Command "deploy" is deprecated, Use the `rollout latest` and `rollout cancel` commands instead.
--> pre: Running hook pod ...
test pre hook executed
--> pre: Success
--> Scaling custom-deployment-1 to 2
--> Reached 50%
Halfway
--> pre: Hook pod already succeeded
--> Success
Finished
[AfterEach] with custom deployments [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:568
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:38:07.224: INFO: namespace : e2e-test-cli-deployment-n2gpb api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:38:13.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:22.754 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  with custom deployments [Conformance]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:566
    should run the custom deployment steps [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:572
------------------------------
SS
------------------------------
[sig-api-machinery] Downward API should provide pod name, namespace and IP address as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [sig-api-machinery] Downward API
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:37:56.979: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
Jul 9 19:37:58.607: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-xbdv9
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward api env vars
Jul 9 19:37:59.279: INFO: Waiting up to 5m0s for pod "downward-api-42464c9b-83ea-11e8-8401-28d244b00276" in namespace "e2e-tests-downward-api-xbdv9" to be "success or failure"
Jul 9 19:37:59.307: INFO: Pod "downward-api-42464c9b-83ea-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 27.903836ms
Jul 9 19:38:01.337: INFO: Pod "downward-api-42464c9b-83ea-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058753196s
Jul 9 19:38:03.369: INFO: Pod "downward-api-42464c9b-83ea-11e8-8401-28d244b00276": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09014627s
Jul 9 19:38:05.409: INFO: Pod "downward-api-42464c9b-83ea-11e8-8401-28d244b00276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130078366s
STEP: Saw pod success
Jul 9 19:38:05.409: INFO: Pod "downward-api-42464c9b-83ea-11e8-8401-28d244b00276" satisfied condition "success or failure"
Jul 9 19:38:05.441: INFO: Trying to get logs from node ip-10-0-130-54.us-west-2.compute.internal pod downward-api-42464c9b-83ea-11e8-8401-28d244b00276 container dapi-container:
STEP: delete the pod
Jul 9 19:38:05.516: INFO: Waiting for pod downward-api-42464c9b-83ea-11e8-8401-28d244b00276 to disappear
Jul 9 19:38:05.545: INFO: Pod downward-api-42464c9b-83ea-11e8-8401-28d244b00276 no longer exists
[AfterEach] [sig-api-machinery] Downward API
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:38:05.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xbdv9" for this suite.
Jul 9 19:38:11.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:38:13.488: INFO: namespace: e2e-tests-downward-api-xbdv9, resource: bindings, ignored listing per whitelist
Jul 9 19:38:15.017: INFO: namespace e2e-tests-downward-api-xbdv9 deletion completed in 9.430745534s

• [SLOW TEST:18.038 seconds]
[sig-api-machinery] Downward API
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:37
  should provide pod name, namespace and IP address as env vars [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[Feature:Builds] forcePull should affect pulling builder images ForcePull test case execution custom [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:121
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Feature:Builds] forcePull should affect pulling builder images
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:38:13.316: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
[BeforeEach] [Feature:Builds] forcePull should affect pulling builder images
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83
Jul 9 19:38:15.456: INFO: configPath is now "/tmp/e2e-test-forcepull-xdlbl-user.kubeconfig"
Jul 9 19:38:15.457: INFO: The user is now "e2e-test-forcepull-xdlbl-user"
Jul 9 19:38:15.457: INFO: Creating project "e2e-test-forcepull-xdlbl"
Jul 9 19:38:15.581: INFO: Waiting on permissions in project "e2e-test-forcepull-xdlbl" ...
[BeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:68
Jul 9 19:38:15.635: INFO: docker info output:
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 4
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 20
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-128-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.495 GiB
Name: yifan-coreos
ID: ORNN:EABZ:BH7E:KUJC:6IIM:25VE:YORH:CRWG:6TBV:UF4G:AZXO:BS6Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: yifan
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
STEP: granting system:build-strategy-custom
Jul 9 19:38:15.635: INFO: Running 'oc create --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-forcepull-xdlbl clusterrolebinding custombuildaccess-e2e-test-forcepull-xdlbl-user --clusterrole system:build-strategy-custom --user e2e-test-forcepull-xdlbl-user'
clusterrolebinding.rbac.authorization.k8s.io "custombuildaccess-e2e-test-forcepull-xdlbl-user" created
STEP: waiting for openshift/ruby:latest ImageStreamTag
STEP: waiting for an is importer to import a tag latest into a stream ruby
STEP: create application build configs for 3 strategies
Jul 9 19:38:16.048: INFO: Running 'oc create --config=/tmp/e2e-test-forcepull-xdlbl-user.kubeconfig --namespace=e2e-test-forcepull-xdlbl -f /tmp/fixture-testdata-dir574852015/test/extended/testdata/forcepull-test.json'
buildconfig.build.openshift.io "ruby-sample-build-tc" created
buildconfig.build.openshift.io "ruby-sample-build-td" created
buildconfig.build.openshift.io "ruby-sample-build-ts" created
[JustBeforeEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:99
STEP: waiting for builder service account
[It] ForcePull test case execution custom [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:121
STEP: when custom force pull is true
Jul 9 19:38:16.616: INFO: Running 'oc start-build --config=/tmp/e2e-test-forcepull-xdlbl-user.kubeconfig --namespace=e2e-test-forcepull-xdlbl ruby-sample-build-tc -o=name'
Jul 9 19:38:16.900: INFO: start-build output with args [ruby-sample-build-tc -o=name]:
Error>
StdOut> build/ruby-sample-build-tc-1
StdErr>
Jul 9 19:38:16.901: INFO: Waiting for ruby-sample-build-tc-1 to complete
Jul 9 19:38:23.008: INFO: Done waiting for ruby-sample-build-tc-1: util.BuildResult{BuildPath:"build/ruby-sample-build-tc-1", BuildName:"ruby-sample-build-tc-1", StartBuildStdErr:"", StartBuildStdOut:"build/ruby-sample-build-tc-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc4210f1b00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc42004c960)} with error:
Jul 9 19:38:23.008: INFO: Running 'oc get --config=/tmp/e2e-test-forcepull-xdlbl-user.kubeconfig --namespace=e2e-test-forcepull-xdlbl pods ruby-sample-build-tc-1-build -o jsonpath='{.spec.containers[0].imagePullPolicy}''
[AfterEach]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:87
Jul 9 19:38:23.278: INFO: Running 'oc delete --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-forcepull-xdlbl clusterrolebinding custombuildaccess-e2e-test-forcepull-xdlbl-user'
clusterrolebinding.rbac.authorization.k8s.io "custombuildaccess-e2e-test-forcepull-xdlbl-user" deleted
[AfterEach] [Feature:Builds] forcePull should affect pulling builder images
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:38:23.755: INFO: namespace : e2e-test-forcepull-xdlbl api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Feature:Builds] forcePull should affect pulling builder images
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:38:29.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:16.549 seconds]
[Feature:Builds] forcePull should affect pulling builder images
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:62
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:66
    ForcePull test case execution custom [Suite:openshift/conformance/parallel]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:121
------------------------------
Jul 9 19:38:29.866: INFO: Running AfterSuite actions on all node
[BeforeEach] [Top Level]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:407
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:38:15.018: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-isolation1-8lx7k
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul 9 19:38:16.778: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
STEP: Building a namespace api object
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-net-isolation2-8tk97
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19
Jul 9 19:38:18.598: INFO: Only one node is available in this environment ([ip-10-0-130-54.us-west-2.compute.internal] out of [ip-10-0-130-54.us-west-2.compute.internal])
[AfterEach] when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:38:18.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-isolation1-8lx7k" for this suite.
Jul 9 19:38:24.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 9 19:38:27.925: INFO: namespace: e2e-tests-net-isolation1-8lx7k, resource: bindings, ignored listing per whitelist
Jul 9 19:38:28.175: INFO: namespace e2e-tests-net-isolation1-8lx7k deletion completed in 9.540839883s
[AfterEach] when using a plugin that does not isolate namespaces by default
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul 9 19:38:28.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-isolation2-8tk97" for this suite.
Jul 9 19:38:34.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:38:36.922: INFO: namespace: e2e-tests-net-isolation2-8tk97, resource: bindings, ignored listing per whitelist Jul 9 19:38:37.913: INFO: namespace e2e-tests-net-isolation2-8tk97 deletion completed in 9.685399899s S [SKIPPING] [22.895 seconds] [Area:Networking] network isolation /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10 when using a plugin that does not isolate namespaces by default /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:406 should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel] [It] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19 Jul 9 19:38:18.598: Only one node is available in this environment ([ip-10-0-130-54.us-west-2.compute.internal] out of [ip-10-0-130-54.us-west-2.compute.internal]) /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:296 ------------------------------ Jul 9 19:38:37.915: INFO: Running AfterSuite actions on all node [k8s.io] Pods should support retrieving logs from the container over websockets [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:546 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Pods 
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:37:38.156: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:37:40.103: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-769kq STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127 [It] should support retrieving logs from the container over websockets [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:546 Jul 9 19:37:40.899: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:37:45.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-769kq" for this suite. 
Jul 9 19:38:37.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:38:41.007: INFO: namespace: e2e-tests-pods-769kq, resource: bindings, ignored listing per whitelist Jul 9 19:38:41.728: INFO: namespace e2e-tests-pods-769kq deletion completed in 56.463942365s • [SLOW TEST:63.572 seconds] [k8s.io] Pods /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:669 should support retrieving logs from the container over websockets [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:546 ------------------------------ Jul 9 19:38:41.729: INFO: Running AfterSuite actions on all node [sig-storage] Downward API volume should update annotations on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:38:11.087: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig STEP: Building a namespace api object Jul 9 19:38:12.694: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-f4jjc STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38 [It] should update annotations on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 STEP: Creating the pod Jul 9 19:38:16.074: INFO: Successfully updated pod "annotationupdate4aa83994-83ea-11e8-bd2e-28d244b00276" [AfterEach] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Jul 9 19:38:18.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-f4jjc" for this suite. 
Jul 9 19:38:40.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 9 19:38:43.844: INFO: namespace: e2e-tests-downward-api-f4jjc, resource: bindings, ignored listing per whitelist Jul 9 19:38:43.917: INFO: namespace e2e-tests-downward-api-f4jjc deletion completed in 25.731268577s • [SLOW TEST:32.831 seconds] [sig-storage] Downward API volume /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33 should update annotations on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:674 ------------------------------ Jul 9 19:38:43.919: INFO: Running AfterSuite actions on all node [Conformance][Area:Networking][Feature:Router] The HAProxy router should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:68 [BeforeEach] [Top Level] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141 STEP: Creating a kubernetes client Jul 9 19:36:17.600: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig [BeforeEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:83 Jul 9 19:36:19.413: INFO: 
configPath is now "/tmp/e2e-test-router-scoped-5l86w-user.kubeconfig" Jul 9 19:36:19.413: INFO: The user is now "e2e-test-router-scoped-5l86w-user" Jul 9 19:36:19.413: INFO: Creating project "e2e-test-router-scoped-5l86w" Jul 9 19:36:19.549: INFO: Waiting on permissions in project "e2e-test-router-scoped-5l86w" ... [BeforeEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:48 Jul 9 19:36:19.703: INFO: Running 'oc new-app --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-5l86w -f /tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router' --> Deploying template "e2e-test-router-scoped-5l86w/" for "/tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml" to project e2e-test-router-scoped-5l86w * With parameters: * IMAGE=openshift/origin-haproxy-router * SCOPE=["--name=test-scoped", "--namespace=$(POD_NAMESPACE)", "--loglevel=4", "--labels=select=first"] --> Creating resources ... pod "router-scoped" created pod "router-override" created pod "router-override-domains" created rolebinding "system-router" created route "route-1" created route "route-2" created route "route-override-domain-1" created route "route-override-domain-2" created service "endpoints" created pod "endpoint-1" created --> Success Access your application via route 'first.example.com' Access your application via route 'second.example.com' Access your application via route 'y.a.null.ptr' Access your application via route 'main.void.str' Run 'oc status' to view your app. 
[It] should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:68 Jul 9 19:36:20.720: INFO: Creating new exec pod STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir333495585/test/extended/testdata/scoped-router.yaml" STEP: waiting for the healthz endpoint to respond Jul 9 19:36:29.873: INFO: Running '/home/yifan/gopher/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443 --kubeconfig=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig exec --namespace=e2e-test-router-scoped-5l86w execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 10.2.2.70' "http://10.2.2.70:1936/healthz" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Jul 9 19:39:35.897: INFO: stderr: "error 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 
7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\nerror 7\n" [AfterEach] [Conformance][Area:Networking][Feature:Router] /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:36 Jul 9 19:39:35.980: INFO: Routes: NAME ROUTER HOST LAST TRANSITION route-1 router first.example.com 2018-07-09 19:36:20 -0700 PDT route-1 test-override-domains first.example.com 2018-07-09 19:36:28 -0700 PDT route-1 test-override route-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com 2018-07-09 19:36:31 -0700 PDT route-1 test-scoped first.example.com 2018-07-09 19:36:33 -0700 PDT route-2 router second.example.com 2018-07-09 19:36:20 -0700 PDT route-2 test-override-domains second.example.com 2018-07-09 19:36:28 -0700 PDT route-2 test-override route-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com 2018-07-09 19:36:32 -0700 PDT route-override-domain-1 router y.a.null.ptr 2018-07-09 19:36:20 -0700 PDT route-override-domain-1 test-override-domains route-override-domain-1-e2e-test-router-scoped-5l86w.apps.veto.test 2018-07-09 19:36:29 -0700 PDT route-override-domain-1 test-override route-override-domain-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com 2018-07-09 
19:36:32 -0700 PDT route-override-domain-2 router main.void.str 2018-07-09 19:36:20 -0700 PDT route-override-domain-2 test-override-domains route-override-domain-2-e2e-test-router-scoped-5l86w.apps.veto.test 2018-07-09 19:36:29 -0700 PDT route-override-domain-2 test-override route-override-domain-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com 2018-07-09 19:36:32 -0700 PDT Jul 9 19:39:36.022: INFO: Running 'oc describe --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-5l86w pod/router-override' Jul 9 19:39:38.676: INFO: Describing pod "router-override" Name: router-override Namespace: e2e-test-router-scoped-5l86w Node: ip-10-0-130-54.us-west-2.compute.internal/10.0.130.54 Start Time: Mon, 09 Jul 2018 19:36:20 -0700 Labels: test=router-override Annotations: openshift.io/generated-by=OpenShiftNewApp openshift.io/scc=anyuid Status: Running IP: 10.2.2.71 Containers: router: Container ID: docker://efa569d611baf6896db6730d0c2f7b80e5dc7f127891fa475a36d46e36aac475 Image: openshift/origin-haproxy-router Image ID: docker-pullable://openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5 Ports: 80/TCP, 443/TCP, 1936/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP Args: --name=test-override --namespace=$(POD_NAMESPACE) --loglevel=4 --override-hostname --hostname-template=${name}-${namespace}.myapps.mycompany.com State: Running Started: Mon, 09 Jul 2018 19:36:24 -0700 Ready: True Restart Count: 0 Environment: POD_NAMESPACE: e2e-test-router-scoped-5l86w (v1:metadata.namespace) Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-nftbd (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-nftbd: Type: Secret (a volume populated by a Secret) SecretName: default-token-nftbd Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: Events: Type Reason Age From Message ---- ------ 
---- ---- ------- Normal Scheduled 3m default-scheduler Successfully assigned e2e-test-router-scoped-5l86w/router-override to ip-10-0-130-54.us-west-2.compute.internal Normal Pulled 3m kubelet, ip-10-0-130-54.us-west-2.compute.internal Container image "openshift/origin-haproxy-router" already present on machine Normal Created 3m kubelet, ip-10-0-130-54.us-west-2.compute.internal Created container Normal Started 3m kubelet, ip-10-0-130-54.us-west-2.compute.internal Started container Jul 9 19:39:38.676: INFO: Running 'oc logs --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-5l86w pod/router-override -c router -n e2e-test-router-scoped-5l86w' Jul 9 19:39:38.991: INFO: Log for pod "router-override"/"router" ----> I0710 02:36:27.011117 1 template.go:244] Starting template router (v3.11.0-alpha.0+cd9faee-274) I0710 02:36:27.011402 1 merged_client_builder.go:122] Using in-cluster configuration I0710 02:36:27.100465 1 merged_client_builder.go:122] Using in-cluster configuration I0710 02:36:27.111121 1 reflector.go:202] Starting reflector *core.Service (30m0s) from github.com/openshift/origin/pkg/router/template/service_lookup.go:32 I0710 02:36:27.111158 1 reflector.go:240] Listing and watching *core.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:32 I0710 02:36:27.433945 1 router.go:154] Creating a new template router, writing to /var/lib/haproxy/router I0710 02:36:27.480125 1 router.go:228] Template router will coalesce reloads within 5s of each other I0710 02:36:27.480162 1 router.go:278] Router default cert from router container I0710 02:36:27.480171 1 router.go:215] Reading persisted state I0710 02:36:27.480207 1 router.go:219] Committing state I0710 02:36:27.480216 1 router.go:333] Writing the router state I0710 02:36:27.480710 1 router.go:340] Writing the router config I0710 02:36:27.480723 1 router.go:397] Committing router certificate manager changes... 
I0710 02:36:27.480731 1 router.go:402] Router certificate manager config committed I0710 02:36:27.541038 1 router.go:354] Reloading the router I0710 02:36:31.448301 1 router.go:454] Router reloaded: - Checking http://localhost:80 ... - Health check ok : 0 retry attempt(s). I0710 02:36:31.448342 1 router.go:250] Router is only using resources in namespace e2e-test-router-scoped-5l86w I0710 02:36:31.525483 1 reflector.go:202] Starting reflector *core.Endpoints (30m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111 I0710 02:36:31.525514 1 reflector.go:240] Listing and watching *core.Endpoints from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111 I0710 02:36:31.525998 1 reflector.go:202] Starting reflector *route.Route (30m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111 I0710 02:36:31.795091 1 reflector.go:240] Listing and watching *route.Route from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111 I0710 02:36:31.795059 1 shared_informer.go:123] caches populated I0710 02:36:31.916742 1 router.go:115] changing route first.example.com to route-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:31.922639 1 router.go:115] changing route second.example.com to route-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:31.922655 1 router.go:115] changing route y.a.null.ptr to route-override-domain-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:31.922665 1 router.go:115] changing route main.void.str to route-override-domain-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:31.997088 1 shared_informer.go:123] caches populated I0710 02:36:31.997129 1 plugin.go:168] Processing 1 Endpoints for e2e-test-router-scoped-5l86w/endpoints (ADDED) I0710 02:36:31.997146 1 plugin.go:171] Subset 0 : core.EndpointSubset{Addresses:[]core.EndpointAddress{core.EndpointAddress{IP:"10.2.2.72", Hostname:"", 
NodeName:(*string)(0xc4208305d0), TargetRef:(*core.ObjectReference)(0xc420585ce0)}}, NotReadyAddresses:[]core.EndpointAddress(nil), Ports:[]core.EndpointPort{core.EndpointPort{Name:"", Port:8080, Protocol:"TCP"}}} I0710 02:36:31.997188 1 plugin.go:180] Modifying endpoints for e2e-test-router-scoped-5l86w/endpoints I0710 02:36:31.997239 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e I0710 02:36:31.997250 1 router_controller.go:238] Alias: route-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:31.997259 1 router_controller.go:240] Path: /Letter I0710 02:36:31.997266 1 router_controller.go:242] Event: ADDED rv=93248 I0710 02:36:31.997359 1 router.go:695] Adding route e2e-test-router-scoped-5l86w/route-1 I0710 02:36:31.997372 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-2 -> endpoints 074d4347-83ea-11e8-aa51-0af96768d57e I0710 02:36:31.997380 1 router_controller.go:238] Alias: route-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:31.997387 1 router_controller.go:240] Path: /Letter I0710 02:36:31.997395 1 router_controller.go:242] Event: ADDED rv=93249 I0710 02:36:31.997431 1 router.go:695] Adding route e2e-test-router-scoped-5l86w/route-2 I0710 02:36:31.997441 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-1 -> endpoints 075474c8-83ea-11e8-aa51-0af96768d57e I0710 02:36:31.997449 1 router_controller.go:238] Alias: route-override-domain-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:31.997457 1 router_controller.go:240] Path: /Letter I0710 02:36:31.997464 1 router_controller.go:242] Event: ADDED rv=93252 I0710 02:36:31.997496 1 router.go:695] Adding route e2e-test-router-scoped-5l86w/route-override-domain-1 I0710 02:36:31.997505 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-2 -> endpoints 
075b890c-83ea-11e8-aa51-0af96768d57e I0710 02:36:31.997513 1 router_controller.go:238] Alias: route-override-domain-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:31.997520 1 router_controller.go:240] Path: /Letter I0710 02:36:31.997526 1 router_controller.go:242] Event: ADDED rv=93253 I0710 02:36:31.997556 1 router.go:695] Adding route e2e-test-router-scoped-5l86w/route-override-domain-2 I0710 02:36:31.997644 1 router_controller.go:50] Running router controller I0710 02:36:31.997653 1 router_controller.go:255] Router first sync complete I0710 02:36:31.997662 1 router.go:313] Router state synchronized for the first time I0710 02:36:31.997680 1 reaper.go:17] Launching reaper I0710 02:36:31.997757 1 writerlease.go:257] [1914865771] Lease owner or electing, running 0744cc19-83ea-11e8-aa51-0af96768d57e I0710 02:36:32.005648 1 plugin.go:168] Processing 1 Endpoints for e2e-test-router-scoped-5l86w/endpoints (ADDED) I0710 02:36:32.005667 1 plugin.go:171] Subset 0 : core.EndpointSubset{Addresses:[]core.EndpointAddress{core.EndpointAddress{IP:"10.2.2.72", Hostname:"", NodeName:(*string)(0xc4208305d0), TargetRef:(*core.ObjectReference)(0xc420585ce0)}}, NotReadyAddresses:[]core.EndpointAddress(nil), Ports:[]core.EndpointPort{core.EndpointPort{Name:"", Port:8080, Protocol:"TCP"}}} I0710 02:36:32.005713 1 plugin.go:180] Modifying endpoints for e2e-test-router-scoped-5l86w/endpoints I0710 02:36:32.005784 1 router.go:751] Ignoring change for e2e-test-router-scoped-5l86w/endpoints, endpoints are the same I0710 02:36:32.005822 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-1 -> endpoints 075474c8-83ea-11e8-aa51-0af96768d57e I0710 02:36:32.005831 1 router_controller.go:238] Alias: route-override-domain-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:32.005838 1 router_controller.go:240] Path: /Letter I0710 02:36:32.005845 1 router_controller.go:242] Event: ADDED rv=93252 I0710 02:36:32.005894 1 
router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-2 -> endpoints 075b890c-83ea-11e8-aa51-0af96768d57e I0710 02:36:32.005902 1 router_controller.go:238] Alias: route-override-domain-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:32.005909 1 router_controller.go:240] Path: /Letter I0710 02:36:32.005916 1 router_controller.go:242] Event: ADDED rv=93253 I0710 02:36:32.005943 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e I0710 02:36:32.005952 1 router_controller.go:238] Alias: route-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:32.005958 1 router_controller.go:240] Path: /Letter I0710 02:36:32.005965 1 router_controller.go:242] Event: ADDED rv=93248 I0710 02:36:32.005989 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-2 -> endpoints 074d4347-83ea-11e8-aa51-0af96768d57e I0710 02:36:32.005997 1 router_controller.go:238] Alias: route-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com I0710 02:36:32.006003 1 router_controller.go:240] Path: /Letter I0710 02:36:32.006010 1 router_controller.go:242] Event: ADDED rv=93249 I0710 02:36:32.010293 1 router.go:333] Writing the router state I0710 02:36:32.015472 1 router.go:340] Writing the router config I0710 02:36:32.015500 1 router.go:397] Committing router certificate manager changes... 
I0710 02:36:32.015509 1 router.go:402] Router certificate manager config committed
I0710 02:36:32.046791 1 status.go:152] admit: updated status of e2e-test-router-scoped-5l86w/route-1
I0710 02:36:32.076202 1 writerlease.go:290] [1914865771] Completed work for 0744cc19-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:32.076159795 +0000 UTC m=+67.355839422
I0710 02:36:32.076238 1 writerlease.go:257] [1914865771] Lease owner or electing, running 074d4347-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.116370 1 status.go:152] admit: updated status of e2e-test-router-scoped-5l86w/route-2
I0710 02:36:32.310162 1 writerlease.go:290] [1914865771] Completed work for 074d4347-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:32.310143576 +0000 UTC m=+67.589823330
I0710 02:36:32.310208 1 writerlease.go:257] [1914865771] Lease owner or electing, running 075474c8-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.364718 1 router.go:115] changing route first.example.com to route-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com
I0710 02:36:32.364747 1 router.go:115] changing route second.example.com to route-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com
I0710 02:36:32.364798 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.364818 1 router_controller.go:238] Alias: route-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com
I0710 02:36:32.364825 1 router_controller.go:240] Path: /Letter
I0710 02:36:32.364832 1 router_controller.go:242] Event: MODIFIED rv=93259
I0710 02:36:32.378607 1 status.go:152] admit: updated status of e2e-test-router-scoped-5l86w/route-override-domain-1
I0710 02:36:32.378635 1 writerlease.go:290] [1914865771] Completed work for 075474c8-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:32.378626547 +0000 UTC m=+67.658306106
I0710 02:36:32.378654 1 writerlease.go:257] [1914865771] Lease owner or electing, running 075b890c-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.650698 1 router.go:115] changing route y.a.null.ptr to route-override-domain-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com
I0710 02:36:32.650734 1 router.go:115] changing route main.void.str to route-override-domain-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com
I0710 02:36:32.734218 1 status.go:152] admit: updated status of e2e-test-router-scoped-5l86w/route-override-domain-2
I0710 02:36:32.735061 1 writerlease.go:290] [1914865771] Completed work for 075b890c-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:32.735046665 +0000 UTC m=+68.014726399
I0710 02:36:32.735126 1 writerlease.go:257] [1914865771] Lease owner or electing, running 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.735173 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-1
I0710 02:36:32.735229 1 writerlease.go:290] [1914865771] Completed work for 0744cc19-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:32.735046665 +0000 UTC m=+68.014726399
I0710 02:36:32.740828 1 router.go:354] Reloading the router
I0710 02:36:32.744633 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-2 -> endpoints 074d4347-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.746573 1 router_controller.go:238] Alias: route-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com
I0710 02:36:32.746799 1 router_controller.go:240] Path: /Letter
I0710 02:36:32.747664 1 router_controller.go:242] Event: MODIFIED rv=93260
I0710 02:36:32.747970 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-1 -> endpoints 075474c8-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.748803 1 router_controller.go:238] Alias: route-override-domain-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com
I0710 02:36:32.748855 1 router_controller.go:240] Path: /Letter
I0710 02:36:32.748949 1 router_controller.go:242] Event: MODIFIED rv=93262
I0710 02:36:32.749117 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-2 -> endpoints 075b890c-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.749364 1 router_controller.go:238] Alias: route-override-domain-2-e2e-test-router-scoped-5l86w.myapps.mycompany.com
I0710 02:36:32.749487 1 router_controller.go:240] Path: /Letter
I0710 02:36:32.749519 1 router_controller.go:242] Event: MODIFIED rv=93263
I0710 02:36:32.749646 1 writerlease.go:257] [1914865771] Lease owner or electing, running 074d4347-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.750262 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-2
I0710 02:36:32.750374 1 writerlease.go:290] [1914865771] Completed work for 074d4347-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:32.735046665 +0000 UTC m=+68.014726399
I0710 02:36:32.751319 1 writerlease.go:257] [1914865771] Lease owner or electing, running 075474c8-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.756080 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-override-domain-1
I0710 02:36:32.756101 1 writerlease.go:290] [1914865771] Completed work for 075474c8-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:32.735046665 +0000 UTC m=+68.014726399
I0710 02:36:32.756119 1 writerlease.go:257] [1914865771] Lease owner or electing, running 075b890c-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.756137 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-override-domain-2
I0710 02:36:32.756146 1 writerlease.go:290] [1914865771] Completed work for 075b890c-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:32.735046665 +0000 UTC m=+68.014726399
I0710 02:36:33.655355 1 reaper.go:24] Signal received: child exited
I0710 02:36:33.655400 1 reaper.go:32] Reaped process with pid 21
I0710 02:36:33.731347 1 router.go:115] changing route first.example.com to route-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com
I0710 02:36:33.731404 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:33.731415 1 router_controller.go:238] Alias: route-1-e2e-test-router-scoped-5l86w.myapps.mycompany.com
I0710 02:36:33.731423 1 router_controller.go:240] Path: /Letter
I0710 02:36:33.731429 1 router_controller.go:242] Event: MODIFIED rv=93268
I0710 02:36:33.731516 1 writerlease.go:257] [1914865771] Lease owner or electing, running 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:33.731539 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-1
I0710 02:36:33.731551 1 writerlease.go:290] [1914865771] Completed work for 0744cc19-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:32.735046665 +0000 UTC m=+68.014726399
I0710 02:36:34.903463 1 router.go:454] Router reloaded:
 - Checking http://localhost:80 ...
 - Health check ok : 0 retry attempt(s).
I0710 02:36:35.009103 1 reaper.go:24] Signal received: child exited
I0710 02:36:37.002107 1 router.go:333] Writing the router state
I0710 02:36:37.012953 1 router.go:340] Writing the router config
I0710 02:36:37.079117 1 router.go:397] Committing router certificate manager changes...
I0710 02:36:37.086045 1 router.go:402] Router certificate manager config committed
I0710 02:36:37.365735 1 router.go:354] Reloading the router
I0710 02:36:37.562166 1 reaper.go:24] Signal received: child exited
I0710 02:36:37.562278 1 reaper.go:32] Reaped process with pid 34
I0710 02:36:37.589132 1 reaper.go:24] Signal received: child exited
I0710 02:36:37.589184 1 router.go:454] Router reloaded:
 - Checking http://localhost:80 ...
 - Health check ok : 0 retry attempt(s).
<----end of log for "router-override"/"router"
Jul 9 19:39:38.991: INFO: Running 'oc describe --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-5l86w pod/router-override-domains'
Jul 9 19:39:39.777: INFO: Describing pod "router-override-domains"
Name:         router-override-domains
Namespace:    e2e-test-router-scoped-5l86w
Node:         ip-10-0-130-54.us-west-2.compute.internal/10.0.130.54
Start Time:   Mon, 09 Jul 2018 19:36:20 -0700
Labels:       test=router-override-domains
Annotations:  openshift.io/generated-by=OpenShiftNewApp
              openshift.io/scc=anyuid
Status:       Running
IP:           10.2.2.69
Containers:
  router:
    Container ID:  docker://11d14680d39f89057a57aac769d3b177bc996da72afa83a277e575096ac9ad8c
    Image:         openshift/origin-haproxy-router
    Image ID:      docker-pullable://openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5
    Ports:         80/TCP, 443/TCP, 1936/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      --name=test-override-domains
      --namespace=$(POD_NAMESPACE)
      --loglevel=4
      --override-domains=null.ptr,void.str
      --hostname-template=${name}-${namespace}.apps.veto.test
    State:          Running
      Started:      Mon, 09 Jul 2018 19:36:24 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  e2e-test-router-scoped-5l86w (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nftbd (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            True
  ContainersReady  True
  PodScheduled     True
Volumes:
  default-token-nftbd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nftbd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  Type    Reason     Age  From                                                Message
  ----    ------     ---- ----                                                -------
  Normal  Scheduled  3m   default-scheduler                                   Successfully assigned e2e-test-router-scoped-5l86w/router-override-domains to ip-10-0-130-54.us-west-2.compute.internal
  Normal  Pulled     3m   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Container image "openshift/origin-haproxy-router" already present on machine
  Normal  Created    3m   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Created container
  Normal  Started    3m   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Started container
Jul 9 19:39:39.777: INFO: Running 'oc logs --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-5l86w pod/router-override-domains -c router -n e2e-test-router-scoped-5l86w'
Jul 9 19:39:40.106: INFO: Log for pod "router-override-domains"/"router" ---->
I0710 02:36:26.424743 1 template.go:244] Starting template router (v3.11.0-alpha.0+cd9faee-274)
I0710 02:36:26.436056 1 merged_client_builder.go:122] Using in-cluster configuration
I0710 02:36:26.437215 1 merged_client_builder.go:122] Using in-cluster configuration
I0710 02:36:26.437787 1 reflector.go:202] Starting reflector *core.Service (30m0s) from github.com/openshift/origin/pkg/router/template/service_lookup.go:32
I0710 02:36:26.437817 1 reflector.go:240] Listing and watching *core.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:32
I0710 02:36:26.451578 1 router.go:154] Creating a new template router, writing to /var/lib/haproxy/router
I0710 02:36:26.451689 1 router.go:228] Template router will coalesce reloads within 5s of each other
I0710 02:36:26.451717 1 router.go:278] Router default cert from router container
I0710 02:36:26.451726 1 router.go:215] Reading persisted state
I0710 02:36:26.451770 1 router.go:219] Committing state
I0710 02:36:26.451779 1 router.go:333] Writing the router state
I0710 02:36:26.455324 1 router.go:340] Writing the router config
I0710 02:36:26.455337 1 router.go:397] Committing router certificate manager changes...
I0710 02:36:26.455345 1 router.go:402] Router certificate manager config committed
I0710 02:36:26.460229 1 router.go:354] Reloading the router
I0710 02:36:27.479890 1 router.go:454] Router reloaded:
 - Checking http://localhost:80 ...
 - Health check ok : 0 retry attempt(s).
I0710 02:36:27.479922 1 router.go:250] Router is only using resources in namespace e2e-test-router-scoped-5l86w
I0710 02:36:27.538865 1 reflector.go:202] Starting reflector *core.Endpoints (30m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111
I0710 02:36:27.538891 1 reflector.go:240] Listing and watching *core.Endpoints from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111
I0710 02:36:27.661242 1 reflector.go:202] Starting reflector *route.Route (30m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111
I0710 02:36:27.670528 1 reflector.go:240] Listing and watching *route.Route from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111
I0710 02:36:27.883891 1 router.go:115] changing route y.a.null.ptr to route-override-domain-1-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:27.883923 1 router.go:115] changing route main.void.str to route-override-domain-2-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:28.193150 1 shared_informer.go:123] caches populated
I0710 02:36:28.587338 1 shared_informer.go:123] caches populated
I0710 02:36:28.587396 1 plugin.go:168] Processing 1 Endpoints for e2e-test-router-scoped-5l86w/endpoints (ADDED)
I0710 02:36:28.587413 1 plugin.go:171] Subset 0 : core.EndpointSubset{Addresses:[]core.EndpointAddress{core.EndpointAddress{IP:"10.2.2.72", Hostname:"", NodeName:(*string)(0xc421171680), TargetRef:(*core.ObjectReference)(0xc4206e9dc0)}}, NotReadyAddresses:[]core.EndpointAddress(nil), Ports:[]core.EndpointPort{core.EndpointPort{Name:"", Port:8080, Protocol:"TCP"}}}
I0710 02:36:28.587467 1 plugin.go:180] Modifying endpoints for e2e-test-router-scoped-5l86w/endpoints
I0710 02:36:28.587509 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:28.587528 1 router_controller.go:238] Alias: first.example.com
I0710 02:36:28.587536 1 router_controller.go:240] Path: /Letter
I0710 02:36:28.587543 1 router_controller.go:242] Event: ADDED rv=93187
I0710 02:36:28.587644 1 router.go:695] Adding route e2e-test-router-scoped-5l86w/route-1
I0710 02:36:28.587665 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-2 -> endpoints 074d4347-83ea-11e8-aa51-0af96768d57e
I0710 02:36:28.587673 1 router_controller.go:238] Alias: second.example.com
I0710 02:36:28.587687 1 router_controller.go:240] Path: /Letter
I0710 02:36:28.587694 1 router_controller.go:242] Event: ADDED rv=93188
I0710 02:36:28.587731 1 router.go:695] Adding route e2e-test-router-scoped-5l86w/route-2
I0710 02:36:28.587748 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-1 -> endpoints 075474c8-83ea-11e8-aa51-0af96768d57e
I0710 02:36:28.587756 1 router_controller.go:238] Alias: route-override-domain-1-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:28.587763 1 router_controller.go:240] Path: /Letter
I0710 02:36:28.587769 1 router_controller.go:242] Event: ADDED rv=93189
I0710 02:36:28.587805 1 router.go:695] Adding route e2e-test-router-scoped-5l86w/route-override-domain-1
I0710 02:36:28.587821 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-2 -> endpoints 075b890c-83ea-11e8-aa51-0af96768d57e
I0710 02:36:28.587829 1 router_controller.go:238] Alias: route-override-domain-2-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:28.587836 1 router_controller.go:240] Path: /Letter
I0710 02:36:28.587842 1 router_controller.go:242] Event: ADDED rv=93197
I0710 02:36:28.587880 1 router.go:695] Adding route e2e-test-router-scoped-5l86w/route-override-domain-2
I0710 02:36:28.587990 1 router_controller.go:50] Running router controller
I0710 02:36:28.588008 1 router_controller.go:255] Router first sync complete
I0710 02:36:28.602076 1 router.go:313] Router state synchronized for the first time
I0710 02:36:28.602098 1 reaper.go:17] Launching reaper
I0710 02:36:28.602182 1 writerlease.go:257] [1457571549] Lease owner or electing, running 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:28.602960 1 plugin.go:168] Processing 1 Endpoints for e2e-test-router-scoped-5l86w/endpoints (ADDED)
I0710 02:36:28.602987 1 plugin.go:171] Subset 0 : core.EndpointSubset{Addresses:[]core.EndpointAddress{core.EndpointAddress{IP:"10.2.2.72", Hostname:"", NodeName:(*string)(0xc421171680), TargetRef:(*core.ObjectReference)(0xc4206e9dc0)}}, NotReadyAddresses:[]core.EndpointAddress(nil), Ports:[]core.EndpointPort{core.EndpointPort{Name:"", Port:8080, Protocol:"TCP"}}}
I0710 02:36:28.617439 1 plugin.go:180] Modifying endpoints for e2e-test-router-scoped-5l86w/endpoints
I0710 02:36:28.617464 1 router.go:751] Ignoring change for e2e-test-router-scoped-5l86w/endpoints, endpoints are the same
I0710 02:36:28.617506 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:28.617524 1 router_controller.go:238] Alias: first.example.com
I0710 02:36:28.617531 1 router_controller.go:240] Path: /Letter
I0710 02:36:28.617537 1 router_controller.go:242] Event: ADDED rv=93187
I0710 02:36:28.622082 1 router.go:333] Writing the router state
I0710 02:36:28.697129 1 router.go:340] Writing the router config
I0710 02:36:28.697166 1 router.go:397] Committing router certificate manager changes...
I0710 02:36:28.697175 1 router.go:402] Router certificate manager config committed
I0710 02:36:28.799645 1 status.go:152] admit: updated status of e2e-test-router-scoped-5l86w/route-1
I0710 02:36:28.799683 1 writerlease.go:290] [1457571549] Completed work for 0744cc19-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:28.799671683 +0000 UTC m=+64.389230557
I0710 02:36:28.799725 1 writerlease.go:257] [1457571549] Lease owner or electing, running 074d4347-83ea-11e8-aa51-0af96768d57e
I0710 02:36:29.473788 1 status.go:152] admit: updated status of e2e-test-router-scoped-5l86w/route-2
I0710 02:36:29.473818 1 writerlease.go:290] [1457571549] Completed work for 074d4347-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.473807704 +0000 UTC m=+65.063366590
I0710 02:36:29.473868 1 writerlease.go:257] [1457571549] Lease owner or electing, running 075474c8-83ea-11e8-aa51-0af96768d57e
I0710 02:36:29.537561 1 status.go:152] admit: updated status of e2e-test-router-scoped-5l86w/route-override-domain-1
I0710 02:36:29.537591 1 writerlease.go:290] [1457571549] Completed work for 075474c8-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.537583074 +0000 UTC m=+65.127141910
I0710 02:36:29.616659 1 writerlease.go:257] [1457571549] Lease owner or electing, running 075b890c-83ea-11e8-aa51-0af96768d57e
I0710 02:36:29.637831 1 status.go:152] admit: updated status of e2e-test-router-scoped-5l86w/route-override-domain-2
I0710 02:36:29.637855 1 writerlease.go:290] [1457571549] Completed work for 075b890c-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.63784577 +0000 UTC m=+65.227404608
I0710 02:36:29.763964 1 router.go:115] changing route y.a.null.ptr to route-override-domain-1-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:29.763990 1 router.go:115] changing route main.void.str to route-override-domain-2-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:29.892128 1 router.go:354] Reloading the router
I0710 02:36:30.213543 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-2 -> endpoints 074d4347-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.213573 1 router_controller.go:238] Alias: second.example.com
I0710 02:36:30.213581 1 router_controller.go:240] Path: /Letter
I0710 02:36:30.213589 1 router_controller.go:242] Event: ADDED rv=93188
I0710 02:36:30.213632 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-1 -> endpoints 075474c8-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.213641 1 router_controller.go:238] Alias: route-override-domain-1-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:30.213648 1 router_controller.go:240] Path: /Letter
I0710 02:36:30.213655 1 router_controller.go:242] Event: ADDED rv=93189
I0710 02:36:30.213708 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-2 -> endpoints 075b890c-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.213718 1 router_controller.go:238] Alias: route-override-domain-2-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:30.213725 1 router_controller.go:240] Path: /Letter
I0710 02:36:30.213731 1 router_controller.go:242] Event: ADDED rv=93197
I0710 02:36:30.213757 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.213766 1 router_controller.go:238] Alias: first.example.com
I0710 02:36:30.213773 1 router_controller.go:240] Path: /Letter
I0710 02:36:30.213779 1 router_controller.go:242] Event: MODIFIED rv=93248
I0710 02:36:30.213882 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-2 -> endpoints 074d4347-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.213893 1 router_controller.go:238] Alias: second.example.com
I0710 02:36:30.213900 1 router_controller.go:240] Path: /Letter
I0710 02:36:30.213906 1 router_controller.go:242] Event: MODIFIED rv=93249
I0710 02:36:30.213948 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-1 -> endpoints 075474c8-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.213957 1 router_controller.go:238] Alias: route-override-domain-1-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:30.213964 1 router_controller.go:240] Path: /Letter
I0710 02:36:30.213971 1 router_controller.go:242] Event: MODIFIED rv=93252
I0710 02:36:30.490158 1 writerlease.go:257] [1457571549] Lease owner or electing, running 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.490202 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-1
I0710 02:36:30.490217 1 writerlease.go:290] [1457571549] Completed work for 0744cc19-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.63784577 +0000 UTC m=+65.227404608
I0710 02:36:30.490246 1 writerlease.go:257] [1457571549] Lease owner or electing, running 074d4347-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.490262 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-2
I0710 02:36:30.695338 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-2 -> endpoints 075b890c-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.711165 1 router_controller.go:238] Alias: route-override-domain-2-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:30.711175 1 router_controller.go:240] Path: /Letter
I0710 02:36:30.711182 1 router_controller.go:242] Event: MODIFIED rv=93253
I0710 02:36:30.711296 1 writerlease.go:290] [1457571549] Completed work for 074d4347-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.63784577 +0000 UTC m=+65.227404608
I0710 02:36:30.711334 1 writerlease.go:257] [1457571549] Lease owner or electing, running 075474c8-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.711358 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-override-domain-1
I0710 02:36:30.711370 1 writerlease.go:290] [1457571549] Completed work for 075474c8-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.63784577 +0000 UTC m=+65.227404608
I0710 02:36:30.715095 1 writerlease.go:257] [1457571549] Lease owner or electing, running 075b890c-83ea-11e8-aa51-0af96768d57e
I0710 02:36:30.715120 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-override-domain-2
I0710 02:36:30.715130 1 writerlease.go:290] [1457571549] Completed work for 075b890c-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.63784577 +0000 UTC m=+65.227404608
I0710 02:36:31.822132 1 reaper.go:24] Signal received: child exited
I0710 02:36:31.822179 1 reaper.go:32] Reaped process with pid 21
I0710 02:36:32.729714 1 router.go:115] changing route y.a.null.ptr to route-override-domain-1-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:32.736729 1 router.go:115] changing route main.void.str to route-override-domain-2-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:32.736168 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.740894 1 router_controller.go:238] Alias: first.example.com
I0710 02:36:32.741032 1 router_controller.go:240] Path: /Letter
I0710 02:36:32.741211 1 router_controller.go:242] Event: MODIFIED rv=93259
I0710 02:36:32.741317 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-2 -> endpoints 074d4347-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.742093 1 router_controller.go:238] Alias: second.example.com
I0710 02:36:32.742148 1 router_controller.go:240] Path: /Letter
I0710 02:36:32.742184 1 router_controller.go:242] Event: MODIFIED rv=93260
I0710 02:36:32.742290 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-1 -> endpoints 075474c8-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.742069 1 writerlease.go:257] [1457571549] Lease owner or electing, running 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.744084 1 router_controller.go:238] Alias: route-override-domain-1-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:32.744259 1 router_controller.go:240] Path: /Letter
I0710 02:36:32.744307 1 router_controller.go:242] Event: MODIFIED rv=93262
I0710 02:36:32.744458 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-override-domain-2 -> endpoints 075b890c-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.744893 1 router_controller.go:238] Alias: route-override-domain-2-e2e-test-router-scoped-5l86w.apps.veto.test
I0710 02:36:32.744166 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-1
I0710 02:36:32.746076 1 router_controller.go:240] Path: /Letter
I0710 02:36:32.746411 1 router_controller.go:242] Event: MODIFIED rv=93263
I0710 02:36:32.746241 1 writerlease.go:290] [1457571549] Completed work for 0744cc19-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.63784577 +0000 UTC m=+65.227404608
I0710 02:36:32.747875 1 writerlease.go:257] [1457571549] Lease owner or electing, running 074d4347-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.748144 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-2
I0710 02:36:32.748899 1 writerlease.go:290] [1457571549] Completed work for 074d4347-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.63784577 +0000 UTC m=+65.227404608
I0710 02:36:32.749175 1 writerlease.go:257] [1457571549] Lease owner or electing, running 075474c8-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.749234 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-override-domain-1
I0710 02:36:32.749317 1 writerlease.go:290] [1457571549] Completed work for 075474c8-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.63784577 +0000 UTC m=+65.227404608
I0710 02:36:32.749560 1 writerlease.go:257] [1457571549] Lease owner or electing, running 075b890c-83ea-11e8-aa51-0af96768d57e
I0710 02:36:32.750083 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-override-domain-2
I0710 02:36:32.750322 1 writerlease.go:290] [1457571549] Completed work for 075b890c-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.63784577 +0000 UTC m=+65.227404608
I0710 02:36:33.070896 1 router.go:454] Router reloaded:
 - Checking http://localhost:80 ...
 - Health check ok : 0 retry attempt(s).
I0710 02:36:33.070985 1 reaper.go:24] Signal received: child exited
I0710 02:36:33.661005 1 router.go:333] Writing the router state
I0710 02:36:33.668274 1 router.go:340] Writing the router config
I0710 02:36:33.668303 1 router.go:397] Committing router certificate manager changes...
I0710 02:36:33.668312 1 router.go:402] Router certificate manager config committed
I0710 02:36:33.984657 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:34.007099 1 router_controller.go:238] Alias: first.example.com
I0710 02:36:34.007116 1 router_controller.go:240] Path: /Letter
I0710 02:36:34.007124 1 router_controller.go:242] Event: MODIFIED rv=93268
I0710 02:36:34.007453 1 writerlease.go:257] [1457571549] Lease owner or electing, running 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:34.007502 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-1
I0710 02:36:34.007516 1 writerlease.go:290] [1457571549] Completed work for 0744cc19-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:29.63784577 +0000 UTC m=+65.227404608
I0710 02:36:34.377714 1 router.go:354] Reloading the router
I0710 02:36:36.292058 1 reaper.go:24] Signal received: child exited
I0710 02:36:36.292112 1 reaper.go:32] Reaped process with pid 34
I0710 02:36:36.435127 1 reaper.go:24] Signal received: child exited
I0710 02:36:36.435243 1 router.go:454] Router reloaded:
 - Checking http://localhost:80 ...
 - Health check ok : 0 retry attempt(s).
<----end of log for "router-override-domains"/"router"
Jul 9 19:39:40.107: INFO: Running 'oc describe --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-5l86w pod/router-scoped'
Jul 9 19:39:40.468: INFO: Describing pod "router-scoped"
Name:         router-scoped
Namespace:    e2e-test-router-scoped-5l86w
Node:         ip-10-0-130-54.us-west-2.compute.internal/10.0.130.54
Start Time:   Mon, 09 Jul 2018 19:36:20 -0700
Labels:       test=router-scoped
Annotations:  openshift.io/generated-by=OpenShiftNewApp
              openshift.io/scc=anyuid
Status:       Running
IP:           10.2.2.70
Containers:
  router:
    Container ID:  docker://9304a6271be6409d409329155dddc1da46fe3b3829ec4d0e270dfeabc4413b5a
    Image:         openshift/origin-haproxy-router
    Image ID:      docker-pullable://openshift/origin-haproxy-router@sha256:485fa86ac97b0d289411b3216fb8970989cd580817ebb5fcbb0f83a6dc2466f5
    Ports:         80/TCP, 443/TCP, 1936/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      --name=test-scoped
      --namespace=$(POD_NAMESPACE)
      --loglevel=4
      --labels=select=first
    State:          Running
      Started:      Mon, 09 Jul 2018 19:36:24 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  e2e-test-router-scoped-5l86w (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nftbd (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            True
  ContainersReady  True
  PodScheduled     True
Volumes:
  default-token-nftbd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nftbd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  Type    Reason     Age  From                                                Message
  ----    ------     ---- ----                                                -------
  Normal  Scheduled  3m   default-scheduler                                   Successfully assigned e2e-test-router-scoped-5l86w/router-scoped to ip-10-0-130-54.us-west-2.compute.internal
  Normal  Pulled     3m   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Container image "openshift/origin-haproxy-router" already present on machine
  Normal  Created    3m   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Created container
  Normal  Started    3m   kubelet, ip-10-0-130-54.us-west-2.compute.internal  Started container
Jul 9 19:39:40.468: INFO: Running 'oc logs --config=/home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig --namespace=e2e-test-router-scoped-5l86w pod/router-scoped -c router -n e2e-test-router-scoped-5l86w'
Jul 9 19:39:40.742: INFO: Log for pod "router-scoped"/"router" ---->
I0710 02:36:29.776005 1 template.go:244] Starting template router (v3.11.0-alpha.0+cd9faee-274)
I0710 02:36:29.786162 1 merged_client_builder.go:122] Using in-cluster configuration
I0710 02:36:29.827389 1 merged_client_builder.go:122] Using in-cluster configuration
I0710 02:36:29.831236 1 reflector.go:202] Starting reflector *core.Service (30m0s) from github.com/openshift/origin/pkg/router/template/service_lookup.go:32
I0710 02:36:29.831273 1 reflector.go:240] Listing and watching *core.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:32
I0710 02:36:30.783916 1 router.go:154] Creating a new template router, writing to /var/lib/haproxy/router
I0710 02:36:30.921203 1 router.go:228] Template router will coalesce reloads within 5s of each other
I0710 02:36:30.921245 1 router.go:278] Router default cert from router container
I0710 02:36:30.921254 1 router.go:215] Reading persisted state
I0710 02:36:30.921289 1 router.go:219] Committing state
I0710 02:36:30.921298 1 router.go:333] Writing the router state
I0710 02:36:30.921783 1 router.go:340] Writing the router config
I0710 02:36:30.921796 1 router.go:397] Committing router certificate manager changes...
I0710 02:36:30.921803 1 router.go:402] Router certificate manager config committed
I0710 02:36:30.963810 1 router.go:354] Reloading the router
I0710 02:36:33.050233 1 router.go:454] Router reloaded:
 - Checking http://localhost:80 ...
 - Health check ok : 0 retry attempt(s).
I0710 02:36:33.050273 1 router.go:250] Router is only using resources in namespace e2e-test-router-scoped-5l86w
I0710 02:36:33.077421 1 reflector.go:202] Starting reflector *core.Endpoints (30m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111
I0710 02:36:33.077449 1 reflector.go:240] Listing and watching *core.Endpoints from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111
I0710 02:36:33.077930 1 reflector.go:202] Starting reflector *route.Route (30m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111
I0710 02:36:33.077954 1 reflector.go:240] Listing and watching *route.Route from github.com/openshift/origin/pkg/router/controller/factory/factory.go:111
I0710 02:36:33.378101 1 shared_informer.go:123] caches populated
I0710 02:36:33.593068 1 shared_informer.go:123] caches populated
I0710 02:36:33.593112 1 plugin.go:168] Processing 1 Endpoints for e2e-test-router-scoped-5l86w/endpoints (ADDED)
I0710 02:36:33.593129 1 plugin.go:171] Subset 0 : core.EndpointSubset{Addresses:[]core.EndpointAddress{core.EndpointAddress{IP:"10.2.2.72", Hostname:"", NodeName:(*string)(0xc4200b9b80), TargetRef:(*core.ObjectReference)(0xc4208923f0)}}, NotReadyAddresses:[]core.EndpointAddress(nil), Ports:[]core.EndpointPort{core.EndpointPort{Name:"", Port:8080, Protocol:"TCP"}}}
I0710 02:36:33.593205 1 plugin.go:180] Modifying endpoints for e2e-test-router-scoped-5l86w/endpoints
I0710 02:36:33.593243 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:33.593260 1 router_controller.go:238] Alias: first.example.com
I0710 02:36:33.593267 1 router_controller.go:240] Path: /Letter
I0710 02:36:33.593274 1 router_controller.go:242] Event: ADDED rv=93259
I0710 02:36:33.593360 1 router.go:695] Adding route e2e-test-router-scoped-5l86w/route-1
I0710 02:36:33.593449 1 router_controller.go:50] Running router controller
I0710 02:36:33.593465 1 router_controller.go:255] Router first sync complete
I0710 02:36:33.593477 1 router.go:313] Router state synchronized for the first time
I0710 02:36:33.593500 1 reaper.go:17] Launching reaper
I0710 02:36:33.600113 1 plugin.go:168] Processing 1 Endpoints for e2e-test-router-scoped-5l86w/endpoints (ADDED)
I0710 02:36:33.600137 1 plugin.go:171] Subset 0 : core.EndpointSubset{Addresses:[]core.EndpointAddress{core.EndpointAddress{IP:"10.2.2.72", Hostname:"", NodeName:(*string)(0xc4200b9b80), TargetRef:(*core.ObjectReference)(0xc4208923f0)}}, NotReadyAddresses:[]core.EndpointAddress(nil), Ports:[]core.EndpointPort{core.EndpointPort{Name:"", Port:8080, Protocol:"TCP"}}}
I0710 02:36:33.600213 1 plugin.go:180] Modifying endpoints for e2e-test-router-scoped-5l86w/endpoints
I0710 02:36:33.600239 1 router.go:751] Ignoring change for e2e-test-router-scoped-5l86w/endpoints, endpoints are the same
I0710 02:36:33.600287 1 writerlease.go:257] [471831275] Lease owner or electing, running 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:33.690212 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:33.826061 1 router_controller.go:238] Alias: first.example.com
I0710 02:36:33.826083 1 router_controller.go:240] Path: /Letter
I0710 02:36:33.826091 1 router_controller.go:242] Event: ADDED rv=93259
I0710 02:36:33.721104 1 router.go:333] Writing the router state
I0710 02:36:33.826372 1 router.go:340] Writing the router config
I0710 02:36:33.826397 1 router.go:397] Committing router certificate manager changes...
I0710 02:36:33.826406 1 router.go:402] Router certificate manager config committed
I0710 02:36:33.825881 1 status.go:152] admit: updated status of e2e-test-router-scoped-5l86w/route-1
I0710 02:36:33.990138 1 writerlease.go:290] [471831275] Completed work for 0744cc19-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:33.99011454 +0000 UTC m=+69.624225244
I0710 02:36:34.184093 1 router_controller.go:237] Processing route: e2e-test-router-scoped-5l86w/route-1 -> endpoints 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:34.184119 1 router_controller.go:238] Alias: first.example.com
I0710 02:36:34.184127 1 router_controller.go:240] Path: /Letter
I0710 02:36:34.184134 1 router_controller.go:242] Event: MODIFIED rv=93268
I0710 02:36:34.184234 1 writerlease.go:257] [471831275] Lease owner or electing, running 0744cc19-83ea-11e8-aa51-0af96768d57e
I0710 02:36:34.184278 1 status.go:134] admit: no changes to route needed: e2e-test-router-scoped-5l86w/route-1
I0710 02:36:34.184291 1 writerlease.go:290] [471831275] Completed work for 0744cc19-83ea-11e8-aa51-0af96768d57e in state=1 tick=0 expires=2018-07-10 02:37:33.99011454 +0000 UTC m=+69.624225244
I0710 02:36:34.184374 1 router.go:354] Reloading the router
I0710 02:36:36.399091 1 reaper.go:24] Signal received: child exited
I0710 02:36:36.399194 1 reaper.go:32] Reaped process with pid 21
I0710 02:36:36.408405 1 router.go:454] Router reloaded:
 - Checking http://localhost:80 ...
 - Health check ok : 0 retry attempt(s).
I0710 02:36:36.434429 1 reaper.go:24] Signal received: child exited
I0710 02:36:38.608068 1 router.go:333] Writing the router state
I0710 02:36:38.608320 1 router.go:340] Writing the router config
I0710 02:36:38.608348 1 router.go:397] Committing router certificate manager changes...
I0710 02:36:38.608357 1 router.go:402] Router certificate manager config committed
I0710 02:36:38.762488 1 router.go:354] Reloading the router
I0710 02:36:39.485912 1 reaper.go:24] Signal received: child exited
I0710 02:36:39.485954 1 reaper.go:32] Reaped process with pid 34
I0710 02:36:39.490122 1 router.go:454] Router reloaded:
 - Checking http://localhost:80 ...
 - Health check ok : 0 retry attempt(s).
I0710 02:36:39.490171 1 reaper.go:24] Signal received: child exited
<----end of log for "router-scoped"/"router"
[AfterEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/cli.go:72
STEP: Deleting namespaces
Jul 9 19:39:40.824: INFO: namespace : e2e-test-router-scoped-5l86w api call to delete is complete
STEP: Waiting for namespaces to vanish
[AfterEach] [Conformance][Area:Networking][Feature:Router]
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Dumping a list of prepulled images on each node...
Jul 9 19:39:52.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• Failure [215.345 seconds]
[Conformance][Area:Networking][Feature:Router]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:26
  The HAProxy router
  /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:67
    should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel] [It]
    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:68

    Expected error:
        <*errors.errorString | 0xc4205a80b0>: {
            s: "last response from server was not 200:\n",
        }
        last response from server was not 200:
    not to have occurred

    /home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:96
------------------------------
Jul 9 19:39:52.953: INFO: Running AfterSuite actions on all nodes
Jul 9 19:39:52.954: INFO: Running AfterSuite actions on node 1

Summarizing 18 Failures:

[Fail] [Conformance][templates] templateinstance impersonation tests [It] should pass impersonation update tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:322

[Fail] [k8s.io] Sysctls [It] should reject invalid sysctls [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:177

[Fail] [Conformance][templates] templateinstance impersonation tests [It] should pass impersonation creation tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:241

[Fail] [Conformance][templates] templateservicebroker security test [BeforeEach] should pass security tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:52

[Fail] [k8s.io] Sysctls [It] should support sysctls [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:98

[Fail] [k8s.io] KubeletManagedEtcHosts [It] should test kubelet managed /etc/hosts file [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/exec_util.go:104

[Fail] [sig-storage] HostPath [It] should support subPath [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:2290

[Fail] [Feature:Builds][Conformance] imagechangetriggers [It] imagechangetriggers should trigger builds of all types [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/imagechangetriggers.go:44

[Fail] [Feature:Prometheus][Feature:Builds] Prometheus when installed to the cluster [It] should start and expose a secured proxy and verify build metrics [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:160

[Fail] [Conformance][templates] templateservicebroker end-to-end test [BeforeEach] should pass an end-to-end test [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:63

[Fail] [k8s.io] Sysctls [It] should not launch unsafe, but not explicitly enabled sysctls on the node [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:207

[Fail] DNS [It] should answer endpoint and wildcard queries for the cluster [Conformance] [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/dns/dns.go:233

[Fail] [Feature:Prometheus][Conformance] Prometheus when installed to the cluster [It] should start and expose a secured proxy and unsecured metrics [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus.go:117

[Fail] [Feature:Builds][Conformance][valueFrom] process valueFrom in build strategy environment variables [It] should successfully resolve valueFrom in docker build environment variables [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/builds/valuefrom.go:95

[Fail] [Conformance][templates] templateservicebroker bind test [BeforeEach] should pass bind tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:46

[Fail] [k8s.io] Sysctls [It] should support unsafe sysctls which are actually whitelisted [Suite:openshift/conformance/parallel] [Suite:k8s]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:139

[Fail] [Conformance][templates] templateinstance security tests [It] should pass security tests [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_security.go:273

[Fail] [Conformance][Area:Networking][Feature:Router] The HAProxy router [It] should serve the correct routes when scoped to a single namespace and label set [Suite:openshift/conformance/parallel]
/home/yifan/gopher/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:96

Ran 213 of 447 Specs in 1869.172 seconds
FAIL! -- 195 Passed | 18 Failed | 0 Pending | 234 Skipped

Ginkgo ran 1 suite in 31m10.079080769s
Test Suite Failed
[INFO] [19:39:52-0700] Running serial tests
I0709 19:39:53.224466 20851 test.go:86] Extended test version v3.10.0-alpha.0+e63afaa-1228-dirty
DEBUG: outputdir=
I0709 19:39:54.622813 21787 test.go:86] Extended test version v3.10.0-alpha.0+e63afaa-1228-dirty
Running Suite: Extended
=======================
Random Seed: 1531190394 - Will randomize all specs
Will run 0 of 447 specs

Running in parallel across 5 nodes

I0709 19:39:54.688951 21787 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
Jul 9 19:39:54.688: INFO: >>> kubeConfig: /home/yifan/tectonic-installer/yifan-test-cluster/generated/auth/kubeconfig
Jul 9 19:39:54.692: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Jul 9 19:39:55.039: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 9 19:39:55.423: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 9 19:39:55.424: INFO: expected 7 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Jul 9 19:39:55.466: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Jul 9 19:39:55.466: INFO: Dumping network health container logs from all nodes...
Jul 9 19:39:55.509: INFO: e2e test version: v1.10.0+b81c8f8
Jul 9 19:39:55.548: INFO: kube-apiserver version: v1.11.0+d4cacc0
I0709 19:39:55.548774 21787 e2e.go:56] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jul 9 19:39:55.553: INFO: Running AfterSuite actions on all nodes
Jul 9 19:39:55.553: INFO: Running AfterSuite actions on node 1

Ran 0 of 447 Specs in 0.865 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 447 Skipped
PASS

Ginkgo ran 1 suite in 1.15450203s
Test Suite Passed
[INFO] [19:39:55-0700] [CLEANUP] Beginning cleanup routines...
[INFO] [19:39:55-0700] [CLEANUP] Dumping cluster events to _output/scripts/conformance/artifacts/events.txt
Logged into "https://yifan-test-cluster-api.coreservices.team.coreos.systems:6443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-public
    kube-system
    openshift
    openshift-infra
    openshift-node
    openshift-web-console
    tectonic-ingress
    tectonic-system

Using project "default".
[INFO] [19:39:57-0700] [CLEANUP] Dumping container logs to _output/scripts/conformance/logs/containers
[INFO] [19:39:57-0700] [CLEANUP] Truncating log files over 200M
[INFO] [19:39:57-0700] [CLEANUP] Stopping docker containers
[INFO] [19:39:57-0700] [CLEANUP] Removing docker containers
[INFO] [19:39:57-0700] [CLEANUP] Killing child processes
[ERROR] [19:39:57-0700] /home/yifan/gopher/src/github.com/openshift/origin/test/extended/conformance.sh exited with code 1 after 00h 31m 16s